GPRAMA specifies that APGs are to have ambitious targets that can be achieved within a 2-year period; a clearly identified agency official, known as a goal leader, who is responsible for achieving the goal; clearly defined quarterly milestones; and interim quarterly targets for performance measures, if more frequent updates of actual performance provide data of significant value. Other GPRAMA requirements provide additional information and context for the priority goals. For example, agencies are to describe how their APGs contribute to the agency’s long-term strategic goals, as well as any of the CAP goals developed by OMB, as applicable. This information can help illustrate how an agency’s efforts to achieve its priority goals fit within a broader, crosscutting context—both within the agency and across the federal government. In addition, agencies are to describe how they incorporated any input on their priority goals received during consultations with relevant congressional committees. GPRAMA also lays out a schedule for gradual implementation of its provisions, with a 3-year period of interim implementation following enactment in January 2011. It required agencies to identify their APGs and related information in their strategic plans and performance plans, published concurrently with the President’s Budget in February 2012. Agencies also were to provide information about their APGs for OMB to publish on Performance.gov by October 1, 2012, and are to update this information on at least a quarterly basis. OMB provided guidance to agencies on implementing the act’s provisions, including those related to APGs, in several memorandums and its annual Circular No. A-11 in both 2011 and 2012. In addition to OMB’s guidance, the Performance Improvement Council (PIC) shared practices related to developing and implementing APGs with agencies in 2011 and 2012. 
The PIC established the Goal Setting Working Group in May 2011 to assist agencies in setting their 2012 to 2013 APGs. The group produced a draft guide to goal setting, which included criteria for selecting priority goals as well as elements and examples of effective goal statements. In September 2012, the PIC also produced a draft best practices guide for developing milestones; the guide described the characteristics of milestones and provided several examples. For each APG, agencies were required, by GPRAMA or OMB guidance, to make available to OMB for publication on Performance.gov and in their strategic plans or performance plans (1) a performance goal with a target level of performance to be achieved in a 2-year time frame; (2) an explanation of how the goal contributes to agency strategic goals; and (3) the identification of an agency official as the goal leader responsible for achieving the goal. Agencies provided information about each of these requirements for all of the 102 APGs on Performance.gov included in our assessment—which represents an important accomplishment in the development of priority goals. Figure 1 illustrates how information on Performance.gov for one of OPM’s priority goals meets the three requirements. The full goal statement for the APG provides a targeted level of performance to achieve (“participation of at least 2 multi-state health plans in State Affordable Insurance Exchanges”) within a 2-year timeframe (“by October 1, 2013”). The layout of information on Performance.gov shows that this APG supports OPM’s strategic goal to “Improve Access to Health Insurance,” as part of its strategic objective to “contract with multi-state health plans to be offered on affordable insurance exchanges.” Finally, information on the site identifies OPM’s Director of Healthcare and Insurance as the goal leader for this APG. 
As an additional example, the Social Security Administration (SSA) included information about each of its APGs in an appendix of its fiscal year 2013 performance plan. As shown in figure 2, for SSA’s priority goal to ensure faster hearing decisions, the plan provides the targeted level of performance (“reduce the average time…to 270 days”) and the timeframe (“by the end of fiscal year 2013”) in the “Priority Goals” column. In the same column, SSA indicated that the goal is linked to performance measure 1.1c, which supports the agency’s strategic goal “Deliver Quality Disability Decisions and Services.” In the “Goal Leader(s)” column, SSA identifies the Executive Coordinator for Backlog Initiatives in the Office of Disability Adjudication and Review as the goal leader. SSA’s strategic plan for fiscal years 2013 to 2016 also identifies its APGs and how each supports an agency strategic goal. Figure 3 provides a table from the strategic plan that presents a list of goals that support each of its strategic goals and denotes those that are APGs with an asterisk. Our past work has shown that although the federal government faces a series of challenges that in many instances no single agency can address alone, agencies often encounter a range of challenges and barriers when they attempt to work collaboratively. Our annual reports on duplication, overlap, and fragmentation highlight a number of areas where a more crosscutting approach is needed—both across agencies and within a specific agency. We found that duplication and overlap occur because programs have been added incrementally over time to respond to new needs and challenges, without a strategy to minimize duplication, overlap, and fragmentation among them. Also, there are not always interagency mechanisms or strategies in place to coordinate programs that address crosscutting issues, which can lead to potentially duplicative, overlapping, and fragmented efforts. 
GPRAMA establishes a new framework for taking a crosscutting and integrated approach to improving government performance, and effective implementation of that framework could play an important role in clarifying desired outcomes, addressing performance that spans multiple organizations, and facilitating actions to reduce unnecessary overlap, duplication, and fragmentation. Two provisions in GPRAMA, in particular, direct agencies to link their APGs with crosscutting federal efforts. First, the act requires agencies to identify federal organizations, program activities, regulations, policies, and other activities—both internal and external to the agency—that contribute to each of their APGs and include this information in their performance plans and provide it to OMB for publication on Performance.gov. In addition, OMB’s 2012 guidance directs agencies to include tax expenditures in their identification of organizations and programs that contribute to their APGs, as part of their updates to Performance.gov. Since 1994, we have recommended greater scrutiny of tax expenditures, as periodic reviews could help determine how well specific tax expenditures work to achieve their goals and how their benefits and costs compare to those of programs with similar goals. Second, APGs are to be informed by the CAP goals. The act also requires agencies to demonstrate in their performance plans any alignment between their performance goals—including their APGs—and the CAP goals. Both of these provisions are important because they show how agencies are coordinating efforts toward a common crosscutting issue. As we have previously reported, uncoordinated program efforts can waste scarce funds, confuse and frustrate program customers, and limit the overall effectiveness of the federal effort. Agencies identified at least one internal contributor for each of their APGs, though agencies differed in the amount of detail they provided. 
For example, the National Science Foundation (NSF), as shown in figure 4, identified its Directorate of Engineering as a lead organization and its Innovation Corps activities and programs as contributing programs to its APG to increase the number of entrepreneurs emerging from university laboratories. DOT, in its fiscal year 2013 performance plan, organized the descriptions of its planned performance—including its APGs—into broad themes under its strategic goals. As shown in figure 5, DOT identified the operating administrations, activities, enabling legislation, regulations, and other resources that contribute to each theme. Similarly, DOT identified on Performance.gov a range of contributing programs to its APG to reduce the risk of aviation accidents (see figure 6). Agencies identified external contributors for 73 of the 102 APGs. When they did identify external contributors, agencies differed in the amount of detail they provided. The Department of State (State), for instance, identified in its fiscal year 2013 congressional budget justification external contributors for six of the eight APGs it jointly developed with the U.S. Agency for International Development (USAID). These external contributors are generally at the department/agency, component, or program level, such as the Department of Defense (DOD), the Department of Justice’s Office of Overseas Prosecutorial Development Assistance and Training, and the Department of Justice’s International Criminal Investigative Training Assistance Program, respectively (see figure 7). Similarly, State and USAID identified on Performance.gov external contributors to their APG to advance low emissions, climate-resilient development, such as the Department of Agriculture (USDA), the Forest Service (a USDA component), the Environmental Protection Agency, and international governmental and nongovernmental organizations (see figure 8). 
We did not verify that agencies included all relevant internal and external federal contributors to their APGs. However, it was not always clear why external contributors were not identified for 29 of the 102 APGs. In some instances this could be explained by the goal being internally focused. For example, the Department of the Interior listed no external contributors to its internally focused APG to “build the next generation of conservation and community leaders by supporting youth employment” at the department. However, our analysis indicates that 8 of the 29 APGs that lack external contributors are related to crosscutting areas that we have identified as at risk of potential fragmentation, overlap, or duplication. For example, NSF did not list any external contributors to its APG to develop a diverse and highly qualified science and technology workforce by having 80 percent of institutions funded through NSF’s undergraduate programs document the extent of use of proven instructional practices by September 30, 2013. Our past work has identified 209 programs across 13 federal agencies that are focused on science, technology, engineering, and mathematics education, some of which may have efforts related to those NSF is undertaking for this goal. In addition, our in-depth examination of a sample of 21 APGs identified several APGs related to our work on fragmentation, overlap, and duplication, including one where not all relevant contributors were identified. As we have previously reported, HUD, USDA, and the Department of the Treasury (Treasury) operate rental housing programs with overlapping purposes, although the products, areas served, and delivery methods differ. We recommended that these agencies collaborate further and document that collaboration in their strategic plans and performance plans. HUD and USDA generally agreed with the recommendations; Treasury did not provide comments. 
As illustrated in figure 9, HUD identified two tax expenditures (Treasury) as contributors to its APG targeted at preserving affordable rental housing—the only APG out of all 102 to have tax expenditures identified as external contributors. However, HUD did not identify USDA or its rental housing programs. The Department of Commerce (Commerce) and State both have export-related APGs, and noted on Performance.gov that their APGs contribute to the broader CAP goal to double U.S. exports by the end of 2014. However, our analysis indicates that 27 additional APGs appear to support at least one of the 14 interim CAP goals, but agencies did not describe this connection. In part, this could be a result of OMB’s guidance, which does not state the requirement for agencies to show the alignment between their performance goals—including their APGs—and the CAP goals. Instead, the guidance directs agencies to refer to Performance.gov, where the quarterly updates for the CAP goals will describe how the agency’s goals contribute to the CAP goal. While in a few instances CAP goals identified contributing APGs, this alignment was not also provided in the corresponding APG information on the site. For example, in the quarterly update published in December 2012, the export CAP goal identified the export-related APGs of Commerce and State—as well as that of USDA—as supporting the CAP goal’s strategies. Unlike Commerce and State, USDA did not describe how its export-related APG supports the broader export CAP goal. According to OMB staff, as the information presented on Performance.gov and the site’s functionality are expanded and enhanced, they expect to cross-reference related pieces of information, which they stated would include the connections between APGs and any related CAP goals. 
We have reported that communicating the relationship between individual agency goals and outcomes that cut across federal agencies provides an opportunity to clearly relate and address the contributions of alternative federal strategies. In addition, as mentioned above, it is important for agencies to identify areas in which they should be coordinating efforts to meet crosscutting goals, and we have reported that strategic plans and performance plans can be tools for doing so. Without OMB guidance directing agencies to describe how their performance goals—including APGs—support any relevant CAP goals, agencies may not understand the importance of examining how their efforts contribute to broader federal outcomes and planning for those contributions. Similarly, although we did not analyze whether agencies included all relevant internal and external contributors for their APGs, our work on potential areas of fragmentation, overlap, and duplication helped identify several examples where agencies did not list relevant external contributors. In addition, OMB’s review process does not systematically check whether agencies have identified all relevant contributors. This raises questions as to whether larger issues exist with the completeness of agencies’ listings of APG contributors. More importantly, without complete information related to both of these requirements, it is unclear whether agencies have properly planned to coordinate their efforts. As we noted earlier, uncoordinated program efforts can waste scarce funds, confuse and frustrate program customers, and limit the overall effectiveness of the federal effort. As noted earlier, agencies are to define a target level of performance to be achieved within a 2-year timeframe for each APG. GPRAMA requires agencies to establish a set of performance measures (called performance indicators in the act), which are used to assess progress toward each goal, at least annually. 
The act also requires agencies to review and report on progress toward their APGs on at least a quarterly basis. One way agencies can gauge progress this frequently is through the development of interim quarterly performance targets—that is, targets for each quarter that falls within the 2-year period. GPRAMA requires agencies to develop interim targets for performance measures when more frequent updates of actual performance would provide data of significant value to the federal government, Congress, or program partners at a reasonable level of administrative burden. The Senate committee report that accompanied the bill that ultimately was enacted states that the quarterly performance review requirement for APGs is intended to increase the use of performance information to improve performance and results. Our past work has shown that although agencies collect a significant amount of performance information, they have not consistently used that information to improve management and results. We have previously identified practices for enhancing agency use of performance information, one of which is to communicate performance information, including performance against targets, frequently and effectively. Frequent, regular communication can help managers to inform staff and other stakeholders of their commitment to achieve the agency’s goals and to keep these goals in mind as they pursue their day-to-day activities. Frequently reporting progress toward achieving performance targets also allows managers to review the information in time to make improvements. Without related targets, agencies may be unable to demonstrate to key stakeholders, including Congress, program partners, and the public, that they are tracking progress frequently enough to address any performance issues related to their APGs as they arise. In the December 2012 update to Performance.gov, agencies identified 241 performance measures for gauging progress toward 91 of the 102 APGs. 
For the 11 APGs without performance measures, the agencies stated that the goals are more appropriately measured by milestones. Although OMB’s guidance strongly encourages agencies to use quantitative measures, it allows agencies the flexibility to develop qualitative goal statements that are supported by milestones to assess progress. All 24 agencies have at least one APG with an accompanying performance measure. The frequency with which agencies collect performance information for the measures varies, as illustrated in table 1. Agencies collect and report results on a majority of the measures (166 out of 241, or 69 percent) on at least a quarterly basis. Measuring and reporting results this frequently represents substantial progress in agencies’ ability to use performance information in a more timely manner to pinpoint and act on improvement opportunities. Previously, GPRA required agencies to report their performance on an annual basis. Figure 10 illustrates one of the two measures the Department of Health and Human Services (HHS) identified for its APG to increase the number of health centers certified as Patient Centered Medical Homes. The measure is the percent of health centers with at least one site recognized as a Patient Centered Medical Home. HHS provided interim performance targets for each quarter of the goal period, beginning with an interim target of 4 percent in the first quarter of fiscal year 2012, with subsequent targets increasing toward the final target of 25 percent by the fourth quarter of fiscal year 2013. In addition, HHS reported its progress toward those interim targets on a quarterly basis. In other cases, agencies did not provide interim targets to show the level of performance expected for each underlying measure. For a majority of their measures (136, or 56 percent), agencies provided interim targets that align with their measures (e.g., quarterly targets for quarterly measures for each quarter during the 2-year period of the goal). 
For example, similar to the HHS example above, for 90 measures agencies provided quarterly targets to be achieved through the end of the goal period (fourth quarter of fiscal year 2013) for each measure. But for 77 measures (32 percent), agencies provided interim targets that align with their measures for only a portion of the 2-year timeframe. Finally, for 28 measures (12 percent), agencies did not provide interim targets that align with their measures for any portion of the 2-year timeframe. As previously stated, GPRAMA requires agencies to develop interim quarterly performance targets for their measures if more frequent updates of actual performance would provide data of significant value to the federal government, Congress, or program partners at a reasonable level of administrative burden. While OMB’s 2012 A-11 guidance provides a definition of “reasonable administrative burden,” it does not define what constitutes “data of significant value.” Therefore, it may be unclear to agencies when it would be appropriate to develop these targets. Furthermore, the guidance does not mention the interim quarterly performance target requirement. OMB staff told us that they expect agencies to provide such targets, and that they have communicated this expectation to agencies. OMB staff shared with us the user guide they developed for agencies to input data for publication on Performance.gov. According to the guide, indicators should include a target for each reporting period. The act requires each APG to have clearly defined quarterly milestones—scheduled events signifying the completion of a major deliverable or a set of related deliverables or a phase of work. Similar to performance measures, OMB’s guidance states that milestones will follow fiscal year quarters and notes that agencies may choose monthly milestones if preferred. 
In addition, a draft guide developed by the PIC describes characteristics of a good milestone, such as articulating concrete actions to be taken and being time-bound. Milestones can help agencies demonstrate that they have clear and fully developed strategies and are tracking progress to accomplish their goals. Such strategies, as identified in our past work, should (1) identify specific actions agencies are taking or plan to take to carry out their missions, (2) outline planned accomplishments, and (3) provide a schedule for their completion. Milestones can help show the connection between agencies’ day-to-day activities and their goals. In addition, by describing the strategies to be used to achieve results, including clearly defined milestones, and the resources to be applied to those strategies, agencies can provide information that would help key stakeholders, including Congress, better understand the relationship between resources and results. Without clearly defined milestones, agencies may have difficulty demonstrating that they have properly planned the actions needed, and are tracking progress, to accomplish their APGs. For 63 of the 102 APGs, agencies identified on Performance.gov clearly defined milestones for both the near term (presented as “Next Steps,” with a scheduled completion date in the next fiscal quarter) and longer term (presented as “Future Actions,” covering the remainder of the goal period). Figure 11 provides an illustrative example of an APG with both near-term and longer-term milestones. For its goal to improve awareness of VA services and benefits by increasing the timeliness and relevance of online information available to veterans, service members, and eligible beneficiaries, VA provided milestones scheduled for completion in the second quarter of fiscal year 2013 (near term) and the fourth quarter of fiscal year 2013 (the end of the goal period). 
Agencies did not always identify the quarterly milestones they planned to accomplish in order to achieve their APGs during the 2-year goal period. Furthermore, the presentation of information about milestones on Performance.gov does not always convey the time frames for expected action. For the remaining 39 goals, agencies did not provide specific completion dates in discussions of near-term or longer-term plans (or in some cases both) for accomplishing the goal. As figure 12 illustrates, the Small Business Administration (SBA) provided planned actions it intends to take in the near term and longer term to help accomplish its APG to process disaster applications efficiently. However, it is unclear when SBA intends to complete these actions. OMB’s 2012 A-11 guidance does not adequately reflect that clearly defined milestones should have scheduled completion dates and be publicly reported. The guidance states that APGs must have quarterly milestones to track progress, and it outlines the time frames that near-term and longer-term milestones should cover. However, the guidance does not state that agencies should provide specific completion dates for their milestones. In addition, contrary to GPRAMA, the guidance also states that agencies’ presentations of near-term milestones in the quarterly updates on Performance.gov are optional. When we asked OMB staff about this, they agreed that the designation of near-term milestones as optional for the quarterly updates was an error in A-11 guidance; it should have been required. They told us they intend to correct this error in the 2013 A-11 guidance. OMB staff further stated that OMB has communicated to agencies that near-term milestones are to be included in quarterly updates to Performance.gov in other ways. 
For example, the Performance.gov user guide states that “agencies will summarize how they plan to improve progress…and will include key milestones planned” for the near term, as part of the “Next Steps” portion of the APG information. Without clear and consistent guidance about developing and publishing milestones with clear completion dates, agencies may continue to omit key information about the actions they plan to undertake to accomplish their goals. Only 1 of the 24 agencies that developed APGs described how those goals reflect input from congressional consultations. GPRAMA states that APGs are to reflect the highest priorities of the agency as determined by the head of the agency and informed by consultations with Congress. Agencies are to consult with their relevant appropriations, authorization, and oversight committees when developing or making adjustments to their strategic plans, including their APGs, at least once every 2 years. Regarding this requirement, OMB’s guidance highlights that agencies should specifically consult with Congress on priority goal issue areas, and suggests agencies could start discussions of their next set of priority goals in the context of providing Congress an update on progress on the current APGs. The act also requires agencies to describe how input provided during congressional consultations was incorporated for each agency priority goal on Performance.gov. In addition, agencies are to similarly describe in their strategic plans how input from congressional consultations was incorporated into their goals. Without this information, it will be difficult to know whether an agency’s goals reflect congressional input, and therefore if the goals will provide useful information for congressional decision making. In the December 2012 update to Performance.gov, agencies provided information about how they engaged stakeholders during their goal development processes. 
Although 19 agencies stated that they included Congress as part of their stakeholder engagement, only SBA provided information about the input it received on its APGs from those consultations, as shown in figure 13. Two agencies, DOD and DOT, broadly mentioned that congressional input on agency goals was incorporated as appropriate. Education took a different approach and provided information about how it engaged stakeholders, including Congress in several instances, for each of its APGs. However, none of these agencies provided specific information on the input that was received or how it was incorporated. Several agencies also provided broad descriptions of their consultations in their strategic plans. For example, VA states in its plan that in November 2011 it initiated the process for consulting with Congress regarding the development of its agency priority goals and the VA strategic plan. Additionally, in the SSA strategic plan, the agency mentioned developing its plan in consultation with employees, stakeholders, advisory groups, and Congress. However, in none of these instances did agencies provide any further details about how these consultations influenced their strategic plans, including their APGs. OMB’s 2011 guidance, which covered the development of APGs for 2012 to 2013, stated that agencies should consult with Congress on priority goal issue areas prior to submitting draft goals to OMB. OMB staff told us that agencies formed a working group on consultations and that OMB staff worked with agencies on how to conduct the consultations well. However, OMB staff also told us that agencies were generally not comfortable publishing the input they received from Congress during their consultations for a variety of reasons, such as a reluctance to characterize competing or conflicting congressional interests. 
However, without such information, it is unclear that agencies have adequately engaged Congress and appropriately incorporated congressional feedback into their APGs. The consultation process was established by GPRA in 1993 so that agencies could take congressional views into account as appropriate. But as noted in the Senate committee report that accompanied the bill that ultimately became GPRAMA, little evidence existed that agencies had formally or significantly considered the input of key stakeholders when developing goals. The requirement for agencies to describe how congressional input was incorporated into their goals was intended to strengthen the consultation process. Our past work has noted the importance of considering Congress a partner in shaping agency goals. Successful consultations can create a basic understanding among stakeholders of the competing demands that confront most agencies, the limited resources available to them, and how those demands and resources require careful and continuous balancing. We have also reported that agency consultations with Congress on the identification of priority goals present an opportunity to develop such an understanding, especially given Congress’s role in setting national priorities and allocating the resources to achieve them. Although constructive communication across the branches of government can prove difficult, it is essential for sustaining federal performance improvement efforts. Consultations provide an important opportunity for Congress and the executive branch to work together to ensure that agency missions are focused; goals are specific, results-oriented, and address congressional concerns about performance; and strategies and funding expectations are appropriate and reasonable. Our prior audit work and, in some instances, that of relevant agency inspectors general provide additional perspective on the capacity of DHS, DOT, HUD, OPM, and VA to achieve their APGs. 
Our comments on each of these goals can be found in Appendixes II (DHS), III (HUD), IV (DOT), V (VA), and VI (OPM). Given the breadth of issues dealt with by these goals, our comments for each goal cover a range of topics. Despite this variation, several goals suggested that agencies continue to grapple with a common challenge from our past work related to measuring progress toward their goals and ensuring that the related performance data are accurate. For example, related to both HUD’s and VA’s APGs to assist in housing and reducing the number of homeless veterans, we have previously reported that HUD and VA lack key data on the population of homeless women veterans, including their characteristics and needs. This hampers VA’s ability to plan services effectively. In December 2011, we recommended that HUD and VA collaborate to ensure appropriate data are collected and use these data to strategically plan for services. VA concurred with this recommendation and, in April 2013, stated that it had taken additional actions to inform policy and operational decisions about homeless and at-risk women veterans. For example, VA stated that it worked with HUD to ensure that gender-specific data were collected during the 2013 Point in Time count of homeless persons. In another example related to DOT’s APG to reduce roadway fatalities, our past work has indicated that the quality of state traffic safety data systems varied across the six data systems maintained by states. In April 2010, we recommended that the National Highway Traffic Safety Administration (NHTSA) take steps to ensure that traffic records assessments provide an in-depth evaluation that is complete and consistent in addressing quality across all state traffic safety data systems. In response, NHTSA has taken a number of steps intended to improve the quality of the assessments and the data systems, and as of spring 2013, these efforts continue. 
Many of the meaningful results that the federal government seeks to achieve cannot be realized without effective coordination and collaboration both within and across agencies. Recognizing this, Congress and the executive branch established a new crosscutting and integrated approach for focusing on results and improving government performance with the passage and enactment of GPRAMA. The act’s requirements related to the development of APGs, along with more frequent reviewing and reporting of progress towards them, have the potential to address crosscutting and other federal performance management challenges our past work has identified. OMB and the PIC provided significant support to agencies during their development of APGs. For example, in 2011 and 2012, OMB developed and provided to agencies detailed guidance and memorandums to explain GPRAMA’s requirements and OMB’s expectations for APGs. In addition, the PIC formed the Goal Setting Working Group to assist agencies in setting their 2012 to 2013 APGs, and developed draft guides related to selecting APGs and developing milestones. Given these past efforts, both OMB and the PIC will have an important role moving forward to help ensure that agencies fully develop their APGs. Agencies have implemented key provisions related to their APGs. However, they have not always provided information about coordination and collaboration for crosscutting efforts. Agencies also did not always identify external contributors to their APGs. OMB’s review process for publishing APG information on Performance.gov checks to make sure agencies have identified at least one contributor, but it does not verify that agencies have identified all appropriate contributors. In addition, most agencies did not describe how their APGs contribute to CAP goals. OMB’s guidance does not adequately reflect that agencies should describe this linkage. 
Revised guidance could help ensure that agencies are aware of this requirement and provide information accordingly. Further, without providing information about external contributors or how APGs contribute to CAP goals, it is unclear whether agencies have adequately planned to address performance that spans multiple organizations, thereby putting these efforts at risk of duplication, overlap, and fragmentation and potentially wasting scarce funds and limiting the effectiveness of federal efforts. The requirement for agencies to review progress made toward their APGs on a quarterly basis is intended to increase agencies' use of the significant amount of performance information they collect. This, in turn, can help agencies improve their performance and results in a more timely manner—a challenge our work has previously highlighted. Agencies generally developed performance measures and are collecting performance information more frequently than in the past. This shows promise for their ability to use this information to support more timely decision making, especially when improvements are needed. However, agencies did not always identify related interim performance targets. This could be because OMB's A-11 guidance does not mention the interim quarterly performance target requirement or define when such targets would provide data of significant value and therefore be required, although other OMB guidance directs agencies to develop these targets. By revising its guidance documents to consistently include this information, OMB could help ensure that agencies are meeting its expectation (and the statutory requirement) that agencies identify interim performance targets for their APGs when doing so would provide data of significant value.
Without clear targets, which enable a comparison of results against planned performance, it is unclear whether agency managers have the information they need to determine if they are making sufficient progress toward each APG—a practice our past work has shown can lead to increased use of performance information and improved results. In addition, when agencies do not provide targets, key stakeholders have little assurance that an agency is actively managing its performance to make progress toward its APGs on a quarterly basis, thereby limiting oversight and accountability opportunities. GPRAMA requires agencies to develop clearly defined quarterly milestones for their APGs, which can help demonstrate that agencies have identified concrete actions needed to accomplish their goals and when those actions should be taken. However, agencies did not consistently publish milestones with scheduled completion dates, thereby missing an opportunity to assure the public and key stakeholders that they have appropriate strategies in place to achieve their APGs. Although OMB's 2012 A-11 guidance directs agencies to develop quarterly milestones for their APGs and outlines the near-term and longer-term timeframes those milestones should cover, the guidance does not state that agencies should provide specific completion dates for their milestones. In addition, due to an error, the guidance does not adequately reflect that GPRAMA requires these milestones to be published on Performance.gov. Although OMB has provided additional direction to agencies about publishing milestones, revising its A-11 guidance to correct the error would ensure that its direction to agencies is consistent and clear. Agencies should consult with Congress as a partner in developing their goals, in part to ensure that the resulting performance information is useful for congressional and executive branch decision making.
Agencies' consultations with Congress on their APGs provide opportunities for both parties to gain a better understanding of the competing demands that each confronts and how those demands and limited resources require careful and continuous balancing. However, most agencies did not provide information about how they incorporated any views or suggestions obtained through congressional consultations when developing their goals. This lack of information leaves it unclear whether agencies made serious attempts to engage with Congress on identifying the agencies' highest priorities for improved performance.

To ensure that agencies fully develop their APGs, we make the following seven recommendations to the Director of OMB. To ensure that agencies can (1) compare actual results to planned performance on a more frequent basis, as appropriate, and (2) demonstrate how they plan to accomplish their goals as well as contribute to the accomplishment of broader federal efforts, we recommend the Director of OMB revise relevant guidance documents to
- provide a definition of what constitutes "data of significant value";
- direct agencies to develop and publish on Performance.gov interim quarterly performance targets for their APG performance measures when the above definition applies;
- direct agencies to provide and publish on Performance.gov completion dates, both in the near term and longer term, for their milestones; and
- direct agencies to describe in their performance plans how the agency's performance goals—including APGs—contribute to any of the CAP goals.

When such revisions are made, we recommend the Director of OMB work with the PIC to test and implement these provisions.
In addition, as OMB works with agencies to enhance Performance.gov to include additional information about APGs, we recommend that the Director of OMB ensure that agencies adhere to OMB's guidance for website updates by providing
- complete information about the organizations, program activities, regulations, tax expenditures, policies, and other activities—both within and external to the agency—that contribute to each APG; and
- a description of how input from congressional consultations was incorporated into each APG.

We provided a draft of this report for review and comment to the Director of OMB and the five agencies covered by our in-depth review of APGs (DHS, DOT, HUD, VA, and OPM). All six agencies provided technical comments, which we incorporated as appropriate. In oral comments, staff from OMB's Office of Performance and Personnel Management agreed with the recommendations in our report. In its written comments, reproduced in appendix VII, VA concurred with the conclusions of our report and provided additional information about its strategic plan, and related performance measurement efforts, to reduce its backlog of compensation claims. However, as VA acknowledges in its comments, the plan does not provide individual performance goals and metrics for all initiatives, which we believe are necessary for VA to ensure it is spending its limited resources on proven methods to speed up disability claims and appeals processes. We also sought comments from relevant agencies covered by the illustrative examples used in this report. We received such comments from Commerce, NSF, SBA, State, and USAID, and incorporated them as appropriate. We are sending copies of this report to the Acting Director of OMB and the heads of the 24 agencies that developed APGs as well as interested congressional committees and other interested parties. This report will also be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-6806, or [email protected]. Specific questions about our comments on the sample of APGs contained in appendixes II through VI may be directed to the contact listed for each goal. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of our report. Key contributors to this report are listed in appendix VIII. The GPRA Modernization Act of 2010 (GPRAMA) requires us to review the act’s implementation at several critical junctures, and this report is part of a series of reviews planned around the requirement. Our specific objectives for this report were to (1) examine the extent to which agencies have implemented selected planning and reporting requirements and leading practices related to agency priority goals (APG); and (2) comment on the priority goals of several selected agencies based on our prior work and that of relevant agency inspectors general (IGs) and identify our relevant open recommendations and matters for congressional consideration. To address both objectives, we reviewed information about the APGs published on Performance.gov in February 2012 and updated in December 2012, as well as the updated strategic plans and performance plans agencies published in 2012 to reflect GPRAMA requirements. To assess the reliability of information presented on Performance.gov we reviewed relevant documentation and interviewed Office of Management and Budget (OMB) staff about data quality control procedures. We determined that the data were sufficiently reliable for the purposes of this report. 
In addition, for the first objective, we reviewed and assessed the implementation of selected planning and reporting requirements for 102 of the 103 APGs developed by the 24 agencies selected by OMB and released on Performance.gov concurrently with the President's fiscal year 2013 budget. The key GPRAMA planning and reporting requirements we used to assess implementation included whether the goal: (1) supports a federal government priority goal (also known as cross-agency priority or CAP goals); (2) contributes to agency strategic goals; (3) reflects input from congressional consultations; (4) identifies the federal organizations, program activities, regulations, policies, and other activities—both within and external to the agency—that contribute to the APG; (5) has a clearly identified agency official as the goal leader; (6) has targets for a 2-year timeframe; (7) has interim quarterly targets; and (8) has clearly defined quarterly milestones. In addition to the requirements of GPRAMA, our assessment of the extent of implementation was also informed by the Senate committee report accompanying GPRAMA, relevant OMB guidance, and our past work on how to effectively implement GPRA. To address the second objective, we selected 5 of the 24 agencies that developed APGs—the Departments of Homeland Security (DHS), Housing and Urban Development (HUD), Transportation (DOT), and Veterans Affairs (VA), and the Office of Personnel Management (OPM)—based on several factors, including the number and variety of types of federal programs involved in achieving the goals, such as direct service, grant, and regulatory programs, and whether the APGs were related to any of the CAP goals. We then reviewed the work that we and relevant IGs have conducted over a number of years related to each of the 21 APGs developed by the 5 agencies.
Because the 21 APGs are a non-generalizable sample of all APGs, our views on them cannot be generalized to the entire universe of APGs, but they provide insights about each of the 21 goals, as well as a theme common to several of them. We also updated the status of related key open recommendations and matters for congressional consideration.

We conducted our performance audit from July 2012 to April 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Based on our past work, as well as that of the DHS IG, we commented on each of DHS's three priority goals for 2012 to 2013:

1. Ensure resilience to disasters by strengthening disaster preparedness and response capabilities. By September 30, 2013, every state will have a current, DHS-certified threat and hazard identification and risk assessment (THIRA).

2. Improve the efficiency of the process to detain and remove criminal aliens from the United States. By September 30, 2013, reduce the average length of stay in immigration detention of all convicted criminal aliens prior to their removal from the country by 5 percent.

3. Strengthen aviation security counterterrorism capabilities by using intelligence driven information and risk-based decisions. By September 30, 2013, the Transportation Security Administration will expand the use of risk-based security initiatives to double the number of passengers going through expedited screening at airports, thereby enhancing the passenger experience.
For each goal, we also identify our related past reports and provide an update on the status of any open recommendations and matters for congressional consideration that we previously made related to the goal. We also identify a GAO contact for our work related to each goal.

Ensure resilience to disasters by strengthening disaster preparedness and response capabilities. By September 30, 2013, every state will have a current, DHS-certified threat and hazard identification and risk assessment.

Our past work has identified a number of challenges DHS faces in achieving its goal of strengthening disaster preparedness and response capabilities, including challenges associated with efforts to measure national preparedness capabilities and assess the impact of preparedness grant funding. These efforts involve the Federal Emergency Management Agency's (FEMA) preparedness grants. FEMA provides state and local governments with funding in the form of grants to enhance the capacity of state and local emergency responders to prevent where possible, respond to, and recover from natural disasters and terrorism incidents involving chemical, biological, radiological, nuclear, or explosive devices, or cyber attacks. States and urban areas are required to conduct a THIRA as a condition of receiving preparedness grant funding under programs including the State Homeland Security Program, Emergency Management Performance Grant Program, and Urban Area Security Initiative grant program, with THIRAs due by December 31, 2012. (See Department of Homeland Security, Federal Emergency Management Agency, Threat and Hazard Identification and Risk Assessment Guide: Comprehensive Preparedness Guide (CPG) 201, First Edition (Washington, D.C.: April 2012).) FEMA granted 6-month extensions to the December 31, 2012, deadline for five states and three local urban areas affected by Hurricane Sandy in late October 2012.
In March 2011, we suggested that Congress may wish to consider limiting preparedness grant funding to maintaining existing capabilities (as determined by FEMA) until FEMA completes a national preparedness assessment of capability gaps at each level of government based on tiered, capability-specific performance objectives to enable prioritization of grant funding. In April 2011, Congress passed the Continuing Appropriations Act that reduced funding for FEMA preparedness grants by $875 million from the amount requested in the President’s fiscal year 2011 budget. In December 2011, Congress passed the Consolidated Appropriations Act for fiscal year 2012 that reduced funding for FEMA preparedness grants by $1.28 billion from the amount requested in the President’s fiscal year 2012 budget. In March 2011, we also suggested that FEMA should complete a national preparedness assessment of capability gaps at each level based on tiered, capability-specific performance objectives to enable prioritization of grant funding, and FEMA could identify the potential costs for establishing and maintaining those capabilities at each level and determine what capabilities federal agencies should provide. In June 2012, the DHS OIG reported that FEMA did not have a system in place to determine the extent that Homeland Security Grant Program funds enhanced the states’ capabilities to prevent, deter, respond to, and recover from terrorist attacks, major disasters, and other emergencies before awarding more funds to the states. As of March 2013, FEMA has not yet completed a national preparedness assessment of capability gaps at each level. According to FEMA officials, the urban area, state, territorial, and tribal nation THIRAs that were due December 31, 2012 will serve as the basis for assessing national preparedness capabilities and gaps. FEMA will coordinate the review and analysis by a THIRA Analysis and Review Team. 
The team has begun meetings to discuss the common themes and findings, develop an initial proposed list of priorities for building and sustaining the core capabilities, and update that list as needed. These actions are part of the overall process for THIRA analysis and review, which FEMA officials said will help them develop guidance for developing capabilities to meet national priorities. In July 2009, we recommended that the FEMA Administrator develop and implement measures to assess how regional collaboration efforts funded by Urban Area Security Initiative grants build preparedness capabilities. FEMA contracted with the National Academy of Public Administration to provide recommendations for quantifiable performance measures to assess the effectiveness of the State Homeland Security Program and Urban Area Security Initiative grants. The National Academy of Public Administration issued its report in October 2011, and FEMA released the report in April 2012. The report recommends that FEMA conduct an assessment of collaborative approaches, in coordination with local jurisdictions, states, regions, and urban areas, and use the results to develop a scoring system for future quantitative or qualitative performance measures on collaboration. As of March 2013, FEMA has not yet taken action in response to this recommendation. However, according to FEMA officials, the THIRA process, along with planned coordination meetings with urban area, state, tribal, and territorial officials, will likely result in data that they can use to develop collaboration-related performance metrics. FEMA identified the "percent of high priority core planning capabilities rated as proficient by states and territories" as a measure of the agency's progress in achieving its priority goal. This measure reports the percent of high priority core capabilities related to planning that states and territories rate as proficient.
According to FEMA, this information is gathered from State Preparedness Reports (SPR)—annual self-assessments by states and territories of their levels of preparedness. However, as we reported in October 2010, FEMA officials stated that while the SPRs had enabled FEMA to gather data on the progress, capabilities, and accomplishments of a state's, the District of Columbia's, or a territory's preparedness program, these reports include self-reported data that may be subject to interpretation by the reporting organizations in each state and may not be readily comparable to other states' data. The officials also stated that they had taken steps to address these limitations, for example by creating a web-based survey tool to provide a more standardized way of collecting state preparedness information that will help them validate the information by comparing it across states. However, since April 2009, FEMA has made limited progress in assessing preparedness and capabilities and has not yet developed national preparedness capability requirements based on established metrics to provide a framework for these assessments, as we reported in March 2012.

National Preparedness: FEMA Has Made Progress in Improving Grant Management and Assessing Capabilities, but Challenges Remain. GAO-13-456T. Washington, D.C.: March 19, 2013.

Managing Preparedness Grants and Assessing National Capabilities: Continuing Challenges Impede FEMA's Progress. GAO-12-526T. Washington, D.C.: March 20, 2012.

Follow-up on 2011 Report: Status of Actions Taken to Reduce Duplication, Overlap, and Fragmentation, Save Tax Dollars, and Enhance Revenue. GAO-12-453SP. Washington, D.C.: February 28, 2012.

Government Operations: Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011.

FEMA Has Made Limited Progress in Efforts to Develop and Implement a System to Assess National Preparedness Capabilities. GAO-11-51R.
Washington, D.C.: October 29, 2010.

Urban Area Security Initiative: FEMA Lacks Measures to Assess How Regional Collaboration Efforts Build Preparedness Capabilities. GAO-09-651. Washington, D.C.: July 2, 2009.

National Preparedness: FEMA Has Made Progress, but Needs to Complete and Integrate Planning, Exercise, and Assessment Efforts. GAO-09-369. Washington, D.C.: April 30, 2009.

Stephen L. Caldwell, Director, Homeland Security and Justice Issues, [email protected], (202) 512-8777.

Improve the efficiency of the process to detain and remove criminal aliens from the United States. By September 30, 2013, reduce the average length of stay in immigration detention of all convicted criminal aliens prior to their removal from the country by 5 percent.

Our past work does not provide a basis to assess DHS's ability to improve the efficiency of the process to detain and remove criminal aliens from the United States. DHS has reported progress toward achieving this priority goal since fiscal year 2010. The DHS annual performance report for fiscal years 2011 to 2013—which also serves as the agency's annual performance plan—showed that the agency reduced the average length of stay in detention of all convicted criminal aliens prior to removal from the United States from 37 days in fiscal year 2010 to 34.7 days in fiscal year 2011. This represents a decline of more than 6 percent, exceeding the priority goal target of a 5 percent reduction. DHS attributed this decrease in part to expanded detention capacity in locations where detainee transfers occur most often, as the need to transfer a detainee from a facility in one location to another location increased the average length of stay by approximately 14 days. The DHS strategic plan for fiscal years 2012 to 2016 shows planned targets to maintain the same number of days (35) in fiscal years 2012, 2013, and 2016.
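The reported figures imply the following percentage decline. This is an illustrative back-of-the-envelope check of the reported numbers, not part of DHS's or GAO's methodology:

```python
# Illustrative check of the reported decline in average length of stay
# (figures from the DHS annual performance report cited above).
fy2010_avg_days = 37.0   # average length of stay, fiscal year 2010
fy2011_avg_days = 34.7   # average length of stay, fiscal year 2011

decline_pct = (fy2010_avg_days - fy2011_avg_days) / fy2010_avg_days * 100
print(round(decline_pct, 1))  # 6.2 -- exceeds the 5 percent target
```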
Similarly, the DHS annual performance report for fiscal years 2011 to 2013 shows the same planned target of less than or equal to 35 days for both fiscal year 2012 and fiscal year 2013. DHS reported that it continues to focus on the development of a detention system that has the right number and type of facilities in the right locations to align with enforcement and removal activities. DHS reported making improvements in prior years, but noted that various challenges, such as case backlogs, could inhibit success in achieving further improvements in length of stay requirements. For example, maintaining the average length of stay for criminal aliens at, or slightly below, 35 days in the long term may require the hiring of additional immigration judges, according to DHS. We currently have no open recommendations or matters for congressional consideration related to this priority goal. We currently have no reports related to this priority goal. Rebecca Gambler, Director, Homeland Security and Justice Issues, [email protected], (202) 512-6912. Strengthen aviation security counterterrorism capabilities by using intelligence driven information and risk-based decisions. By September 30, 2013, the Transportation Security Administration (TSA) will expand the use of risk-based security initiatives to double the number of passengers going through expedited screening at airports, thereby enhancing the passenger experience. TSA’s goal is stated in broad terms; consequently, quantitatively measuring progress toward meeting the goal of strengthening aviation security counterterrorism capabilities will be a challenge. Further, the stated performance measure, with its focus on expedited passenger screening, will not allow TSA to assess its progress in using intelligence driven information and risk-based decisions to meet this goal in other related areas, such as in screening checked baggage or air cargo. 
TSA, as a component of DHS, relies upon multiple layers of security to deter, detect, and disrupt persons posing a potential risk to aviation security. These layers focus on screening millions of passengers and pieces of carry-on and checked baggage, as well as tons of air cargo, on a daily basis. Our past work has analyzed TSA's progress in implementing these security measures and identified challenges it has encountered in implementing cost-effective aviation security programs and measuring performance. To help achieve its priority goal of strengthening aviation security counterterrorism capabilities by using intelligence driven information and risk-based decisions, TSA officials stated that the agency will, among other steps, expand the use of its new "TSA Pre✓™" program to double the number of passengers going through expedited screening at airports by September 30, 2013. TSA introduced TSA Pre✓™ in October 2011 and plans to expand it to 40 airports by March 2013. Based on current participation, frequent flyers of five airlines, as well as individuals enrolled in other departmental trusted traveler programs—where passengers are pre-vetted and deemed trusted travelers—are eligible to be screened on an expedited basis. This program is intended to allow TSA to focus its resources on higher-risk travelers. Agency officials have reported that, with the deployment of this program and other risk-based security initiatives, such as modifying screening procedures for passengers 75 and over and active duty service members, TSA has achieved its stated goal of doubling the number of passengers going through expedited screening. According to TSA, by the end of calendar year 2013, TSA will provide expedited screening to 25 percent of the individuals currently processed through security screening.
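TSA's 25 percent target can be translated into approximate daily passenger counts. This is an illustrative calculation using the report's figure of roughly 1.8 million passengers traveling each day on average, not a TSA projection:

```python
# Illustrative translation of TSA's expedited-screening target into
# approximate daily passenger counts (average daily volume from the report).
avg_daily_passengers = 1_800_000  # passengers per day, on average
expedited_share = 0.25            # TSA's calendar year 2013 target

expedited_per_day = int(avg_daily_passengers * expedited_share)
print(expedited_per_day)  # 450000
```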
Achieving this target will mean that approximately 450,000 of the 1.8 million passengers who travel on average each day from the nation's airports will undergo some form of expedited screening. However, since this goal is focused on passenger screening, it will not allow TSA to assess its progress in using intelligence driven information and risk-based decisions in other areas to achieve the broader outcome of strengthening aviation security counterterrorism capabilities, such as in screening checked baggage or air cargo. We plan to initiate a review of TSA's progress in implementing TSA Pre✓™ in 2013. In our past work, we found that TSA has taken steps to implement aviation security mechanisms that are more intelligence-driven and risk-based. For example, TSA implemented the Secure Flight program to allow it to focus resources on high-risk passengers by vetting passengers' names, dates of birth, and other information against terrorist watch lists. In May 2009, we reported that TSA had made significant progress in developing the Secure Flight program but also noted that it faced challenges in identifying passengers who might use false identifying information. We also assessed TSA's efforts to implement a behavior detection program that seeks to selectively identify potentially high-risk passengers for additional screening. Our May 2010 report found that while TSA has taken actions to validate the science underlying the program and improve performance measurement, among other actions, more work remains to ensure the program's effectiveness, such as developing comprehensive program performance measures. In March 2012, we reported that questions related to the program will remain until TSA demonstrates that using behavior detection techniques can help secure the aviation system against terrorist threats. TSA plans to or is currently implementing a number of other behavior-based programs that we plan to report on in 2013.
In November 2012, we recommended that TSA take steps to improve its oversight of air passenger screening complaint processes by establishing (1) consistent policies for receiving complaints and informing passengers about complaint processes, (2) a process to systematically analyze information on complaints, and (3) a focal point to coordinate these efforts. In its comments on this report, DHS concurred with the recommendations and stated that TSA is taking steps to implement them. In May 2012, we recommended that, to help DHS address challenges in meeting the air cargo screening mandate as it applies to air cargo carried on passenger flights inbound to the United States, mitigate potential air cargo security vulnerabilities, and enhance overall efforts to screen and secure inbound air cargo, the Secretary of Homeland Security should direct the Administrator of TSA to assess the costs and benefits of requiring all-cargo carriers to report data on the amount of inbound air cargo screening being conducted. In comments on the May 2012 report, DHS concurred with the recommendation and stated that TSA was working on developing a system that will provide the capability for all-cargo carriers to report data on screened high-risk inbound air cargo shipments. In April 2013, TSA reported that once this system becomes fully operational, these data will be available for each all-cargo carrier. In July 2011, we recommended that TSA develop a plan to deploy explosives detection systems (EDS) that meet the most recent explosives-detection requirements and ensure that new machines, as well as machines deployed in airports, will be operated at the levels established in those requirements. This plan should include the estimated costs for new machines and upgrading deployed machines, and the time frames for procuring and deploying new machines. In commenting on this report, DHS concurred with the recommendation.
As of March 2013, TSA has a plan in place to evaluate and implement the most recent certified algorithms on the existing fleet of deployed EDSs. However, our recommendation calls for a plan to deploy new EDSs as well as to upgrade existing EDSs in airports to meet the 2010 EDS explosives detection requirements. Our recommendation was intended to ensure that all EDSs operating in airports meet the most recent requirements, which are currently the 2010 requirements. Consequently, we continue to believe that a plan is needed that describes the approach TSA will use to deploy EDSs that meet the most recent explosives detection requirements and to ensure that all deployed machines will be operated at the levels established in the latest requirements. In May 2010, we recommended that TSA perform a cost-benefit analysis of TSA's behavior detection program known as the Screening of Passengers by Observation Techniques (SPOT), including a comparison of the program with other security screening programs, such as random screening, or other already existing security measures. In commenting on this report, DHS concurred with the recommendation, and TSA completed the analysis in December 2012. We are evaluating the cost-benefit analysis as part of ongoing work that we will report on in 2013. In May 2010, we also recommended that TSA take steps to better measure the effectiveness of the SPOT program and evaluate the performance of TSA's behavior detection officers, who implement the program at TSA-regulated airports. We also recommended that TSA establish a plan that includes objectives, milestones, and time frames to develop outcome-oriented performance measures to help refine the current methods used by behavior detection officers for identifying individuals who may pose a risk to the aviation system. In commenting on this report, DHS concurred with the recommendation and completed its plan in November 2012.
We are evaluating the plan as part of ongoing work that we will report on in 2013.

Air Passenger Screening: Transportation Security Administration Could Improve Complaint Processes. GAO-13-43. Washington, D.C.: November 15, 2012.

Aviation Security: Actions Needed to Address Challenges and Potential Vulnerabilities Related to Securing Inbound Air Cargo. GAO-12-632. Washington, D.C.: May 10, 2012.

Aviation Security: TSA Has Enhanced Its Explosives Detection Requirements for Checked Baggage, but Additional Screening Actions Are Needed. GAO-11-740. Washington, D.C.: July 11, 2011.

Aviation Security: Efforts to Validate TSA’s Passenger Screening Behavior Detection Program Underway, but Opportunities Exist to Strengthen Validation and Address Operational Challenges. GAO-10-763. Washington, D.C.: May 20, 2010.

Aviation Security: TSA Has Completed Key Activities Associated with Implementing Secure Flight, but Additional Actions Are Needed to Mitigate Risks. GAO-09-292. Washington, D.C.: May 13, 2009.

Stephen M. Lord, Director, Homeland Security and Justice Issues, [email protected], (202) 512-4379.

Based on our past work, as well as that of the HUD Inspector General (IG), we commented on each of HUD’s six priority goals for 2012 to 2013:

1. Improve program effectiveness by awarding funds fairly and quickly. By September 30, 2013, HUD will improve internal processes to ensure that HUD can obligate 90 percent of Notice of Funding Availability (NOFA) programs within 180 calendar days from budget passage, ensuring that America’s neediest families have the shelter and services they need, when they need them. The timely obligation and subsequent disbursement of funds will positively impact the agency’s ability to achieve all of its priority goals.

2. Increase the energy efficiency and health of the nation’s housing stock.
By September 30, 2013, HUD will enable a total of 159,000 cost-effective energy-efficient or healthy housing units, as part of a joint HUD-Department of Energy (DOE) goal of 520,000 in 2012 to 2013 and a total goal of 1.2 million units from 2010 through 2013.

3. Preserve affordable rental housing. By September 30, 2013, preserve affordable rental housing by continuing to serve 5.4 million families and serve an additional 61,000 families through HUD’s affordable rental housing programs.

4. Prevent foreclosures. By September 30, 2013, assist 700,000 homeowners who are at risk of losing their homes due to foreclosure.

5. Reduce vacancy rates. By September 30, 2013, reduce the average residential vacancy rate in 70 percent of the neighborhoods hardest hit by the foreclosure crisis relative to comparable areas. Hardest-hit neighborhoods are defined as Neighborhood Stabilization Program (NSP) 2 Neighborhood Investment Clusters (NIC).

6. Reducing homelessness. By September 30, 2013, in partnership with the Department of Veterans Affairs (VA), reduce the number of homeless veterans to 35,000 by serving 35,500 additional homeless veterans. HUD is also committed to making progress towards reducing family and chronic homelessness and is working towards milestones to allow for tracking of these populations.

For each goal, we also identify our related past reports and provide an update on the status of any open recommendations and matters for congressional consideration that we previously made related to the goal. We also identify a GAO contact for our work related to each goal.

Improve program effectiveness by awarding funds fairly and quickly. By September 30, 2013, HUD will improve internal processes to ensure that HUD can obligate 90 percent of NOFA programs within 180 calendar days from budget passage, ensuring that America’s neediest families have the shelter and services they need, when they need them.
The timely obligation and subsequent disbursement of funds will positively impact the agency’s ability to achieve all of its priority goals. Although we have not conducted an in-depth analysis of HUD’s NOFA processes, our recent work and a recent bid protest decision highlight some of the challenges HUD has faced when trying to award funds quickly and the importance of using appropriate processes to award funds. In our bimonthly reviews of selected states’ and localities’ use of funds made available under the American Recovery and Reinvestment Act of 2009 (Recovery Act), we commented on the NOFA process HUD used to award nearly $1 billion in public housing capital funds to public housing agencies based on competition for priority investments, including investments that leveraged private sector funding or financing for renovations and energy conservation retrofit investments. In September 2009, we reported that HUD had received almost 1,800 applications for the funds and that its review process had been slower than expected. According to HUD officials, this was due to the number of applications with lengthy narratives needing review. Further, HUD officials stated that their staff were reviewing these applications while carrying out their ongoing responsibilities related to managing the public housing capital fund program. Despite these challenges, we reported in December 2009 that HUD had met the Recovery Act requirement to obligate all of the funds to public housing agencies by September 30, 2009. Specifically, HUD accepted applications from June 22 to August 18, 2009, and according to a HUD official, 746 housing agencies submitted 1,817 applications for these competitive grants. In September 2009, HUD awarded 396 competitive grants to housing agencies that successfully addressed the NOFA requirements. In addition, a recent bid protest decision highlights the importance of using appropriate processes to award funds regardless of the time involved. 
On August 15, 2012, we concluded that HUD’s use of a NOFA that resulted in the issuance of a cooperative agreement to obtain services for the administration of Project-Based Section 8 Housing Assistance Payment contracts was improper because the “principal purpose” of the NOFA was to obtain contract administration services for HUD’s direct benefit and use, which should be acquired under a procurement instrument that results in the award of a contract. In our August 15, 2012, bid protest decision, we recommended that HUD cancel the NOFA and solicit the contract administration services for the Project-Based Section 8 rental assistance program through a procurement instrument that would result in the award of contracts. In its response, HUD informed us of its intention to proceed with the NOFA and of its plan to make awards. However, as a result of litigation filed in the Court of Federal Claims that sought to enjoin it from proceeding with the NOFA, HUD announced its agreement not to make the awards until the court rules on the matter. As of March 2013, the court had not yet issued its decision.

Assisted Housing Services Corporation; North Tampa Housing Development Corporation; The Jefferson County Assisted Housing Corporation; National Housing Compliance; Southwest Housing Compliance Corporation; CMS Contract Management Services and the Housing Authority of the City of Bremerton; Massachusetts Housing Finance Agency. B-406738 et al. August 15, 2012.

Recovery Act: Status of States’ and Localities’ Use of Funds and Efforts to Ensure Accountability. GAO-10-231. Washington, D.C.: December 10, 2009.

Recovery Act: Funds Continue to Provide Fiscal Relief to States and Localities, While Accountability and Reporting Challenges Need to Be Fully Addressed. GAO-09-1016. Washington, D.C.: September 23, 2009.

Mathew J. Scirè, Director, Financial Markets and Community Investment, [email protected], (202) 512-8678.
Increase the energy efficiency and health of the nation’s housing stock. By September 30, 2013, HUD will enable a total of 159,000 cost-effective energy-efficient or healthy housing units, as part of a joint HUD-DOE goal of 520,000 in 2012 to 2013 and a total goal of 1.2 million units from 2010 through 2013.

Energy-efficient green building practices can increase up-front costs but also may provide long-term financial, environmental, and health benefits. In prior work, we credited HUD for using accepted energy-efficient green building standards developed by others, such as the Environmental Protection Agency’s (EPA) Energy Star program and the Enterprise Green Communities, as criteria for measuring progress toward its goal. These standards are generally recognized as effective measures of increased energy efficiency. However, our prior work also found that HUD could do more to promote energy efficiency. For example, in October 2008, we found that while HUD’s public housing office had shown leadership and initiative in partnering to develop a benchmarking tool that could be used to identify properties with high levels of utility consumption, HUD’s multifamily assisted housing had no such tool. In the absence of such a tool, HUD cannot target certain multifamily properties for green building improvements, which could result in benefits that include reduced resource consumption. In April 2013, HUD officials told us that they were collaborating with other federal agencies and industry partners to develop such a tool for its multifamily portfolio. Our October 2008 report also identifies ways that HUD could better meet its priority goal for cost-effective energy-efficient measures, particularly for water conservation.
HUD officials we interviewed identified water conservation savings as significant and among the biggest potential opportunities for financial savings, but HUD had provided few incentive points for water conservation or indoor air quality measures in its competitive grant programs. Since our report, a number of HUD programs have added water-saving devices to requirements for new construction and rehabilitation projects. In addition, the Interagency Rental Policy Working Group, which includes HUD, the U.S. Department of Agriculture (USDA), and the Department of the Treasury, has adopted requirements for water-saving products and Energy Star appliances for rehabilitation projects.

As stated above, HUD’s priority goal is a portion of a larger HUD-DOE joint goal. DOE’s weatherization assistance program is one of the largest residential energy-efficiency programs in the nation, and some DOE weatherization grantees also received HUD assistance. HUD officials told us that DOE grantees do not report which weatherization recipients received HUD assistance and HUD grantees are not required to report to HUD whether they received weatherization assistance. Consequently, double counting could occur, although HUD indicated that the likelihood of such double counting is small. In October 2012, HUD officials told us that they were planning to use the results of a DOE evaluation of its weatherization program to identify any double counting and, if necessary, revise the overall HUD-DOE totals reported previously for 2009 and 2010. In April 2013, HUD officials told us that the data collection portion of DOE’s evaluation was complete and they were awaiting the results from DOE.

We recommended in October 2008 that HUD ensure the completion of the regulation that would require the use of energy-efficient products and appliances for public housing as directed by the Energy Policy Act of 2005. HUD included the statutory requirement in a proposed rule published in February 2011, but as of March 2013, HUD had not published the final rule.
We also recommended in October 2008 that HUD work with DOE to expeditiously implement energy-efficiency updates to the HUD manufactured housing code. Although manufactured housing is not part of HUD’s agency priority goal, we believe that energy-efficiency efforts in this area are directly related to the goal. Manufactured housing is an area in which HUD has significant influence because it has been responsible for establishing manufactured building code requirements since 1974. We found that HUD had not made significant energy-efficiency updates to the code for this program since 1994. HUD officials told us that pursuant to the requirements of the Energy Independence and Security Act of 2007, which moved responsibility for promulgating manufactured housing energy-efficiency standards to DOE, they intended to wait to make energy-efficiency updates to the code because they were concerned about overlapping agency responsibilities between DOE and HUD. We concluded in October 2008 that waiting to take action could result in years more of some manufactured homes being built without improved energy standards. HUD has worked with DOE in developing more stringent energy standards for manufactured homes. For example, in February 2010, DOE published an advance notice of proposed rulemaking on energy-efficiency standards for manufactured homes pursuant to the 2007 Act, and HUD officials told us that they met with DOE on the proposal. Until the rule is finalized, HUD and DOE will continue to miss an opportunity to improve the energy efficiency of manufactured housing units. Additionally, in October 2008 we recommended that HUD work with DOE’s Oak Ridge National Laboratory and EPA to develop a utility benchmarking tool for multifamily properties.
We pointed out that HUD’s public housing office had shown leadership and initiative in partnering to develop a utility benchmarking tool that could be used to identify multifamily properties with high levels of utility consumption, and that HUD’s multifamily assisted housing could benefit from a similar tool that would allow properties to be targeted for green building improvements. In October 2012, HUD officials told us that Oak Ridge’s data tool is now out of date. They added that HUD was actively working with DOE, EPA, Fannie Mae, and industry representatives on a strategy to develop common data inputs and reporting standards for multifamily properties that could lead to a multifamily benchmarking tool. In April 2013, HUD officials told us that they are working to develop a multifamily Energy Star benchmark that will provide information on building performance on a portfolio basis. However, it is not clear when HUD intends to complete its Energy Star benchmark. Until such a tool is in place and HUD is able to benchmark utility costs in its multifamily portfolio, HUD will continue to miss opportunities to target less-efficient multifamily properties for green building improvements, and reduce resource consumption and utility expenses for itself and its funding recipients.

In November 2011, we recommended that DOE, HUD, and EPA lead an effort to collaborate with other agencies to identify performance information, such as shared goals and common performance measures, for green building initiatives for the nonfederal sector. About one-third of the 94 initiatives we identified have goals and performance measures specific to green building and about two-thirds do not; therefore, the results of most initiatives and their related investments in green building are unknown. DOE, HUD, and EPA generally agreed with the recommendation. In November 2012, HUD officials stated that they had met with EPA and DOE representatives to review our recommendation.
The agencies generally agreed that initiatives that show potential for collaboration would be best served through existing interagency partnerships. HUD stated that the agencies might explore a higher level of centralized collaboration for the long term, but such efforts would require additional legislative or executive authority to implement.

In October 2012, we found that key standards for manufactured homes provide a lower margin of safety against a carbon monoxide exposure incident than those for site-built homes. We found that HUD’s ventilation standards establish standards for airflow, not air quality, and recommended that HUD test the performance of its installed ventilation systems and reassess its ventilation standards. Measuring the actual airflow achieved by installed ventilation systems would not only permit HUD to know whether its standards are being met, but also permit HUD to better understand the potential impact ventilation systems may have on indoor air quality and the overall health of the homes. HUD generally agreed with both recommendations and stated that it would bring them before the Manufactured Housing Consensus Committee, which is responsible for recommending proposed rules to HUD, for consideration.

Manufactured Housing Standards: Testing and Performance Evaluation Could Better Ensure Safe Indoor Air Quality. GAO-13-52. Washington, D.C.: October 24, 2012.

2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. Washington, D.C.: February 28, 2012.

Green Building: Federal Initiatives for the Nonfederal Sector Could Benefit from More Interagency Collaboration. GAO-12-79. Washington, D.C.: November 2, 2011.

Green Affordable Housing: HUD Has Made Progress in Promoting Green Building, but Expanding Efforts Could Help Reduce Energy Costs and Benefit Tenants. GAO-09-46. Washington, D.C.: October 7, 2008.

William B. Shear, Director, Financial Markets and Community Investment, [email protected], (202) 512-8678.

Frank Rusco, Director, Natural Resources and Environment, [email protected], (202) 512-3841.

Preserve affordable rental housing. By September 30, 2013, preserve affordable rental housing by continuing to serve 5.4 million families and serve an additional 61,000 families through HUD’s affordable rental housing programs.

HUD’s efforts to achieve this goal involve several HUD programs, including the housing choice voucher, public housing, and project-based programs, which are HUD’s key programs for delivering rental assistance. Our past work on these programs has identified a number of factors that could impact HUD’s ability to meet this goal, including the department’s ability to keep property owners in the project-based program and serve additional households.

HUD has implemented recommendations we made in April 2007 to help enhance HUD’s ability to keep project-based property owners in the program. Specifically, we found that although HUD offered several incentives to keep property owners in the program, some property owners, managers, and industry representatives cited concerns with certain HUD policies and practices, especially the one-to-one replacement policy (which prohibited reductions in the total number of program units in a property when a contract was renewed), and the reimbursement process for operating costs in high-cost areas. In 2011, HUD modified this policy and revised the way it calculates reimbursement for operating costs. We also found that between 2001 and 2005 owners renewed 92 percent of contracts and 95 percent of units covered by these contracts. Property owners, managers, and industry representatives with whom we spoke as part of our April 2007 report indicated that market conditions were the primary factors in owners’ decisions to leave or remain in the program.
Similarly, HUD has implemented a recommendation we made in November 2005 to address late housing assistance payments to landlords, which may encourage owners to participate in HUD’s project-based program. Specifically, in 2007 HUD made improvements to its data, including verifying information on contract renewal dates and project costs, which should help the department more reliably determine the timing and amount of funding needed by the landlords, thereby improving the timeliness of its payments.

Our work has also identified certain factors that could enhance HUD’s ability to meet its goal to preserve affordable rental housing. For instance, our March 2012 report on the housing choice voucher program identified potential changes that, if implemented, could help HUD reach more renter households. Specifically, excess reserves in the voucher program could be used to serve additional families, if authorized by Congress. In addition, certain rent reform options (that is, changes to the calculation of households’ payment toward rent) may allow HUD to serve more people. For example, if implemented, rent reform could reduce the federal cost burden—in some cases, quite considerably—or, if Congress chose to reinvest cost savings in the program, allow the program to serve between 1,400 and 287,000 additional households, depending on which rent reform option was implemented.

We also noted that HUD could do more to ensure that certain housing agencies continue to serve households. Specifically, our April 2012 report found that HUD may not be able to systematically ensure that agencies participating in the Moving to Work (MTW) demonstration program are meeting the requirement to serve substantially the same number of households through their rental assistance programs that they would have been able to serve prior to participation. As a result, the program may affect HUD’s ability to meet its goal of preserving affordable rental housing.
We found that, contrary to internal control guidance, HUD did not have a process in place to systematically review compliance with several program requirements, including the requirement to serve substantially the same number of households. We concluded that because Congress is considering expanding the program to many more housing agencies, the absence of information needed to conduct compliance reviews is significant. We further stated that without more complete knowledge of the extent to which agencies are adhering to program requirements, it is difficult for Congress to know whether an expanded MTW will benefit additional agencies and the residents they serve.

More broadly, as part of our work on overlap, fragmentation, and duplication in federal programs, we reported in August 2012 that although selected HUD, USDA, and Department of the Treasury (Treasury) rental housing programs had overlapping purposes, the agencies’ products, areas served, and delivery methods differed. Specifically, we found that although HUD is the only agency that has a specific priority goal to preserve affordable rental housing, seven of the nine selected HUD, USDA, and Treasury programs we reviewed have the shared purpose of financing the development of new rental units or preserving existing units through refinancing or rehabilitation. However, we found that five of these programs differ in terms of tenant and geographic eligibility. Additionally, we found that HUD and USDA administer project-based rental assistance programs, which provide rental subsidies to property owners that provide housing to low-income households. We reported that although HUD serves more households in rural areas, a large share of units with USDA’s rental assistance were in rural ZIP codes, while a smaller share of units with HUD rental assistance were in these areas.
In addition, we also found that all three agencies have been working to consolidate and align requirements in rental housing programs through the Rental Policy Working Group. We concluded that although its efforts have been consistent with many key collaborative practices, the group has not taken full advantage of opportunities to reinforce agency accountability for collaborative efforts through the agencies’ annual and strategic plans, or expanded its guiding principles to evaluate areas requiring statutory action to generate savings and efficiencies.

In August 2012, we recommended that, to further improve HUD, USDA, and Treasury’s efforts through the Rental Policy Working Group to consolidate and align certain requirements in multifamily housing programs, the Rental Policy Working Group should take steps to document collaborative efforts in strategic and annual plans to help reinforce agency accountability for these efforts. HUD and USDA agreed with the recommendation.

In April 2012, we recommended that the Secretary of Housing and Urban Development develop and implement a systematic process for assessing compliance with statutory requirements. In response to this recommendation, HUD stated that the agency had conducted an extensive effort that allowed it to monitor compliance with the requirement for agencies to continue assisting substantially the same total number of households that they would have been able to serve prior to participating in the MTW program. HUD further stated that it was testing implementation of the process and planned to formalize the process through the publication of a notice. On January 10, 2013, HUD issued a notice that describes a compliance effort that, according to HUD, will ensure that MTW agencies continue to meet the statutory obligation to serve substantially the same number of families as if they had not participated in the MTW demonstration. According to the notice, HUD will use a numerical indicator to make annual determinations of compliance.

In March 2012, we recommended that the HUD Secretary provide information to Congress on (1) housing agencies’ estimated amount of excess subsidy reserves and (2) HUD’s criteria for how it will redistribute excess reserves among housing agencies so that they can serve more households. In taking these steps, the Secretary should determine a level of subsidy reserves housing agencies should retain on an ongoing basis to effectively manage their voucher programs. HUD neither agreed nor disagreed with our recommendation. HUD noted that it currently provides quarterly reports to the Congressional Budget Office on subsidy reserve levels. However, these quarterly reports do not include information on the estimated amount of agencies’ subsidy reserves that exceed prudent levels. HUD did not comment on its efforts to provide information to Congress on the criteria for how it will redistribute excess reserves among agencies so it can serve more households. In March 2013, HUD officials told us that, upon request, they provide information to HUD’s Appropriations Committee on subsidy reserve levels, including those balances above certain minimum reserve levels. We will continue monitoring the agency’s progress in implementing our recommendations.

Housing Assistance: Opportunities Exist to Increase Collaboration and Consider Consolidation. GAO-12-554. Washington, D.C.: August 16, 2012.

Moving to Work Demonstration: Opportunities Exist to Improve Information and Monitoring. GAO-12-490. Washington, D.C.: April 19, 2012.

Housing Choice Vouchers: Options Exist to Increase Program Efficiencies. GAO-12-300. Washington, D.C.: March 19, 2012.

Project-Based Rental Assistance: HUD Should Update Its Policies and Procedures to Keep Pace with the Changing Housing Market. GAO-07-290. Washington, D.C.: April 11, 2007.

Project-Based Rental Assistance: HUD Should Streamline Its Processes to Ensure Timely Housing Assistance Payments. GAO-06-57.
Washington, D.C.: November 15, 2005.

Daniel Garcia-Diaz, Director, Financial Markets and Community Investment, [email protected], (202) 512-8678.

Prevent foreclosures. By September 30, 2013, assist 700,000 homeowners who are at risk of losing their homes due to foreclosure.

HUD’s efforts to achieve this goal involve the Federal Housing Administration’s (FHA) early delinquency interventions and loss mitigation programs. However, our past work raised questions about whether FHA has collected and analyzed data to assess the effectiveness of these efforts in preventing redefaults. Further, the HUD IG raised questions about the extent to which certain efforts were conducted in accordance with program requirements.

In June 2012, we reported that millions of borrowers faced an elevated risk of foreclosure and that various indicators showed that the housing market remained weak. In particular, we noted that the serious delinquency rate for FHA loans increased in the second half of 2011, counter to trends in the broader market. We reported that FHA had been working with loan servicers to identify best practices for reaching borrowers and had reporting requirements for servicers throughout the delinquency process. However, we found that although FHA had begun to calculate redefault rates for specific home retention actions, it had not used this information to assess the effectiveness of its foreclosure mitigation efforts. Doing so is particularly important because FHA loan modifications typically do not reduce borrowers’ monthly payments to the levels that our analysis indicated result in more sustainable modifications. We also found that FHA had not assessed the impact of loan and borrower characteristics on the performance of its foreclosure mitigation efforts. In some cases, FHA did not have the data needed to conduct these analyses.
In a September 2012 report, the HUD IG estimated that 11,693 preforeclosure sales completed during the 12-month period it reviewed did not meet HUD’s requirements for participation and recommended that HUD strengthen controls over the preforeclosure sale program. Preforeclosure sales are one type of FHA loss mitigation action included in HUD’s calculation of borrowers assisted. Including ineligible preforeclosure sales in the calculation of borrowers assisted could overstate foreclosure prevention efforts.

HUD has previously reported performance exceeding the level of its foreclosure prevention target for fiscal years 2012 and 2013. During the period covering fiscal years 2010 and 2011, HUD reported assisting 902,431 homeowners who were in danger of losing their homes to foreclosure—496,197 through FHA early delinquency interventions and 406,234 through FHA loss mitigation programs. For fiscal years 2012 and 2013, HUD anticipates meeting its foreclosure prevention goal by reaching 500,000 homeowners with early delinquency interventions and an additional 200,000 through loss mitigation programs. Through the end of fiscal year 2012, HUD reported that it was more than halfway to meeting its goal, having reached 290,216 homeowners with early delinquency interventions and 154,933 homeowners through loss mitigation programs.

In June 2012, we recommended that FHA conduct periodic analyses of the effectiveness and the long-term costs and benefits of its loss mitigation strategies and actions. These analyses should consider (1) the redefault rates associated with each type of home retention action and (2) the impact that loan and borrower characteristics have on the performance of different home retention actions. FHA should use the results from these analyses to reevaluate its loss mitigation approach and provide additional guidance to servicers to effectively target foreclosure mitigation actions.
If FHA does not maintain data needed to consider this information, it should require servicers to provide the data. In an August 2012 response to our recommendations, HUD noted that it was performing a complete review of the structure of its home-retention assistance. HUD is also undertaking an analysis of borrower and loan data with the goal of proactively directing servicers as to which assistance actions should be targeted to particular borrowers. In November 2012, FHA issued Mortgagee Letter 2012-22, which contained changes to the requirements for servicers to follow when assessing borrowers for FHA loss mitigation home-retention options. We requested and plan to assess the analysis HUD completed as the basis for this change in FHA’s loss mitigation strategies to determine whether it fully responds to our recommendation.

Foreclosure Mitigation: Agencies Could Improve Effectiveness of Federal Efforts with Additional Data Collection and Analysis. GAO-12-296. Washington, D.C.: June 28, 2012.

Mathew J. Scirè, Director, Financial Markets and Community Investment, [email protected], (202) 512-8678.

Reduce vacancy rates. By September 30, 2013, reduce the average residential vacancy rate in 70 percent of the neighborhoods hardest hit by the foreclosure crisis relative to comparable areas. Hardest-hit neighborhoods are defined as Neighborhood Stabilization Program (NSP) 2 Neighborhood Investment Clusters (NIC).

HUD will apply the results of the second phase of its Neighborhood Stabilization Program (NSP 2), funded under the American Recovery and Reinvestment Act, towards achieving this agency priority goal.
The agency considers NSP, which provides grants to government and other entities to try to reduce the number of foreclosed and abandoned properties, its primary tool for mitigating the effects of foreclosures on neighborhoods. As we reported in November 2011, high foreclosure rates have contributed to increased vacancies, which can impose additional costs and challenges on communities, including increased public safety costs and lower tax revenues. While we have not directly assessed HUD's capacity to achieve this agency priority goal, our prior work evaluating NSP indicates that through this program, HUD has the potential to reduce vacancies in areas receiving NSP funding. In December 2010, we examined HUD's implementation of the first phase of NSP and grantees' compliance with program requirements in using their funds to mitigate the impacts of foreclosures, which can include increased vacancies. We found that HUD and grantees had taken actions to try to ensure program compliance. While three phases of NSP were authorized by different pieces of legislation, all three rounds of NSP generally follow the same requirements. Therefore, our previous findings are applicable to NSP 2, the contributing program to this agency priority goal. Through its NSP technical assistance program, HUD has hired The Reinvestment Fund (TRF) to conduct analyses of NSP investments across the United States. HUD is using aspects of this analysis to measure progress toward meeting this agency priority goal. As part of its quarterly studies, TRF is conducting analysis of trends in vacancy rates within NICs versus comparable areas (or neighborhoods). NICs are geographic areas with a concentration of properties to which NSP funds have been applied. On the Performance.gov website, HUD uses "NIC" to refer to those areas to which NSP 2 funds have been applied. 
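The share-of-NICs measure HUD reports against the 70 percent target can be sketched as a simple calculation. The function and data values below are invented for illustration and do not reflect TRF's actual methodology or data.

```python
# Hypothetical sketch of HUD's NIC vacancy-rate metric: the share of
# Neighborhood Investment Clusters (NICs) whose residential vacancy rate
# is lower than at least one comparable area. All values are invented.

def share_outperforming(nics):
    """nics: list of (nic_vacancy_rate, [comparable_area_rates]) tuples."""
    outperforming = sum(
        1 for nic_rate, comparables in nics
        if any(nic_rate < c for c in comparables)
    )
    return outperforming / len(nics)

# Invented example: 4 of 5 NICs beat at least one comparable area (80%).
sample = [
    (0.12, [0.15, 0.11]),  # beats the 0.15 comparable
    (0.10, [0.09, 0.14]),  # beats the 0.14 comparable
    (0.20, [0.18, 0.19]),  # beats neither comparable
    (0.08, [0.10]),        # beats its comparable
    (0.11, [0.13, 0.12]),  # beats both comparables
]
TARGET = 0.70  # the agency priority goal's 70 percent threshold
met_goal = share_outperforming(sample) >= TARGET
```

Under this reading of the measure, HUD's reported 78 percent of NICs with lower vacancy rates than at least one comparable area would exceed the 70 percent target.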
For both the first and second quarter of 2012, HUD reported that about 78 percent of NICs had lower vacancy rates than at least one comparable area—outperforming the agency priority goal's target of 70 percent. We have not assessed the reliability of TRF's studies. However, in its analysis, TRF uses data from HUD's Disaster Recovery Grant Reporting (DRGR) system, the information system used by all NSP grantees to report on their activities and results. In December 2010, we reported that inconsistencies in the manner in which grantees entered data into DRGR could complicate the analysis of program outputs and result in overcounting and undercounting of program outputs. We recommended that HUD take several actions to improve the consistency of the data collected from NSP grantees. In October 2012, HUD addressed these recommendations by issuing detailed guidance. For the purposes of measuring progress towards this goal, TRF uses property address information from DRGR for units that received NSP 2 funding. Quarterly, HUD standardizes and removes duplicate address information from DRGR to try to prevent any double counting of properties in TRF's analyses. We currently have no open recommendations or matters for congressional consideration related to this priority goal. Vacant Properties: Growing Number Increases Communities' Costs and Challenges. GAO-12-34. Washington, D.C.: November 4, 2011. Neighborhood Stabilization Program: HUD and Grantees Are Taking Actions to Ensure Program Compliance but Data on Program Outputs Could be Improved. GAO-11-48. Washington, D.C.: December 17, 2010. Mathew J. Scirè, Director, Financial Markets and Community Investment, [email protected], (202) 512-8678. Reducing homelessness. By September 30, 2013, in partnership with the VA, reduce the number of homeless veterans to 35,000 by serving 35,500 additional homeless veterans. 
HUD is also committed to making progress towards reducing family and chronic homelessness and is working towards milestones to allow for tracking of these populations. HUD notes that several programs are expected to contribute to the achievement of its priority goal to reduce homelessness, including the HUD Veterans Affairs Supportive Housing (HUD-VASH) program, HUD homeless assistance programs, and the Homelessness Prevention and Rapid Re-Housing Program. Our past work has identified a number of issues related to these programs. For example, our June 2012 report on the HUD-VASH program—a collaborative initiative between HUD and VA that targets the most vulnerable, most needy, and chronically homeless veterans—states that the program has moved veterans out of homelessness. Specifically, according to VA, as of March 2012, nearly 31,200 veterans lived in HUD-VASH supported housing, which represents about 83 percent of the vouchers authorized under the program. In addition, our December 2011 report on homeless women veterans notes that although HUD collects data on homeless women and on homeless veterans, the department does not collect detailed information on homeless women veterans and neither HUD nor VA captures data on the overall population of homeless women veterans. Further, our report states that HUD and VA lack data on the characteristics and needs of these women on a national, state, and local level. Finally, our report notes that absent more complete data on homeless women veterans, VA does not have the information needed to plan services effectively, allocate grants to providers, and track progress toward its overall goal of ending veteran homelessness. Our May 2012 report on the fragmentation, overlap, and duplication among federal homelessness programs also identified issues related to the programs that are expected to contribute to the achievement of this priority goal. 
For example, our report noted HUD was one of eight federal agencies that administered 26 targeted homelessness programs in fiscal year 2011, suggesting fragmentation and some overlap among these programs. More specifically, HUD not only administers programs that fund housing assistance, but also provides funding for mental health care, substance abuse treatment, and employment services. Similarly, HHS and VA administer programs that provide housing and employment assistance. Fragmentation and overlap can lead to inefficient use of resources. Some local service providers told us that managing multiple applications and reporting requirements was burdensome, difficult, and costly. Moreover, according to providers, persons experiencing homelessness have difficulties navigating services that are fragmented across agencies. Further, our report states that limited information exists about the efficiency or effectiveness of targeted homelessness programs because evaluations have not been conducted recently—including for the six programs HUD administers. Finally, our report states that the U.S. Interagency Council on Homelessness (Interagency Council) strategic plan to prevent and end homelessness has served as a useful and necessary first step in increasing agency coordination and focusing attention on ending homelessness; however, the plan lacks key characteristics desirable in a national strategy. For example, the plan does not list priorities or milestones and does not discuss resource needs or assign clear roles and responsibilities to federal partners. In May 2012, we recommended that the Interagency Council and the Office of Management and Budget, in conjunction with the Secretaries of HHS, HUD, Labor, and VA, should consider examining inefficiencies that may result from overlap and fragmentation in their programs for persons experiencing homelessness. VA agreed with this recommendation. 
HHS, HUD, Labor, and the Interagency Council did not explicitly agree or disagree. We also recommended that to help prioritize, clarify, and refine efforts to improve coordination across agencies, and improve the efficiency and effectiveness of federal homelessness programs, the Interagency Council, in consultation with its member agencies, should incorporate additional elements into updates to the national strategic plan or other planning and implementation documents to help set priorities, measure results, and ensure accountability. According to the Interagency Council, its fiscal year 2013 report will focus on updates and progress made on the national strategic plan’s objectives. The Interagency Council’s national strategic plan broadly describes the federal approach to preventing and ending homelessness; however, until the key member agencies fully implement their plans, including setting priorities, measuring progress and results, and holding federal and nonfederal partners accountable, they are at risk of not reaching their goal of ending veteran and chronic homelessness by 2015, and ending homelessness among children, youth, and families by 2020. In December 2011, we recommended that in order to help achieve the goal of ending homelessness among veterans, the Secretaries of HUD and VA should collaborate to ensure appropriate data are collected on homeless women veterans, including those with children and those with disabilities, and use these data to strategically plan for services. In concurring with this recommendation, VA stated it had several initiatives already planned or under way to gather information on those homeless women veterans who are in contact with VA, including the development of a more streamlined and comprehensive data collection system. In April 2013, VA stated that it had taken additional actions to inform policy and operational decisions about homeless and at-risk women veterans. 
For example, VA stated that in 2013 it worked with HUD to ensure that gender-specific data were collected during the 2013 Point in Time count of homeless persons. VA added that the results of the 2013 Point in Time count will be included in the Annual Homeless Assessment Report to Congress, which will be published later in 2013 and will be used by the department to strategically plan and implement services for all homeless and at-risk veterans, including women veterans. In addition, VA stated that in 2012 it revised the Community Homelessness Assessment, Local Education and Networking Groups survey to capture gender-specific data for homeless veterans to better identify the needs of women veterans and influence service provision. Veteran Homelessness: VA and HUD Are Working to Improve Data on Supportive Housing Program. GAO-12-726. Washington, D.C.: June 26, 2012. Homelessness: Fragmentation and Overlap in Programs Highlight the Need to Identify, Assess, and Reduce Inefficiencies. GAO-12-491. Washington, D.C.: May 10, 2012. Homeless Women Veterans: Actions Needed to Ensure Safe and Appropriate Housing. GAO-12-182. Washington, D.C.: December 23, 2011. Homelessness: A Common Vocabulary Could Help Agencies Collaborate and Collect More Consistent Data. GAO-10-702. Washington, D.C.: June 30, 2010. Alicia Puente Cackley, Director, Financial Markets and Community Investment, [email protected], (202) 512-8678. Based on our past work, as well as that of the DOT IG, we commented on each of DOT's four priority goals for 2012 to 2013: 1. Air traffic control systems can improve the efficiency of airspace. By September 30, 2013, replace a 40-year-old computer system serving 20 air traffic control centers with a modern, automated system that tracks and displays information on high altitude planes. 2. Advance the development of passenger rail in the United States. 
By September 30, 2013, initiate construction on all 7 high speed rail corridors and 36 individual high speed rail projects. 3. Reduce risk of aviation accidents. By September 30, 2013, reduce aviation fatalities by addressing risk factors both on the ground and in the air. Commercial aviation (i.e. airlines): Reduce fatalities to no more than 7.4 per 100 million people on board. General aviation (i.e. private planes): Reduce fatal accident rate per 100,000 flight hours to no more than 1.06. 4. Reduce the rate of roadway fatalities. Reduce the rate of roadway fatalities from 1.26 in 2008 to 1.03 per 100 million vehicle miles traveled by December 31, 2013. For each goal, we also identify our related past reports and provide an update on the status of any open recommendations and matters for congressional consideration that we previously made related to the goal. We also identify a GAO contact for our work related to each goal. Air traffic control systems can improve the efficiency of airspace. By September 30, 2013, replace a 40-year-old computer system serving 20 air traffic control centers with a modern, automated system that tracks and displays information on high altitude planes. The priority goal refers to the Federal Aviation Administration’s (FAA) replacement of the existing en route air traffic control automation system used in its en route air traffic control centers (centers) with a new system architecture, the En Route Automation Modernization system (ERAM). While our previous work has shown that FAA has experienced delays in deploying ERAM, FAA has since made progress toward achieving this goal. As we reported in September 2012, FAA has experienced delays in deploying ERAM, which affected overall acquisition and maintenance costs as well as time frames for other programs. 
Specifically, the ERAM program is almost 4 years behind its original schedule and about $330 million, or about 15 percent, over its original budget because of the following factors: unanticipated risks associated with operational complexities at the field sites, insufficient testing to identify software issues before deployment at the field sites, insufficient communication between the program office and field sites, and insufficient stakeholder (e.g., air traffic controller) involvement during system development and deployment. The delays added an estimated $18 million per year to the costs of maintaining the system that ERAM was meant to replace. Since new budget and schedule baselines for the ERAM program were established in June 2011, according to FAA reports, the program has made progress toward its goal of initial operating capability of ERAM by September 30, 2013. As of March 2013, FAA had achieved initial operating capability at 16 out of 20 centers and expects to achieve this goal. In September 2012, the DOT IG reported that FAA’s use of initial operating capability for tracking progress with ERAM gave FAA decision makers a false sense of confidence in the maturity of the system when in reality, much work and time still remained in implementing the system. For example, after FAA declared its first two sites as achieving initial operating capability, these sites experienced multiple failures after the milestone was achieved and went through a measured transition from limited operations to eventual continuous operations. In response to the DOT IG’s recommendation to better define key milestones to reflect progress, FAA is planning to establish criteria for entrance and exit at the various key milestones, including initial operating capability. FAA also plans to have all 20 centers operationally ready by August 2014. 
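Taking the reported figures at face value, an overrun of about $330 million that represents about 15 percent of the original budget implies an original ERAM budget near $2.2 billion, and an almost 4-year delay at $18 million per year implies roughly $72 million in added legacy-system maintenance. A back-of-the-envelope check, treating the rounded report figures as exact:

```python
# Back-of-the-envelope check of the ERAM overrun figures reported above.
# Treats "about $330 million" and "about 15 percent" as exact values,
# so the implied amounts are only rough.

overrun_dollars = 330e6       # ~$330 million over original budget
overrun_fraction = 0.15       # ~15 percent over original budget
implied_original_budget = overrun_dollars / overrun_fraction  # ~$2.2 billion

# Delay-related cost of maintaining the legacy system ERAM replaces:
extra_maintenance_per_year = 18e6  # ~$18 million per year
delay_years = 4                    # ERAM is almost 4 years behind schedule
extra_maintenance_total = extra_maintenance_per_year * delay_years  # ~$72 million
```

These implied totals are illustrative only; the report does not state the original ERAM budget directly.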
As we reported in September 2012, looking more broadly, ERAM is considered to be the backbone that will support the Next Generation Air Transportation System (NextGen)—a new air traffic management system that will replace the current radar-based system and is expected to enhance the safety and capacity of the air transport system—and delays with ERAM's deployment illustrate challenges FAA faces in implementing NextGen. For example, delays in ERAM affected the implementation of two other key NextGen acquisitions—Data Communications and System Wide Information Management. In part because of ERAM's delay, FAA pushed the Data Communications program's start date from September 2011 to May 2012, revised the original plan for the first System Wide Information Management segment, and delayed the start date for another segment from 2010 to July 2012. The implementation of NextGen—both in the midterm (through 2020) and in the long term (beyond 2020)—will be affected by how well FAA manages these and other program interdependencies. To address past issues with cost estimate and schedule accuracy, such as those with ERAM, in February 2012, we recommended that when appropriate for major acquisition programs based on a program's cost, schedule, complexity, and risk, FAA conduct an assessment of major acquisition programs to ensure they meet all of the established best practices for cost estimates and schedules contained in our guidance; require a fully integrated master schedule for each major acquisition program, including those that are NextGen components; and conduct independent cost estimates and schedule risk analysis. As of March 2013, FAA has taken steps to implement these recommendations. For example, according to FAA officials, FAA's Acquisition Executive Board now considers whether an independent cost estimate or schedule risk analysis is advisable as part of its program review. 
FAA is also developing an Integrated Master Schedule for the entire NextGen initiative that is, in part, intended to show how changes in program schedules affect other programs and the timelines for the NextGen initiative as a whole. To further strengthen schedule integration, FAA plans to continue populating the integrated master schedule and then begin integrating this tool with other FAA planning tools, including the National Airspace System Enterprise Architecture and NextGen Implementation Plan, in December 2013. Next Generation Air Transportation System: FAA Faces Implementation Challenges. GAO-12-1011T. September 12, 2012. Air Traffic Control Modernization: Management Challenges Associated with Program Costs and Schedules Could Hinder NextGen Implementation. GAO-12-223. February 16, 2012. NextGen Air Transportation System: FAA’s Metrics Can Be Used to Report on Status of Individual Programs, but Not of Overall NextGen Implementation or Outcomes. GAO-10-629. July 27, 2010. Gerald L. Dillingham, Ph.D., Director, Physical Infrastructure Issues, [email protected], (202) 512-2834. Advance the development of passenger rail in the United States. By September 30, 2013, initiate construction on 7 high speed rail corridors and 36 individual high speed rail projects. While our past work does not cover activities related to construction of high speed rail projects, we and the DOT IG have reported on planning and other weaknesses with the Federal Railroad Administration’s (FRA) High Speed Intercity Passenger Rail (HSIPR) program related to FRA’s capacity to achieve this APG. The federal government had not historically had a strong leadership role in intercity passenger rail but this changed in 2009 when the HSIPR program was authorized. The program provides grants to states and other entities to develop high speed intercity passenger rail corridors and projects. As of January 2013, FRA had awarded grants for 150 high speed intercity passenger rail projects. 
Of these projects, FRA had obligated about $9.2 billion for 9 corridor programs and 57 individual high speed rail projects as of December 2012. According to FRA, by this same date, states and other project sponsors had begun construction on 5 of the high speed rail corridors and 33 of the individual high speed rail projects. By the end of fiscal year 2013, FRA’s goal is to begin construction on a total of 7 corridor and 36 individual high speed rail projects. The fundamental weaknesses of the HSIPR program include not having well-defined goals, a clear strategic vision, and performance measures to track program progress. In March 2009, before the HSIPR program was established, we reported that several principles could help guide the potential federal role in high speed rail. These principles included, among other things, creating well-defined goals based on identified areas of national interest, incorporating performance and accountability for results into funding decisions, and employing the best analytical tools and approaches to emphasize return on investment. Similarly, in June 2010, we reported that FRA’s strategic vision for high speed rail, as outlined in the agency’s April 2009 Vision for High-Speed Rail in America, did not define the goals, stakeholder roles, or objectives for federal involvement in high speed intercity passenger rail and that the agency’s preliminary national rail plan did not include recommendations for future action. Further, although states would be among primary HSIPR grant recipients, many did not have rail plans that would establish strategies and priorities for rail investments or identify the public benefits of such investments. Many of these weaknesses continue. 
For example, in September 2012 the DOT IG reported that FRA's HSIPR goals lacked the thoroughness needed to ensure that grant managers and decision makers, including Congress, could understand them, and that FRA generally lacked performance measures needed to assess the program's progress in achieving its goals as well as complete monitoring mechanisms. According to DOT, FRA has taken action to address some of the issues reported by the DOT IG, including developing a standardized mechanism for collecting and tracking HSIPR grantee performance and compliance metrics, and developing a comprehensive grants management training curriculum. There are also weaknesses in how the HSIPR program is administered, starting with how HSIPR grants are awarded. In March 2011 we reported that, although FRA had applied its established criteria during the eligibility and technical reviews of the HSIPR grant applications, we could not verify whether it applied its final selection criteria because the documented rationales for selecting projects were typically vague. We concluded that without a record that provided insight into why decisions were made, FRA invited skepticism about the overall fairness of its decision making. Other program weaknesses include the lack of guidance issued to HSIPR applicants and FRA's grants administration framework. In March 2009 we recommended the Secretary of Transportation develop guidance and methods for ensuring reliability of ridership and other forecasts used to determine the viability of high speed rail projects. In March 2012, the DOT IG also reported that FRA had established only minimal requirements and guidance on the information HSIPR grant applicants must provide to FRA on project viability, which did not provide enough detail to minimize bias and ensure accuracy in project viability assessments. 
In addition, in September 2012, the DOT IG reported that FRA had issued the policies and procedures for HSIPR grants management several years after the program had been established and that insufficient staffing and training undermined FRA’s efforts to effectively administer and ensure the accountability of HSIPR grant funds once they are awarded. FRA’s Grants Management Manual was not issued until April 2012, almost 3 years after the HSIPR program was authorized. FRA’s monitoring plan, which will, among other things, guide performance and compliance monitoring for the HSIPR program, was not finalized until March 2012. Aside from program weaknesses, we have found that implementing high speed rail projects is difficult. This difficulty could affect achievement of program goals. Our March 2009 report identified some of the challenges in developing and financing high speed rail projects, including securing the up-front investments for such projects and sustaining public and political support and stakeholder consensus. We concluded that whether any high speed rail proposals are eventually built hinges on addressing the funding, public support, and other challenges facing these projects. In March 2011, we recommended that FRA create additional records to document the substantive reasons behind award decisions in future HSIPR funding rounds to better ensure accountability for its use of federal funds. As of November 2012, FRA had enhanced its grant management manual with more explicit requirements for documenting the rationale behind its funding selections. In March 2009, we recommended, among other things, that the Secretary of Transportation develop guidance and methods to improve the reliability and accuracy of ridership, cost, and other forecasts for these systems. 
As of November 2012, FRA said it is implementing this recommendation in conjunction with stakeholders, partners, and researchers, through an iterative process of developing methods and guidance, using them, and then refining them. FRA is also working with a research panel of the Transportation Research Board to develop a handbook that will provide tools to decision makers in such areas as ridership forecasting and service characteristics (e.g., frequency of service). FRA said it expects implementing this recommendation will require from 5 to 10 years. In September 2012, DOT’s IG recommended that before awarding, obligating, and disbursing additional grant funds, FRA should take several actions to establish a comprehensive grants management program with clear program goals and mechanisms to track grantee performance toward those goals. In response to the IG’s recommendations, FRA officials concurred with each recommendation and said they would implement reports, tools and training programs to meet the IG’s recommendations starting in late 2012. Intercity Passenger Rail: Recording Clearer Reasons for Awards Decisions Would Improve Otherwise Good Grantmaking Practices. GAO-11-283. Washington, D.C.: March 10, 2011. High Speed Rail: Learning from Service Start-Ups, Prospects for Increased Industry Investment, and Federal Oversight. GAO-10-625. Washington, D.C.: June 17, 2010. High Speed Passenger Rail: Future Development Will Depend on Addressing Financial and Other Challenges and Establishing a Clear Federal Role. GAO-09-317. Washington, D.C.: March 19, 2009. Susan Fleming, Director, Physical Infrastructure Issues, 202-512-2834, [email protected]. Reduce risk of aviation accidents. By September 30, 2013, reduce aviation fatalities by addressing risk factors both on the ground and in the air. Commercial aviation (i.e. airlines): Reduce fatalities to no more than 7.4 per 100 million people on board. General aviation (i.e. 
private planes): Reduce fatal accident rate per 100,000 flight hours to no more than 1.06. DOT's FAA has worked toward these goals by partnering with the airline industry and other stakeholders through the Commercial Aviation Safety Team (CAST), improving runway safety, shifting toward a risk-based analysis of airborne aviation system information, establishing safety management systems (SMS), renewing the General Aviation Joint Steering Committee (GAJSC), and developing a 5-year strategy for reducing general aviation fatalities. DOT reported that from 2009 through 2011, FAA exceeded its targets for reducing commercial air carrier fatalities, keeping fatality rates well below its 2013 goal. For general aviation fatality rates, however, FAA has not yet achieved its goal, in part due to challenges we have recently discussed. As we reported in April 2012, CAST has contributed to reducing commercial aviation accidents by analyzing past accidents and incidents to identify precursors and contributing factors, and ensuring that efforts to improve safety focus on the most prevalent accident categories. CAST has reduced commercial aviation risks by focusing on areas including controlled flight into terrain, loss of control, and runway incursions. CAST analyzes accident and incident data to identify precipitating conditions and causes, and then formulates an intervention strategy designed to reduce the likelihood of a recurrence. According to CAST, its work—along with new aircraft, regulations, and other activities—reduced the commercial aviation fatal accident rate by 83 percent from 1998 to 2008 and is an important aspect of FAA's efforts to improve aviation safety by sharing and analyzing data. 
However, as we reported in October 2011, for safety at and around airports, including runways, the overall rate of runway incursions (the unauthorized presence of an airplane, vehicle, or person on the runway) at towered airports has trended steadily upward, as has the rate and number of airborne operational errors (errors made by air traffic controllers). It is not clear whether these recent increases in operational errors reflect several changes in reporting policies and procedures at FAA or an increase in actual incidents. We reported in September 2012 that FAA is seeking to further enhance commercial aviation safety by shifting to a data-driven, risk-based safety oversight approach—referred to as SMS. SMS represents a proactive approach to safety and is intended to continually monitor all aspects of aviation operations and collect appropriate data to identify emerging safety problems before they result in death, injury, or significant property damage. SMS implementation is required for FAA and several of its business lines, and the agency is taking steps to require industry implementation. Several challenges remain that may affect FAA's ability to effectively implement SMS. FAA is taking steps to address some of these, but challenges related to data concerns, its capacity to conduct analysis and oversight, and standardization of policies and procedures could negatively affect FAA's efforts to implement SMS in a timely and efficient manner; implementation will also require some skills that agency employees do not have. Addressing these challenges is ever more important with air travel projected to increase over the next 20 years. FAA has embarked on several initiatives to meet its goal of reducing the fatal general aviation accident rate; however, it reported not meeting its target fatality rates in any year from 2009 through 2011. As we reported in October 2012, FAA reported the general aviation fatality rate exceeded the target rate by 7.4 percent for 2011. 
FAA initiatives to improve aviation safety include renewing the GAJSC and implementing the Flight Standards Service’s 5-year strategy for reducing general aviation fatalities. The GAJSC, a government-industry partnership similar to the CAST approach for commercial aviation, focuses on analyzing general aviation accident data to develop effective intervention strategies. We believe that the GAJSC has the potential to contribute to a reduction in general aviation accidents and fatalities over the long term. However, the 5-year strategy has shortcomings that jeopardize its potential for success because, among other things, the strategy lacks performance measures for significant activities. Without a strong performance management structure, FAA will not be able to determine the success or failure of the significant activities that underlie the strategy. Furthermore, there are some limitations in flight activity data and other data that preclude a confident assessment of general aviation safety. For example, FAA’s survey of general aviation operators, on which the agency bases its annual flight-hour estimates, continues to suffer from methodological and conceptual limitations, even with FAA’s efforts to improve it. In October 2012, we recommended that FAA (1) improve measures of general aviation activity by requiring the collection of the number of hours that general aviation aircraft fly, (2) set specific general aviation safety improvement goals—such as targets for fatal accident reductions—for individual industry segments (e.g. personal or corporate operations) using a data-driven, risk management approach and (3) determine whether the programs and activities underlying the 5-year strategy are successful and if additional actions are needed, develop performance measures for each significant program and activity underlying the 5-year strategy. In its comments to our report, FAA reported that it is working toward implementing these recommendations. 
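The general aviation measure is a rate per 100,000 flight hours, which is why the reliability of FAA's flight-hour estimates matters so much to assessing progress. A minimal sketch of the calculation follows; the accident and flight-hour figures are invented for illustration, and the implied 2011 rate simply applies the 7.4 percent overage FAA reported to the 1.06 target.

```python
# Sketch of the general aviation fatal accident rate metric: fatal
# accidents per 100,000 flight hours. The accident and flight-hour
# figures below are invented for illustration.

def fatal_accident_rate(fatal_accidents, flight_hours):
    return fatal_accidents / (flight_hours / 100_000)

TARGET_RATE = 1.06  # FAA's goal: no more than 1.06 per 100,000 hours

# FAA reported the 2011 rate exceeded the target by 7.4 percent:
implied_2011_rate = TARGET_RATE * 1.074  # ~1.14 per 100,000 hours

# Invented example: 260 fatal accidents over 24 million flight hours.
example_rate = fatal_accident_rate(260, 24_000_000)  # ~1.08, above target
met_target = example_rate <= TARGET_RATE
```

Because the denominator comes from FAA's survey-based flight-hour estimates, the methodological limitations noted above feed directly into the reported rate.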
In September 2012, we recommended that FAA develop a system to assess whether SMS meets its goals and objectives by identifying and collecting related data on performance measures. In comments to our report, FAA stated that it is currently involved in activities directed towards the development of safety performance measurement capabilities, including a process and measures for measuring safety performance. This activity is expected to be completed by April 2015. In October 2011, we recommended that FAA develop separate risk-based assessment processes, measures, and performance goals for runway safety incidents involving commercial and general aviation aircraft, and to expand the existing risk-based process for assessing airborne losses of separation. In comments to our report, FAA reported that it is working toward implementing these recommendations. General Aviation Safety: Additional FAA Efforts Could Help Identify and Mitigate Safety Risks. GAO-13-36. Washington D.C.: October 4, 2012. Aviation Safety: Additional FAA Efforts Could Enhance Safety Risk Management. GAO-12-898. Washington D.C.: September 12, 2012. Aviation Safety: FAA Is Taking Steps to Improve Data, but Challenges for Managing Safety Risks Remain. GAO-12-660T. Washington D.C.: April 25, 2012. Aviation Safety: Enhanced Oversight and Improved Availability of Risk- Based Data Could Further Improve Safety. GAO-12-24. Washington D.C.: October 5, 2011. Aviation Safety: Improved Data Quality and Analysis Capabilities Are Needed as FAA Plans a Risk-Based Approach to Safety Oversight. GAO-10-414. Washington D.C.: May 6, 2010. Gerald L. Dillingham, Ph.D., Director, Physical Infrastructure Issues, [email protected], (202) 512-2834. Reduce the rate of roadway fatalities. Reduce the rate of roadway fatalities from 1.26 in 2008 to 1.03 per 100 million vehicle miles traveled by December 31, 2013. 
We have issued a number of reports related to DOT's efforts to reduce highway fatalities that highlight the need for improved performance accountability and data. The number of traffic fatalities decreased from 41,000 in 2000 to fewer than 33,000 in 2010; the fatality rate per 100 million miles traveled also decreased from 1.53 in 2000 to 1.11 in 2010. To help states reduce traffic fatalities, the National Highway Traffic Safety Administration (NHTSA) within DOT provides traffic safety grants to states to, among other things, promote and enforce safety belt use and impaired driving laws and improve traffic safety data systems. While NHTSA has made progress in developing performance measures to help NHTSA and states evaluate the effectiveness of traffic safety programs, we reported in March 2008 that state performance has generally not been tied to receipt of the grants and that improvements in state traffic safety data are needed to support a more performance-based approach to improving traffic safety programs. On July 6, 2012, President Obama signed into law the Moving Ahead for Progress in the 21st Century Act. The act made the surface transportation program framework more performance-based by: (1) establishing national performance goals for the federal-aid highway program in several areas, including goals for the safety of the nation's highways; and (2) requiring the Secretary of Transportation, in consultation with state departments of transportation and others, to establish performance measures linked to national goals, including measures for serious injuries and fatalities on public roads. Finally, the act required states to establish performance targets for those measures and report their progress in achieving planned outcomes through the statewide transportation plans. 
These provisions are consistent with a performance-based planning framework we recommended to Congress in December 2010 and, as they are implemented over the next several years, should help NHTSA and states focus their efforts on key actions needed to improve traffic safety and reduce highway fatalities. While states have implemented projects to improve traffic safety data systems, such as switching to electronic data reporting and adopting data collection forms consistent with national guidelines, enhancements in these systems are still needed to support a performance-based approach to improving traffic safety. In April 2010, we reported that our analysis of traffic records assessments—conducted for states by NHTSA technical teams or contractors at least every 5 years—indicated that the quality of state traffic safety data systems varied across the six data systems maintained by states. Assessments include an evaluation of system quality based on six performance measures. Across all states, we found that vehicle and driver data systems met performance measures 71 percent and 60 percent of the time, respectively, while roadway, crash, citation and adjudication, and injury surveillance data systems met performance measures less than 50 percent of the time. States face resource and coordination challenges in improving traffic safety data systems. For example, custodians of data systems are often located in different state agencies, which may make coordination difficult. In addition, rural and urban areas may face different challenges in improving data systems, such as limited technology options for rural areas or timely processing of large volumes of data in urban areas. 
States we visited have used strategies to overcome these challenges, including establishing an executive-level traffic records coordinating committee (TRCC), in addition to the technical-level committee that states are required to establish to qualify for federal traffic safety grant funding. An executive-level committee could help states address challenges by targeting limited resources and facilitating data sharing. In April 2010, we recommended that the Secretary of Transportation direct the NHTSA Administrator to (1) ensure that traffic records assessments provide an in-depth evaluation that is complete and consistent in addressing all performance measures across all state traffic safety data systems and (2) study and communicate to Congress the value of requiring states to establish an executive-level TRCC in order to qualify for traffic safety data system grant funding. In response to the first recommendation, NHTSA developed a comprehensive approach for assessing the systems and processes that govern the collection, management, and analysis of traffic records data. Core to this approach is the set of questions for conducting assessments published in September 2012 in the Traffic Records Program Assessment Advisory. The Advisory includes standards of evidence to guide state officials in providing the information necessary to answer each assessment question. The assessment now asks a comprehensive, uniform set of questions about data quality performance measures across all state traffic safety data systems. NHTSA launched a pilot program to test the new process in Indiana in November 2012; the pilot was successfully completed in February 2013. As part of the new assessment process, NHTSA will create a database to house data from the new traffic records assessments, conduct research, and identify national trends. NHTSA can then inform states about how the ratings for each of their assessment questions compare with a national average. 
In response to the second recommendation, NHTSA's study examining how executive-level and technical-level TRCCs coordinate traffic records systems management was initiated with the 2012 pilot test of the new traffic records assessment process in Indiana. Data collection will continue through the fiscal year 2015 assessment cycle, at which point 10 states will have been assessed and the data set will be large enough to enable a quality analysis. The data for this study will include the information submitted by the states as well as the ratings received on the comprehensive, uniform set of assessment questions. Specifically, NHTSA plans to examine states' responses to questions about their TRCC management, strategic planning, data integration, and the capabilities of the six core traffic records components. Notable practices demonstrated by effective TRCC organizations—particularly at the executive level—will be highlighted. Statewide Transportation Planning: Opportunities Exist to Transition to Performance-Based Planning and Federal Oversight. GAO-11-77. Washington, D.C.: December 15, 2010. Traffic Safety Data: State Data System Quality Varies and Limited Resources and Coordination Can Inhibit Further Progress. GAO-10-454. Washington, D.C.: April 15, 2010. Traffic Safety Programs: Progress, States' Challenges, and Issues for Reauthorization. GAO-08-990T. Washington, D.C.: July 16, 2008. Traffic Safety: NHTSA's Improved Oversight Could Identify Opportunities to Strengthen Management and Safety in Some States. GAO-08-788. Washington, D.C.: July 14, 2008. Traffic Safety: Improved Reporting and Performance Measures Would Enhance Evaluation of High-Visibility Campaigns. GAO-08-477. Washington, D.C.: April 25, 2008. Traffic Safety: Grants Generally Address Key Safety Issues, Despite State Eligibility and Management Difficulties. GAO-08-398. Washington, D.C.: March 14, 2008. 
Surface Transportation: Restructured Federal Approach Needed for More Focused, Performance-Based, and Sustainable Programs. GAO-08-400. Washington, D.C.: March 6, 2008. Phillip R. Herr, Managing Director, Physical Infrastructure Issues, [email protected], (202) 512-2834. Susan Fleming, Director, Physical Infrastructure Issues, [email protected], (202) 512-2834. Based on our past work, as well as that of the VA IG, we commented on each of VA’s three priority goals for 2012 to 2013: 1. Assist in housing 24,400 additional homeless veterans (12,200 per year) and reduce the number of homeless veterans to 35,000 in 2013, to be measured in the January 2014 Point-In-Time homelessness count. By September 2013, working in conjunction with the U.S. Interagency Council on Homelessness (Interagency Council) and HUD, VA will also assist homeless Veterans in obtaining employment, accessing VA services, and securing permanent supportive housing, with a long-range goal of eliminating homelessness among Veterans by 2015. 2. Improve accuracy and reduce the amount of time it takes to process veterans’ disability benefit claims. By September 30, 2013, reduce the veterans’ disability compensation and pension claims backlog to 40 percent from 60.2 percent while achieving 90 percent rating accuracy, up from 83.8 percent, in pursuit of eliminating the Veterans’ claims backlog (defined as claims pending more than 125 days) by 2015. 3. Improve awareness of VA services and benefits by increasing the timeliness and relevance of on-line information available to veterans, service members and eligible beneficiaries. By September 30, 2013, increase the number of registered eBenefits users from 1.0 million to 2.5 million. For each goal, we also identify our related past reports and provide an update on the status of any open recommendations and matters for congressional consideration that we previously made related to the goal. We also identify a GAO contact for our work related to each goal. 
Assist in housing 24,400 additional homeless veterans (12,200 per year) and reduce the number of homeless veterans to 35,000 in 2013, to be measured in the January 2014 Point-In-Time homelessness count. By September 2013, working in conjunction with the U.S. Interagency Council on Homelessness (Interagency Council) and HUD, VA will also assist homeless Veterans in obtaining employment, accessing VA services, and securing permanent supportive housing, with a long-range goal of eliminating homelessness among veterans by 2015. VA notes that several programs are expected to contribute to the achievement of its priority goal to reduce homelessness among veterans, including the HUD Veterans Affairs Supportive Housing (HUD-VASH) program, Grant and Per Diem program, Domiciliary Care for Homeless Veterans program, and Health Care for Homeless Veterans program. Our past work has identified a number of issues related to these programs. For example, our June 2012 report on the HUD-VASH program—a collaborative initiative between HUD and VA that targets the most vulnerable, most needy, and chronically homeless veterans—states that the program has moved veterans out of homelessness. Specifically, according to VA, as of March 2012, nearly 31,200 veterans lived in HUD-VASH supported housing, which represents about 83 percent of the rental assistance vouchers authorized under the program. In addition, our December 2011 report on homeless women veterans notes that although HUD collects data on homeless women and on homeless veterans, the department does not collect detailed information on homeless women veterans, and neither HUD nor VA captures data on the overall population of homeless women veterans. Further, our report states that HUD and VA lack data on the characteristics and needs of these women on a national, state, and local level. 
Finally, our report notes that absent more complete data on homeless women veterans, VA does not have the information needed to plan services effectively, allocate grants to providers, and track progress toward its overall goal of ending veteran homelessness. Our May 2012 report on the fragmentation, overlap, and duplication among federal homelessness programs also identified issues related to the programs that are expected to contribute to the achievement of this priority goal. For example, our report noted VA was one of eight federal agencies that administered 26 targeted homelessness programs in fiscal year 2011, suggesting fragmentation and some overlap among these programs. More specifically, VA typically operates programs or provides funding for supportive services such as health care, substance abuse treatment, and employment assistance, but also administers programs that provide housing and employment assistance. Similarly, HUD not only administers housing assistance, but also provides funding for mental health care, substance abuse treatment, and employment services. Fragmentation and overlap can lead to inefficient use of resources. Some local service providers told us that managing multiple applications and reporting requirements was burdensome, difficult, and costly. Moreover, according to providers, persons experiencing homelessness have difficulties navigating services that are fragmented across agencies. Further, our report states that limited information exists about the efficiency or effectiveness of targeted homelessness programs because evaluations have not been conducted recently—including for the 13 programs VA administers or co-administers. Finally, our report states that the Interagency Council strategic plan to prevent and end homelessness has served as a useful and necessary first step in increasing agency coordination and focusing attention on ending homelessness; however, the plan lacks key characteristics desirable in a national strategy. 
For example, the plan does not list priorities or milestones and does not discuss resource needs or assign clear roles and responsibilities to federal partners. In May 2012, we recommended that the Interagency Council and the Office of Management and Budget—in conjunction with the Secretaries of HHS, HUD, Labor, and VA—consider examining inefficiencies that may result from overlap and fragmentation in their programs for persons experiencing homelessness. VA agreed with this recommendation. HHS, HUD, Labor, and the Interagency Council did not explicitly agree or disagree. We also recommended that to help prioritize, clarify, and refine efforts to improve coordination across agencies, and improve the efficiency and effectiveness of federal homelessness programs, the Interagency Council, in consultation with its member agencies, should incorporate additional elements into updates to the national strategic plan or other planning and implementation documents to help set priorities, measure results, and ensure accountability. According to the Interagency Council, its fiscal year 2013 report will focus on updates and progress made on the national strategic plan's objectives. The Interagency Council's national strategic plan broadly describes the federal approach to preventing and ending homelessness; however, until the key member agencies fully implement their plans, including setting priorities, measuring progress and results, and holding federal and nonfederal partners accountable, they are at risk of not reaching their goal of ending veteran and chronic homelessness by 2015, and ending homelessness among children, youth, and families by 2020. 
In December 2011, we recommended that in order to help achieve the goal of ending homelessness among veterans, the Secretaries of HUD and VA should collaborate to ensure appropriate data are collected on homeless women veterans, including those with children and those with disabilities, and use these data to strategically plan for services. In concurring with this recommendation, VA stated it had several initiatives already planned or under way to gather information on those homeless women veterans who are in contact with VA, including the development of a more streamlined and comprehensive data collection system. In April 2013, VA stated that it had taken additional actions to inform policy and operational decisions about homeless and at-risk women veterans. For example, VA stated that in 2013 it worked with HUD to ensure that gender-specific data were collected during the 2013 Point-in-Time count of homeless persons. VA added that the results of the 2013 Point-in-Time count will be included in the Annual Homeless Assessment Report to Congress, which will be published later in 2013 and will be used by the department to strategically plan and implement services for all homeless and at-risk veterans, including women veterans. In addition, VA stated that in 2012 it revised the Community Homelessness Assessment, Local Education and Networking Groups survey to capture gender-specific data for homeless veterans to better identify the needs of women veterans and influence service provision. Veteran Homelessness: VA and HUD Are Working to Improve Data on Supportive Housing Program. GAO-12-726. Washington, D.C.: June 26, 2012. Homelessness: Fragmentation and Overlap in Programs Highlight the Need to Identify, Assess, and Reduce Inefficiencies. GAO-12-491. Washington, D.C.: May 10, 2012. Homeless Women Veterans: Actions Needed to Ensure Safe and Appropriate Housing. GAO-12-182. Washington, D.C.: December 23, 2011. 
Homelessness: A Common Vocabulary Could Help Agencies Collaborate and Collect More Consistent Data. GAO-10-702. Washington, D.C.: June 30, 2010. Alicia Puente Cackley, Director, Financial Markets and Community Investment, [email protected], (202) 512-8678. Improve accuracy and reduce the amount of time it takes to process veterans’ disability benefit claims. By September 30, 2013, reduce the veterans’ disability compensation and pension claims backlog to 40 percent from 60.2 percent while achieving 90 percent rating accuracy, up from 83.8 percent, in pursuit of eliminating the veterans’ claims backlog (defined as claims pending more than 125 days) by 2015. As we and other organizations have reported over the last decade, VA has faced challenges improving the accuracy and timeliness of its disability claims process. VA’s disability compensation benefits program has been included in our High Risk List, under “Improving and Modernizing Federal Disability Programs,” since 2003. Our December 2012 report on VA claims processing found that VA’s disability claims backlog—defined as claims awaiting a decision over 125 days—had more than tripled since September 2009. In fact, two-thirds of all disability claims awaiting a decision in August 2012 met VA’s backlog definition. Moreover, the timeliness of disability claims processing over the last several years has worsened: the average length of time to complete a claim increased from 161 days in 2009 to 260 days in 2012. The number of disability claims received is likely to remain high as VA projects that 1 million service members will become veterans over the next 5 years, portending ongoing challenges for VA to meet its goal of processing claims within 125 days by 2015. While our December 2012 report did not look at claims processing accuracy, the VA IG established a benefits inspection program in March 2009 which examines claims processing accuracy at VA regional offices. 
Based on a review of the VA IG's benefits inspection findings across 21 VA regional offices in fiscal year 2012, accuracy rates ranged from 40 to 87 percent per office for a sample of selected types of claims. In December 2012, we reported that the Veterans Benefits Administration (VBA) has a number of ongoing initiatives designed to improve claims processing and help VA meet its timeliness goals, but the impact of some initiatives is uncertain: 
The Fully Developed Claims program, implemented nationally in June 2010, provides priority processing to veterans who submit claims with all relevant private medical evidence. The average processing time for claims involved in the program is 98 days, but veteran participation in the program has been low—only 4 percent of all compensation claims submitted in 2012—minimizing the impact on VA's claims backlog. 
The Claims Organizational Model, which reorganizes claim staff into cross-functional teams, processes claims by complexity, and redesigns mailroom functions, was piloted in 3 regional offices in March 2012 and implemented in all regional offices as of March 2013. 
VBA developed standard medical forms—called Disability Benefits Questionnaires—designed to speed up the claims process by more accurately capturing medical evidence needed from medical providers. Although VBA tracks the number and completeness of questionnaires submitted, VBA is not measuring their impact on processing time. 
In 2010, VBA began to develop the Veterans Benefit Management System (VBMS), an initiative to help streamline the claims process and reduce processing times. According to VA officials, VBMS is intended to convert existing paper-based claims folders into electronic claims folders that will allow VBA employees electronic access to claims and their supporting evidence. Once completed, VBMS will allow veterans, physicians, and other external parties to submit claims and supporting evidence electronically. 
In August 2012, VBA officials told us that VBMS was not ready for national deployment, citing delays in scanning claims folders into VBMS as well as other software performance issues. A recent VA IG report also concluded that VBMS has experienced some performance issues and that the scanning and digitization of claims lacked a detailed plan. However, according to VA, as of December 2012, 18 regional offices had implemented VBMS, and all regional offices were expected to implement VBMS by the end of calendar year 2013. In our December 2012 report, we stated that without a comprehensive plan to strategically manage resources and evaluate the effectiveness of each initiative, VBA risks spending limited resources on initiatives that may not speed up disability claims processes. In response to our December 2012 report, on January 25, 2013, VA published the Strategic Plan to Eliminate the Compensation Claims Backlog. In our December 2012 report, which reviewed the timeliness of VA claims processing, we recommended that the Secretary of Veterans Affairs direct the Veterans Benefits Administration to: 1. Develop improvements for partnering with relevant federal and state military officials to reduce the time it takes to gather military service records for National Guard and Reserve sources. 2. Develop improvements for partnering with Social Security Administration (SSA) officials to reduce the time it takes to gather SSA medical records. 3. Ensure the development of a robust backlog reduction plan for VBA's initiatives that, among other best practice elements, identifies implementation risks and strategies to address them and performance goals that incorporate the impact of individual initiatives on processing timeliness. VA generally concurred with our recommendations and has taken steps to address them. For example, VA stated it has recently initiated several interagency efforts to improve receipt of military service records. 
According to VA, on December 3, 2012, the joint VBA and DOD Disability Claims Reduction Task Force met to begin to evaluate the process to request records, among other issues, with the aim of improving the timeliness of record exchanges between the two agencies. In addition, the National Guard Bureau and VA recently agreed to create a collaboration group that will examine ways to improve the timeliness and completeness of the records submitted in support of VA benefit claims. Furthermore, VA officials stated that VBA staff are currently meeting with SSA on a weekly basis to develop strategies to improve the records acquisition process and piloting a tool with four VA regional offices to provide VA staff with direct electronic access to SSA medical records. We believe these initiatives are steps in the right direction toward improving the timeliness of meeting VA requests for SSA medical records and National Guard and Reserve records. VA agreed with our recommendation to develop a robust backlog plan for VBA's initiatives and, subsequent to our report, published the Strategic Plan to Eliminate the Compensation Claims Backlog, which identifies implementation risks as well as tracks overall performance based on a number of metrics, including processing timeliness. However, this plan does not provide individual performance goals and metrics for all initiatives, which are needed to ensure VA is spending its limited resources on initiatives that are proven to speed up disability claims and appeals processes. High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 14, 2013. Veterans' Disability Benefits: Timely Processing Remains a Daunting Challenge. GAO-13-89. Washington, D.C.: December 21, 2012. VA Disability Compensation: Actions Needed to Address Hurdles Facing Program Modernization. GAO-12-846. Washington, D.C.: September 10, 2012. 
Veterans Disability Benefits: Clearer Information for Veterans and Additional Performance Measures Could Improve Appeal Process. GAO-11-812. Washington, D.C.: September 29, 2011. Veterans' Disability Benefits: Further Evaluation of Ongoing Initiatives Could Help Identify Effective Approaches for Improving Claims Processing. GAO-10-213. Washington, D.C.: January 29, 2010. Daniel Bertoni, Director, Education, Workforce, and Income Security Issues, [email protected], (202) 512-7215. Improve awareness of VA services and benefits by increasing the timeliness and relevance of on-line information available to veterans, service members and eligible beneficiaries. By September 30, 2013, increase the number of registered eBenefits users from 1.0 million to 2.5 million. While we have not specifically looked at VA's eBenefits efforts, our past work has identified challenges VA has faced in its efforts to increase awareness of its services and benefits, which could be applicable to its ability to achieve this goal. In February 2011, we reported that VA has a variety of activities to reach out to and support veterans and service members who may be eligible for VA education benefits, including the posting of information on its website to support those individuals in the process of applying for education benefits. We found that veterans service organizations, school officials, and students receiving VA education benefits had positive feedback for a recent redesign of the GI Bill website that highlighted the three main steps in applying for Post-9/11 GI Bill benefits. However, we also found that VA did not provide links on the GI Bill website to consumer-focused information generated by other entities. In contrast, the Department of Education's College Navigator site aggregated, for example, information on graduation rates, loan default rates, costs of attendance, and available scholarships. 
Moreover, we found that little was known about the effectiveness of VA's education outreach and support because VA did not have outcome-oriented performance measures for these activities. For example, while VA's education program estimates the number of people who view or listen to a particular Post-9/11 GI Bill online, radio, or print advertisement, VA had not determined the extent to which its outreach campaign has been effective in informing or changing the behavior of target audiences. In December 2011, we reported that VA and the Department of Defense had recently developed the eBenefits web portal to provide veterans with customized information on VA benefits and assistance, and how to apply for them, but the portal did not include a direct link to information on enhanced monthly benefits (which increase recipients' monthly disability compensation or pension payments). A few participants in our focus groups—which we conducted with veterans and their family representatives—commented that they had difficulty finding information about enhanced monthly benefits on VA's website. Federal website guidelines recommend that navigation procedures to access online information should be simple and that links should be properly labeled to help users obtain desired results. In December 2012, we reported on the status of VBA's recent efforts to improve disability claims and appeals processing timeliness. In that report, we noted that veterans can learn about the status of their claims in several ways, including the use of eBenefits. However, we did not review veterans' actual use of eBenefits. 
In our February 2011 report, to improve VA's outreach and support for eligible service members and veterans, communication with school officials, and oversight of its education benefit programs, we recommended that the Secretary of Veterans Affairs, among other actions, (1) develop outcome-oriented performance measures for outreach to service members and veterans who are seeking VA education benefits and (2) establish performance measures for the quality of information provided by VA's toll-free hotline and for the timeliness and quality of its Right Now Web service. VA concurred with these recommendations in commenting on a draft of the report. With respect to the first recommendation, VA reported that it had deployed an early communication tool to inform service members and veterans about their eligibility for education benefits and, as of April 2013, reported that it was in the first phase of capturing data on the frequency of visits to dedicated website Uniform Resource Locators. VA anticipated it would be able to establish baseline performance measures by the end of July 2013. With respect to measuring the quality of VA's customer service on its toll-free hotline and its online Right Now Web service, VA has established applicable national performance standards and, in April 2013, reported that the standards had been issued to the field. We have requested to review the performance standards prior to considering this recommendation fully implemented, and are awaiting further status updates on the implementation of the first recommendation. Veterans' Disability Benefits: Timely Processing Remains a Daunting Challenge. GAO-13-89. Washington, D.C.: December 21, 2012. VA Enhanced Monthly Benefits: Recipient Population Is Changing, and Awareness Could Be Improved. GAO-12-153. Washington, D.C.: December 14, 2011. VA Education Benefits: Actions Taken, but Outreach and Oversight Could Be Improved. GAO-11-256. Washington, D.C.: February 28, 2011. 
Daniel Bertoni, Director, Education, Workforce, and Income Security Issues, [email protected], (202) 512-7215. Based on our past work, we commented on each of OPM's five priority goals for 2012 to 2013: 1. Ensure high quality federal employees. By September 30, 2013, increase federal manager satisfaction with applicant quality (as an indicator of hiring quality) from 7.7 to 8.3 on a scale of 1 to 10, while continually improving timeliness, applicant satisfaction, and other hiring process efficiency and quality measures. 2. Improve performance culture in the Goals-Engagement-Accountability-Results (GEAR) pilot agencies to inform the development of government-wide policies. By September 30, 2013, employee responses to the annual Employee Viewpoint Survey in each of the five agencies participating in a performance culture pilot project will increase by 5 percent or greater on the results-oriented culture index and the conditions for employee engagement index, using 2011 survey results as the baseline. 3. Increase health insurance choices for Americans. By October 1, 2013, expand competition within health insurance markets by ensuring participation of at least two multi-state health plans in the State Affordable Insurance Exchanges. 4. Maintain speed of national security background investigations. Through September 30, 2013, maintain a 40 day or less average completion time for the fastest 90 percent of initial national security investigations. 5. Reduce federal retirement processing time. By July 31, 2013, Retirement Services will have reduced its case inventory so that 90 percent of all claims will be adjudicated within 60 days. For each goal, we also identify our related past reports and provide an update on the status of any open recommendations and matters for congressional consideration that we previously made related to the goal. We also identify a GAO contact for our work related to each goal. Ensure high quality federal employees. 
By September 30, 2013, increase federal manager satisfaction with applicant quality (as an indicator of hiring quality) from 7.7 to 8.3 on a scale of 1 to 10, while continually improving timeliness, applicant satisfaction, and other hiring process efficiency and quality measures. We have not specifically reported on OPM’s ability to ensure high-quality federal employees by increasing satisfaction with applicant quality. However, our past work has highlighted efforts OPM has undertaken to recruit and maintain a high-quality workforce. Our past work on OPM’s efforts to improve the federal government’s competitiveness in recruiting and maintaining a high-quality workforce has shown that in 2005, and again in 2008, OPM issued guidance on the use of hiring authorities and flexibilities. As we reported in September 2012, in 2006 OPM developed the Hiring Toolkit to assist agency officials in determining the appropriate hiring flexibilities to use given their specific situations, and in 2008 OPM launched an 80-day hiring model to help speed up the hiring process. Also in 2008, OPM established standardized vacancy announcement templates for common occupations, such as contract specialist and accounting technician positions, in which agencies can insert summary information concerning their specific jobs prior to posting for public announcement. In 2012, OPM issued regulations launching the Pathways program in order to make it easier to recruit and hire students and recent graduates and allow for noncompetitive conversion to permanent positions after meeting certain requirements. If successfully implemented, initiatives such as Pathways could help agencies further close critical skills gaps. We narrowed the scope of the human capital high-risk area in February 2011 to focus on this challenge of closing mission-critical skills gaps; although progress has been made, the area remains on our High Risk List, which we most recently updated in February 2013.
In January 2010, we reported on the use of recruitment, relocation, and retention (3R) incentives at the Food and Drug Administration and the oversight provided by Health and Human Services, and how OPM provides oversight to agency 3R programs. We found that these flexibilities were widely used by agencies, and that retention incentives accounted for the majority of these incentive costs. Federal 3R incentives are among the human capital flexibilities intended to help federal agencies address human capital challenges and to build and maintain a high-performing workforce with essential skills and competencies. According to OPM, the 3R incentives are intended to provide agencies with discretionary authority to use compensation other than base pay to help recruit, relocate, and retain employees in difficult staffing situations. Our review of the steps OPM has taken to help ensure that agencies have effective oversight of their incentive programs found that while OPM provided oversight of such incentives through various mechanisms, including guidance and periodic evaluations and accountability reviews, there are opportunities for improvement. In January 2010, we recommended that the Director of OPM require agencies to incorporate succession planning efforts into the decision process for awarding retention incentives and document this requirement for succession planning in their 3R incentive plans. In January 2011, OPM issued proposed regulations to add succession planning to the list of factors an agency may consider before approving a retention incentive for an employee who would be likely to leave the federal service in the absence of the incentive. OPM stated that specifically listing this factor in the regulations will strengthen the relationship between succession planning and retention incentives. OPM anticipates publishing the final regulations this year. High Risk Series: An Update. GAO-13-283. Washington, D.C.: February 14, 2013.
Human Capital Management: Effectively Implementing Reforms and Closing Critical Skills Gaps are Key to Addressing Federal Workforce Challenges. GAO-12-1023T. Washington, D.C.: September 19, 2012. Human Capital: Continued Opportunities Exist for FDA and OPM to Improve Oversight of Recruitment, Relocation, and Retention Incentives. GAO-10-226. Washington, D.C.: January 22, 2010. Robert Goldenkoff, Director, Strategic Issues, [email protected], 202-512-2757. Yvonne Jones, Director, Strategic Issues, [email protected], 202-512-2717. Improve performance culture in the GEAR pilot agencies to inform the development of government-wide policies. By September 30, 2013, employee responses to the annual Employee Viewpoint Survey in each of the five agencies participating in a performance culture pilot project will increase by 5 percent or greater on the results-oriented culture index and the conditions for employee engagement index, using 2011 survey results as the baseline. We have not specifically reported on improving the performance culture of the five GEAR pilot agencies—OPM, the Departments of Energy, Housing and Urban Development, and Veterans Affairs, and the U.S. Coast Guard. GEAR is an effort to create high-performing organizations that are aligned, accountable, and focused on results. Our past work has highlighted steps that OPM and agencies should take to improve their performance cultures. In March 2003, we reported that effective performance management systems are not merely used for once- or twice-yearly individual expectation setting and rating processes, but are tools to help the organization manage on a day-to-day basis. We identified key practices that create a clear linkage—”line of sight”—between individual performance and organizational success and, thus, transform agency cultures to be more results-oriented, customer-focused, and collaborative in nature.
These key practices are: (1) align individual performance expectations with organizational goals; (2) connect performance expectations to crosscutting goals; (3) provide and routinely use performance information to track organizational priorities; (4) require follow-up actions to address organizational priorities; (5) use competencies to provide a fuller assessment of performance; (6) link pay to individual and organizational performance; (7) make meaningful distinctions in performance; (8) involve employees and stakeholders to gain ownership of performance management systems; and (9) maintain continuity during transitions. As the federal government’s human capital leader, OPM must have the capacity to effectively assist agencies and to successfully lead and implement these important human capital management transformations. In January 2007, we reported that to enhance its capacity to do so, OPM was working to transform its own organization from less of a rulemaker, enforcer, and independent agent to more of a consultant, toolmaker, and strategic partner. We recommended that OPM reexamine its agency-wide skills and competencies in light of its updated strategic management document. OPM implemented this recommendation in 2008 by completing an agencywide competency assessment of all mission critical occupations. As reform initiatives move forward, it is increasingly important for OPM to complete this transformation and clearly demonstrate its capacity to lead and implement such reforms. We currently have no open recommendations or matters for congressional consideration related to this priority goal. Results-Oriented Management: Opportunities Exist for Refining the Oversight and Implementation of the Senior Executive Performance-Based Pay System. GAO-09-82. Washington, D.C.: November 21, 2008. Office of Personnel Management: Key Lessons Learned to Date for Strengthening Capacity to Lead and Implement Human Capital Reforms. GAO-07-90.
Washington, D.C.: January 19, 2007. Results-Oriented Cultures: Creating a Clear Linkage between Individual Performance and Organizational Success. GAO-03-488. Washington, D.C.: March 14, 2003. Robert Goldenkoff, Director, Strategic Issues, [email protected], (202) 512-2757. Yvonne Jones, Director, Strategic Issues, [email protected], (202) 512-2717. Increase health insurance choices for Americans. By October 1, 2013, expand competition within health insurance markets by ensuring participation of at least two multi-state health plans in the State Affordable Insurance Exchanges. Our past work has not specifically focused on the priority goal of ensuring the participation of at least two multi-state health plans in each insurance exchange. However, we know from prior work that OPM had awarded a contract by early 2011 to provide policy and analytical support for this effort, and in March of 2012 it had issued a notice of proposed rulemaking. The achievement of this goal will require OPM to contract with multiple private health insurance issuers and to coordinate closely with HHS, which is partnering with states to assure an operating Affordable Insurance Exchange in each state by January 1, 2014. Our work on OPM’s role overseeing the Federal Employees Health Benefits Program and its role in implementing the high risk pool program under the Patient Protection and Affordable Care Act is illustrative of OPM’s experience in two activities central to the achievement of this goal. Negotiate and contract with health insurance issuers. Through its oversight of the Federal Employees Health Benefits Program, OPM has long been responsible for selecting, contracting with, and regulating hundreds of health insurance issuers that offer health plans to millions of federal employees, dependents, and retirees, as well as negotiating benefits and premium rates, as we reported in December 2002.
OPM will likely leverage this experience and these relationships to contract with issuers to offer plans through state exchanges. Coordinate and collaborate with other agencies. In July 2011, we reported on OPM’s recent collaboration with HHS in implementing the new Pre-Existing Condition Insurance Plan (high risk pool) program required under the Patient Protection and Affordable Care Act. Under an interagency agreement, OPM assists with the administration of the program, including reviewing the performance of the health insurance issuer chosen to offer health plans within the federal program, and overseeing its operations on an ongoing basis. We currently have no open recommendations or matters for congressional consideration related to this priority goal. Patient Protection and Affordable Care Act: Contracts Awarded and Consultants Retained by Federal Departments and Agencies to Assist in Implementing the Act. GAO-11-797R. Washington, D.C.: July 14, 2011. Pre-existing Condition Insurance Plans: Program Features, Early Enrollment and Spending Trends, and Federal Oversight Activities. GAO-11-662. Washington, D.C.: July 27, 2011. Federal Employees’ Health Plans: Premium Growth and OPM’s Role in Negotiating Benefits. GAO-03-236. Washington, D.C.: December 31, 2002. John E. Dicken, Director, Health Care, [email protected], 202-512-7043. Stanley J. Czerwinski, Director, Strategic Issues, [email protected], 202-512-6520. Maintain speed of national security background investigations. Through September 30, 2013, maintain a 40 day or less average completion time for the fastest 90 percent of initial national security investigations. OPM’s priority goal of maintaining a 40 day or less average completion time for the fastest 90 percent of initial national security investigations is related to an area we previously designated as high risk. 
Specifically, in January 2005 we first placed DOD’s security clearance program—which comprises the vast majority of government-wide clearances—on our High Risk List because we identified significant delays in completing security clearances, which sometimes took up to a year to complete. OPM is currently the investigative service provider for the majority of the executive branch, including DOD. We removed the high-risk designation from DOD’s program in February 2011 due to both high-level attention from various executive branch agencies, including DOD, OMB, and the Director of National Intelligence, and improvement in the timeliness of DOD clearances, among other things. For example, we found in January 2011 that DOD met the congressionally directed Intelligence Reform and Terrorism Prevention Act of 2004 goal of 40 days for initial investigations throughout fiscal year 2010. This timeliness measure does not include data on periodic reinvestigations. Timeliness data for investigations is OPM’s responsibility as the investigative service provider for DOD. For example, the fastest 90 percent of DOD initial clearance investigations were processed by OPM in an average of 35 days in fiscal year 2010. In addition, in 2010 the Performance Accountability Council (PAC) reported that OPM was meeting investigation timeliness goals for many of the agencies for which it conducts national security background investigations. However, we have not comprehensively reported on the timeliness statistics for all of the national security background investigations conducted by OPM. Instead, our previous work has focused on the timeliness of DOD clearances because DOD’s program was on our High Risk List. While OPM conducts background investigations for most of the federal government, executive branch agencies conduct other phases in the federal government’s personnel security clearance process.
For example, the requesting agency determines which positions—military, civilian, or private-industry contracts—require access to classified information and, therefore, which people must apply for and undergo a security clearance investigation. OPM, in turn, conducts these investigations using federal investigative standards and OPM internal guidance as criteria for collecting background information on applicants. Adjudicators from the requesting agencies use the information contained in the resulting OPM investigative reports and consider federal guidelines to determine whether an applicant is eligible for a personnel security clearance. While DOD’s security clearance program was on our High Risk List, the executive branch initiated actions to reform the government-wide security clearance process, in which OPM had a role as the investigative service provider. As part of this government-wide reform effort, Executive Order 13467 established a leadership structure by creating the PAC. The order appointed the Deputy Director for Management at OMB as the chair of the council and designated the Director of National Intelligence as the Security Executive Agent and the Director of OPM as the Suitability Executive Agent. The PAC is responsible for holding agencies accountable for the implementation of suitability, security, and, as appropriate, contractor employee fitness processes and procedures. In turn, the PAC issued a Strategic Framework in February 2010, which set forth a mission and strategic goals, performance measures, a communications strategy, roles and responsibilities, and metrics to measure the quality of security clearance investigations and adjudications. Some of the goals and performance measures developed by the PAC were aimed at addressing the timeliness of initial security clearance investigations.
In addition to the timeliness of initial investigations and as the result of our work, members of Congress and federal agencies have expressed concerns about the quality of the background investigations. The leaders of the PAC also committed to measuring the quality of investigations by further developing quality metrics in a memorandum to Congress on May 31, 2010. While OPM’s agency priority goal to measure timeliness is important, it does not capture the competing priority of measuring the quality of the investigations. Finally, the Intelligence Reform and Terrorism Prevention Act required the executive branch to annually report on timeliness of background investigations; however, this requirement expired in 2011, so there is no mechanism to report the timeliness of the end-to-end clearance process—including timeliness of initiation and adjudication phases of the process, the timeliness of investigations that took OPM longer than 40 days to complete, or other security clearance reform-related goals—to congressional oversight committees. In our May 2009 report, we recommended that the Director of OPM direct the Associate Director of OPM’s Federal Investigative Services Division to measure the frequency with which its investigative reports meet federal investigative standards in order to improve the completeness of future investigation documentation. As of March 2013, OPM has not implemented the recommendation to measure how frequently investigative reports meet federal investigative standards. Instead, OPM assesses the quality of investigations based on voluntary reporting from customer agencies. Specifically, OPM tracks investigations that are (1) returned for rework from the requesting agency; (2) identified as deficient using a web-based survey; and (3) identified as deficient through adjudicator calls to OPM’s quality hotline.
In our past work, we noted that the number of investigations returned for rework is not by itself a valid indicator of the quality of investigative work because adjudication officials said they were reluctant to return incomplete investigations in anticipation of delays that would impact timeliness. Further, relying on agencies to voluntarily provide information on investigation quality may not reflect the quality of OPM’s total investigation workload. One of OPM’s customer agencies, DOD, developed and implemented a tool known as RAISE to monitor the quality of investigations completed by OPM. However, OPM does not use DOD’s tool. According to an OPM official, OPM is working through the PAC to decide how the executive branch will measure quality government-wide. While the PAC considered using DOD’s RAISE tool, among others, according to the OPM official, the council opted to develop another tool that better captures quality. Further, the OPM official stated OPM’s intent to implement that tool once it is developed by the PAC, but did not provide an estimated timeframe for development and implementation. Our prior work noted that in May 2010, leaders of the reform effort provided congressional members with metrics assessing quality and other aspects of the clearance process. According to officials from one of the PAC’s working groups, these metrics were communicated to executive branch agencies in June 2010. RAISE was one tool the reform team members planned to use for measuring quality. Personnel Security Clearances: Continuing Leadership and Attention Can Enhance Momentum Gained from Reform Effort. GAO-12-815T. Washington, D.C.: June 21, 2012. Background Investigations: Office of Personnel Management Needs to Improve Transparency of Its Pricing and Seek Cost Savings. GAO-12-197. Washington, D.C.: February 28, 2012. High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011.
Personnel Security Clearances: Progress Has Been Made to Improve Timeliness but Continued Oversight Is Needed to Sustain Momentum. GAO-11-65. Washington, D.C.: November 19, 2010. Personnel Security Clearances: An Outcome-Focused Strategy Is Needed to Guide Implementation of the Reformed Clearance Process. GAO-09-488. Washington, D.C.: May 19, 2009. DOD Personnel Clearances: Comprehensive Timeliness Reporting, Complete Clearance Documentation, and Quality Measures Are Needed to Further Improve the Clearance Process. GAO-09-400. Washington, D.C.: May 19, 2009. Brenda S. Farrell, Director, Defense Capabilities and Management, [email protected], 202-512-3604. Reduce federal retirement processing time. By July 31, 2013, Retirement Services will have reduced its case inventory so that 90 percent of all claims will be adjudicated within 60 days. OPM’s efforts to reduce federal retirement processing time have included attempts over 2 decades to modernize its retirement processing system by automating paper-based processes and replacing antiquated information systems. However, these efforts have been unsuccessful due to weaknesses in key management practices. Our previous reviews have identified weaknesses in project management, risk management, organizational change management, system testing, cost estimating, and progress reporting. Specifically, in February 2005, we made recommendations to address weaknesses in the following areas:

Project management: OPM had defined major components of its retirement modernization effort, such as data conversion of paper files and development of electronic processes for capture and storage of data. However, it had not identified the dependencies among these efforts, increasing the risk that delays in one activity could have unforeseen impacts on the progress of others.

Risk management: OPM did not have a process for identifying and tracking project risks and mitigation strategies on a regular basis.
Thus, OPM lacked a mechanism to address potential problems that could adversely impact the cost, schedule, and quality of the modernization effort.

Organizational change management: OPM had not adequately prepared its staff for changes to job responsibilities resulting from the modernization by developing a detailed transition plan. The absence of such a plan could lead to confusion about roles and responsibilities and hinder effective system implementation.

In January 2008, as OPM was on the verge of deploying an automated retirement processing system, we reported deficiencies in management capabilities, and made recommendations to address them:

Testing: The results of tests 1 month prior to the deployment of a major system component revealed that it had not performed as intended. These defects, along with a compressed testing schedule, increased the risk that the retirement processing system would not work as intended upon deployment.

Cost estimating: The cost estimate OPM developed for the modernization effort was not fully reliable. This meant that the agency did not have a sound basis for formulating budgets or developing a program baseline.

Progress reporting: The baseline against which OPM was measuring the progress of the program did not reflect the full scope of the program; this increased the risk that variances from planned performance would not be detected.

In April 2009, we reported that OPM continued to have deficiencies in its cost estimating, progress reporting, and testing practices and we made recommendations to address these deficiencies, as well as additional weaknesses in the planning and oversight of the modernization effort. OPM agreed with these recommendations and began to address them, but the agency cancelled its most recent large-scale retirement modernization effort in February 2011.
In November 2011, agency officials, including the Chief Information Officer, Chief Operating Officer, and Associate Director for Retirement Services, told us that OPM does not plan to initiate another large-scale effort to modernize the retirement process. Rather, the officials said the agency intends to take targeted steps to improve retirement processing. More recently, in January 2012, OPM released a new plan to improve retirement processing that aims at targeted, incremental improvements rather than a large-scale modernization. Under this plan, the agency expects to eliminate its retirement processing backlog by July 2013 and accurately process 90 percent of its cases within 60 days. To meet this goal, OPM reported that it plans to hire and train 76 new staff to address retirement claims; establish higher production standards and identify potential retirement process improvements; and work with other agencies to improve the accuracy and completeness of the data they provide to OPM for use in retirement processing. However, as we have previously noted in February 2012, the plan does not address improving or eliminating the legacy information systems that currently support retirement processing. Although we have not assessed OPM’s actions or progress toward fulfilling its January 2012 plan, Performance.gov was updated to include information about the agency’s progress in December 2012. For example, the agency reported that it had met its targets for hiring new staff, as well as improving the accuracy and completeness of retirement data other agencies provide to OPM. We currently have no open recommendations or matters for congressional consideration related to this priority goal. OPM Retirement Modernization: Progress Has Been Hindered by Longstanding Information Technology Weaknesses. GAO-12-430T. Washington, D.C.: February 1, 2012. OPM Retirement Modernization: Longstanding Information Technology Management Weaknesses Need to Be Addressed. GAO-12-226T. 
Washington, D.C.: November 15, 2011. Office of Personnel Management: Retirement Modernization Planning and Management Shortcomings Need to Be Addressed. GAO-09-529. Washington, D.C.: April 21, 2009. Office of Personnel Management: Improvements Needed to Ensure Successful Retirement Systems Modernization. GAO-08-345. Washington, D.C.: January 31, 2008. Office of Personnel Management: Retirement Systems Modernization Program Faces Numerous Challenges. GAO-05-237. Washington, D.C.: February 28, 2005. Valerie C. Melvin, Director, Information Management and Technology Resources Issues, [email protected], (202) 512-6304. 1. Although our report included information about VA’s Strategic Plan to Eliminate the Compensation Claims Backlog in a subsequent paragraph, we revised the cited paragraph to note that VA, in response to our December 2012 report, published the plan on January 25, 2013. 2. We revised the cited paragraph to focus on VA’s Strategic Plan to Eliminate the Compensation Claims Backlog. However, as VA acknowledges in its comments, the plan does not provide individual performance goals and metrics for all initiatives. We continue to believe that without performance goals and measures clearly aligned with each of its initiatives, VA lacks assurance that it is spending its limited resources on proven methods to speed up disability claims and appeals processes. In addition to the above contact, Elizabeth Curda (Assistant Director) and Benjamin T. Licht supervised this review and the development of the resulting report. Virginia Chanley, Karin Fangman, Patricia Norris, Daniel Ramsey, and Dan Webb made significant contributions to this report. Robert Gebhart, Donna Miller, Jessica Nierenberg, and Ulyana Panchishin also made key contributions.
GAO’s work has repeatedly shown that federal agencies must coordinate better to achieve common outcomes. The act established a more crosscutting and integrated approach to achieving results and improving performance, including a requirement that agencies identified by OMB establish APGs. The act directs GAO to review its implementation at several junctures; this report is part of a series doing so. This report (1) examines the extent to which 24 agencies identified by OMB implemented selected requirements related to 102 APGs, and (2) comments on the 21 APGs of five selected agencies, based on prior GAO and IG work, including the status of relevant open recommendations. To address these objectives, GAO reviewed the act’s requirements for APGs, OMB guidance, APG information from Performance.gov and related agency documents; and interviewed OMB officials. GAO selected DHS, HUD, DOT, VA, and OPM for their variety of APG program types and linkage to CAP goals. For each agency, GAO reviewed its past work, as well as that of IGs, related to the APGs and updated the status of open recommendations. For 102 agency priority goals (APGs) for 2012 to 2013 that GAO reviewed, agencies implemented three GPRA Modernization Act of 2010 (the act) requirements. Agencies identified (1) a target level of performance within a 2-year time frame; (2) how their APGs contribute to their strategic goals; and (3) an agency official responsible for achieving each APG. These represent important accomplishments, but information about other requirements is incomplete: Agencies did not fully explain the relationship between APGs and crosscutting efforts. The act directs agencies to identify federal organizations, programs, and activities that contribute to each APG. Agencies identified internal contributors to their APGs, but did not identify external contributors for 34 of 102 APGs. 
In some cases the APGs appeared to be internally focused; however, in others GAO's work has shown there are external contributors, but none were listed. In addition, the act requires agencies to identify how, if at all, an APG contributes to any cross-agency priority (CAP) goals set by the Office of Management and Budget (OMB). Although 29 of 102 APGs appeared to support a CAP goal, only two described the link. When agencies do not identify external contributors or links to crosscutting efforts, it is unclear whether agencies are coordinating to limit overlap and duplication. Most APGs had performance measures, but many lacked interim targets. The act requires agencies to develop quarterly targets for APGs if they provide data of significant value at a reasonable level of burden. However, OMB's guidance does not fully address this. Without interim targets when appropriate, agencies cannot demonstrate that they are comparing actual results against planned performance on a sufficiently frequent basis to address performance issues as they arise. Agencies did not identify milestones with completion dates for many APGs. The act requires agencies to develop and publish milestones--scheduled events for completing planned actions--for their APGs. However, OMB's guidance does not direct agencies to provide specific completion dates for their milestones. For 39 of 102 APGs, agencies did not provide milestones with clear completion dates for the next quarter or the remainder of the goal period. Without milestones, agencies are unable to demonstrate that they have properly planned for the actions needed to accomplish their goals and are tracking progress. Most agencies did not describe how APGs reflect congressional input. The act directs agencies to describe for each APG how input from consultations with Congress was incorporated. However, only one agency provided a description. 
Without transparency regarding congressional input, there is less assurance that meaningful consultations with Congress are occurring. GAO commented on all 21 of the APGs from the Departments of Homeland Security (DHS), Housing and Urban Development (HUD), Transportation (DOT), and Veterans Affairs (VA), and the Office of Personnel Management (OPM), based on past GAO and inspectors general (IG) work. The most frequent theme in the comments is that agencies continue to face the long-standing challenge of measuring performance and collecting accurate performance data. GAO makes recommendations to OMB to improve APG implementation by revising its guidance to better reflect interim target, milestone, and CAP goal alignment requirements; and ensure that agencies provide complete information about external contributors to their APGs and describe congressional input on APG development. OMB staff agreed with these recommendations.
The legal requirements of CPFA, particularly the required level of CPFA shipped on U.S.-flag vessels, have changed over time. Figure 1 shows key legislation related to CPFA. USAID, USDA, and MARAD are the primary agencies involved in CPFA. USAID administers Title II of the Food for Peace Act, which responds to emergency needs such as disasters and crises, and targets the underlying causes of hunger and malnutrition through development food assistance programs. In fiscal year 2014, USAID provided an estimated 1 million metric tons of food aid valued at more than $1.3 billion. USDA’s Foreign Agricultural Service (FAS) administers other food aid programs. In particular, the Food for Progress program responds to non-emergency food aid situations by supporting agricultural value chain development, expanding revenue and production capacity, and increasing incomes in food-insecure countries. The McGovern-Dole International Food for Education and Child Nutrition program responds to nonemergency food aid needs by supporting education and nutrition for schoolchildren, particularly girls, expectant mothers, and infants. In fiscal year 2014, USDA provided nearly 192,000 metric tons of food aid, valued at more than $127 million, for Food for Progress; and more than 70,000 metric tons of food aid, valued at more than $164 million, for McGovern-Dole. USDA’s Farm Service Agency serves as the buying agent for all U.S. food aid programs and extends invitations for bids to prospective food commodities sellers and providers of freight services for commodity delivery to overseas ports. The Farm Service Agency’s Kansas City Commodity Office (KCCO) is responsible for procuring food commodities. MARAD is responsible for monitoring federal agencies’ implementation of cargo preference laws, including CPFA. In addition to monitoring compliance, MARAD establishes guideline rates to determine whether U.S.-flag shipping rates are fair and reasonable. 
See appendix III for details on MARAD’s “fair and reasonable” determinations. DOD and the USCG also have some involvement in CPFA, through various activities detailed below. DOD has programs that involve U.S.-flag vessels that compete to ship food aid under CPFA. For example, the Voluntary Intermodal Sealift Agreement (VISA) is a partnership between the U.S. government and the maritime industry to provide DOD with “assured access” to commercial sealift and intermodal capacity to support the emergency deployment and sustainment of U.S. military forces. During times of war or national emergency, DOD will use commercial sealift capacity, to the extent it is available, to meet ocean transportation requirements. This commercial sealift capacity includes U.S.- and foreign-flag vessels or intermodal capacity to support DOD’s needs. In the event voluntary capacity does not meet DOD contingency requirements, DOD will activate VISA as necessary. VISA participants have committed vessels or intermodal capacity to support DOD contingency requirements during the various activation stages of VISA and in return are afforded priority to meet DOD peacetime and contingency sealift requirements. Vessels participating in the Maritime Security Program (MSP) are required to be enrolled in VISA. MSP is intended to guarantee that certain kinds of militarily useful ships and their crews will be available to DOD in a military contingency. Vessels in MSP will be activated through the VISA program. Currently, MSP provides direct payment of up to $3.1 million per year for up to 60 militarily useful U.S.-flag vessels participating in international trade to support DOD. DOT determines the commercial viability and DOD determines the military usefulness of vessels that seek participation in MSP. Guidance on military usefulness is being updated and is expected to be issued by September 2015. 
According to DOD officials, the criteria for military usefulness include ship speed, deck strength, and container cargo carriage capacity. DOD, through the United States Transportation Command, and DOT, through MARAD, maintain and operate a fleet of vessels owned by the federal government to meet the logistic needs of the military services that cannot be met by existing commercial service. As of July 2015, the reserve sealift fleet is composed of 61 vessels, 46 of which are MARAD-owned and form part of the Ready Reserve Force, and 15 of which are DOD-owned vessels in the Military Sealift Command’s Surge Sealift program. DOD notifies the Military Sealift Command and MARAD when it directs activation of vessels in the reserve sealift fleet for contingency operations, exercises, training and testing, and other defense purposes when commercial sealift is not available or suitable. Such vessels must be fully operational with a complete crew within their assigned readiness status, which varies from 5 to 10 days. Commercial U.S. ship managers provide systems maintenance, equipment repairs, logistics support, activation, manning, and operations management by contract. U.S. mariners are necessary to crew U.S.-flag commercial vessels providing sealift capabilities for DOD needs, as well as the reserve sealift fleet. Crewing of such vessels is a voluntary system for mariners with the necessary qualifications. Mariner qualifications for crewing the reserve sealift fleet include having a USCG merchant mariner credential with the necessary national and international endorsements, which include Standards of Training, Certification and Watchkeeping (STCW) endorsements and an endorsement for unlimited tonnage for deck officer positions, including master, chief mate, and officer in charge of a navigational watch, and unlimited horsepower for engineer positions. 
In addition, mariners sailing onboard such vessels, among other things, are required to take specific DOD-approved training matching their types of vessels, cargo, mission requirements, and areas of operation, according to MARAD. The specialized training includes topics such as physical security, antiterrorism, ship survivability, Navy communication systems, naval operations, engineering, and logistics. This training is typically provided by Navy trainers, as well as by union and private trainers. Currently, U.S. maritime labor unions have collective bargaining agreements with vessel operators under contract to the government to crew the reserve sealift fleet, as needed. USCG is responsible for the credentialing of mariners and maintains records on all mariners who hold valid merchant mariner credentials, including data on mariners who may serve on U.S.-flag vessels that support DOD during times of war or national emergencies, among which are vessels that compete to ship food aid under CPFA. The Navy’s Strategic Sealift Officer Program maintains within the Reserve Component of the U.S. Navy a cadre of strategic sealift officers to support national defense sealift requirements and capabilities. Strategic sealift officers can be recalled to active military duty to fill officer positions aboard the reserve sealift fleet if a shortage of qualified civilian mariners exists. However, many of the Strategic Sealift Officers may already be employed as civilian mariners onboard commercial vessels. The Navy and MARAD have agreed that Strategic Sealift Officers may be used to fill officer positions aboard the reserve sealift fleet after all reasonable means to obtain other qualified civilian mariners have been exhausted. USAID and USDA rely on implementing partners to deliver food aid to beneficiaries for emergency and nonemergency purposes. Implementing partners submit a food order proposal designed to meet program objectives of USAID’s and USDA’s food aid programs. 
USAID and USDA officials review the order to ensure its suitability for the program and country area with regard to the quantity and type of commodity requested. Once approved, the commodity request for food aid is forwarded to KCCO, which collects commodity orders with similar delivery dates for placement on a solicitation. KCCO then issues a solicitation for commodity vendors to offer their products for sale to USDA. Concurrently, administering agencies, working with implementing partners, issue a solicitation with specific freight tender terms and conditions for ocean freight services to deliver these commodities to overseas destinations. For all U.S. in-kind food commodities except USAID’s bulk food aid, ocean carriers and commodity vendors submit offers electronically through the Web Based Supply Chain Management system (WBSCM). KCCO evaluates commodity bids and freight offers according to lowest landed cost (the cost of the commodity plus transportation charges), through WBSCM. USAID and USDA review the ocean freight offers to identify programmatic issues, such as ensuring that offers meet the tenders’ terms and conditions, that rates are consistent with fair market prices, and that the offers comply with CPFA requirements. USAID and USDA coordinate with KCCO to recommend award of commodity and transportation contracts, and advise the implementing partners to enter into such contracts. U.S.-flag vessels charge higher shipping rates than foreign-flag vessels largely because of their higher operating costs. According to a MARAD study, U.S.-flag vessels face significantly higher costs, including crew, maintenance and repair, insurance, and overhead costs. For 2010, MARAD found that the average U.S.-flag vessel operating cost is roughly 2.7 times that of its foreign-flag counterpart. 
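The lowest-landed-cost evaluation described above, in which KCCO pairs commodity bids with freight offers and selects the cheapest combination, can be sketched as follows. The vendors, carriers, and dollar figures are hypothetical; the actual WBSCM evaluation weighs additional tender terms and CPFA constraints.

```python
from itertools import product

def lowest_landed_cost(commodity_bids, freight_offers):
    """Pick the commodity bid / freight offer pair with the lowest
    combined (landed) cost. Costs are dollars per metric ton."""
    return min(product(commodity_bids, freight_offers),
               key=lambda pair: pair[0]["cost"] + pair[1]["cost"])

# Hypothetical bids and offers for one solicitation line item.
commodity_bids = [{"vendor": "A", "cost": 400.0},
                  {"vendor": "B", "cost": 385.0}]
freight_offers = [{"carrier": "US-flag", "cost": 155.0},
                  {"carrier": "foreign-flag", "cost": 95.0}]

bid, offer = lowest_landed_cost(commodity_bids, freight_offers)
print(bid["vendor"], offer["carrier"], bid["cost"] + offer["cost"])
# B foreign-flag 480.0
```

In practice the award must also honor the U.S.-flag minimum, which is why the winning combination is not always the cheapest one, as the cost analysis later in this report reflects.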
MARAD also found that crew cost, the largest component of U.S.-flag vessels’ operating cost, was about 5.3 times higher than that of foreign-flag vessels. While crew cost accounted for about 70 percent of U.S.-flag vessel operating cost, it accounted for about 35 percent for the foreign-flag vessels. National Security Directive 28 directs DOD to determine the requirements for sealift, among other things, and DOT to determine whether adequate manpower is available to meet such requirements. According to DOD officials, DOD has determined the number of vessels, including those in the reserve sealift fleet, that would be required to meet its needs under different contingency scenarios and communicated that to MARAD. DOD officials told us that the number of vessels—both commercial and those in the reserve sealift fleet—required to meet sealift capability for DOD needs varies by contingency. However, DOD’s most serious scenario requires a full and prolonged—a period longer than 6 months—activation of the reserve sealift fleet as well as the use of commercial vessels. MARAD analyzes USCG data and vessel information, and talks to U.S. maritime labor unions, to estimate the number of U.S. mariners actively sailing and to determine whether sufficient U.S. mariners exist to crew the entire reserve sealift fleet as well as maintain commercial operations. CPFA requirements increased the cost of shipping food aid for USAID and USDA by about 23 percent, or $107 million, over what it would have been had CPFA requirements not been applied from April 2011 through fiscal year 2014. USDA pays higher shipping rates than USAID partly because of the different application of the CPFA requirements between the two agencies. 
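MARAD’s operating-cost figures above are internally consistent, which a quick arithmetic check shows. This sketch treats the study’s “2.7 times” and “5.3 times” as simple multipliers of foreign-flag costs, an assumption based on the rounded figures reported.

```python
# Normalize a foreign-flag vessel's operating cost to 1.0.
foreign_total = 1.0
foreign_crew = 0.35 * foreign_total  # crew ~35% of foreign-flag cost

us_total = 2.7 * foreign_total       # U.S.-flag cost ~2.7x overall
us_crew = 5.3 * foreign_crew         # U.S. crew cost ~5.3x foreign's

# Implied U.S. crew share of operating cost: 1.855 / 2.7 ~ 0.69,
# i.e., about 70 percent, matching the MARAD study.
us_crew_share = us_crew / us_total
print(round(us_crew_share, 2))
```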
Pursuant to a court order following a lawsuit filed against USDA, USDA must measure compliance with cargo preference laws for Food for Progress and Section 416(b) programs on a country-by-country basis to the extent practicable unless MARAD revises cargo preference regulations or policy to allow a different method for defining geographic area, or if USDA determines that a change in method is necessary following good faith negotiations on the matter with MARAD. The country-by-country basis is a narrower interpretation of the geographic area requirement associated with CPFA than what USAID applies. Following the July 2012 reduction in the minimum percentage of food aid to be carried on U.S.-flag vessels from 75 percent to 50 percent, USAID was able to substantially increase the proportion of food aid awarded to foreign-flag vessels, helping to reduce its average shipping rate. In contrast, USDA was only able to increase the proportion of food aid awarded to foreign-flag vessels by a relatively small amount, such that it utilized foreign-flag vessels far below the 50 percent allowed by the 2012 law, and its average shipping rate did not decrease. USAID and USDA continue to differ in how they implement CPFA, and they, together with MARAD, have not fully updated guidance for or agreed on a consistent method for agencies to implement CPFA based on geographic area. The cargo preference requirements for food aid increased the total cost of shipping food aid (see table 1). We found that CPFA requirements increased the cost of shipping food aid by 23 percent, or $107 million. This increase covers all of USDA’s food aid purchases and USAID’s purchases of packaged food aid from April 2011 through fiscal year 2014. 
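The dollar and percentage figures above imply a baseline: a $107 million increase equal to 23 percent of what shipping would have cost without CPFA points to a no-CPFA baseline of roughly $465 million. A sketch of that back-of-the-envelope arithmetic (illustrative only; the report’s underlying figures are more precise than these rounded inputs):

```python
extra_cost = 107.0     # added shipping cost attributed to CPFA, $ millions
pct_increase = 0.23    # 23 percent over the no-CPFA baseline

# Implied cost of shipping had CPFA not applied, and the actual cost.
baseline = extra_cost / pct_increase
actual = baseline + extra_cost

print(round(baseline), round(actual))  # 465 572
```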
The extra cost to meet the CPFA requirements was $45 million for USAID’s packaged food aid, 16 percent higher than what USAID would have paid if the CPFA requirements were not applied for April 2011 through fiscal year 2014. For USDA’s food aid, the extra cost was $62 million, or 36 percent higher. The Food Security Act of 1985 raised the CPFA requirement from 50 percent to 75 percent and required DOT to reimburse USAID and USDA for the ocean freight cost associated with the additional 25 percent requirement, and for the portion of the freight cost that exceeded 20 percent of total commodity and freight cost (see table 2). These two reimbursements—Ocean Freight Differential (OFD) and Twenty Percent Excess Freight (TPEF)—ranged from around $50 million to over $100 million a year from fiscal years 2010 to 2012. Agencies used the reimbursement to fund additional food aid programs. After the CPFA requirement was lowered in July 2012, USAID and USDA still incurred the extra cost to meet the requirements, but they no longer received any reimbursement. According to a USDA official, USDA funds about three fewer grant agreements per year since the reimbursements stopped. From fiscal years 2009 through 2012, USDA signed an average of 35 new grant agreements a year amounting to around $388 million a year. For fiscal years 2013 and 2014, USDA signed an average of about 20 new grant agreements a year amounting to around $313 million a year. USAID and USDA use different interpretations of how to implement CPFA requirements, which contributed to the substantial differences in shipping rates between them. The Cargo Preference Act of 1954 specified that at least 50 percent of the gross tonnage of U.S. 
food aid commodities be shipped on U.S.-flag vessels “in a manner that will ensure a fair and reasonable participation of commercial vessels of the United States in those cargoes by geographic areas.” However, neither this act, subsequent laws modifying the CPFA minimum percentage requirement, nor the cargo preference regulations promulgated by MARAD define geographic area. In 1998, the United States District Court for the District of Columbia ordered USDA to measure compliance with cargo preference laws for Food for Progress and Section 416(b) programs on a country-by-country basis to the extent practicable unless MARAD revises cargo preference regulations or policy to allow a different method for defining geographic area, or if USDA determines that a change in method is necessary following good faith negotiations on the matter with MARAD. Thus, USDA is required to meet the minimum percentage of food aid carried on U.S.-flag vessels by individual country and for each of its food assistance programs, which are Food for Progress and McGovern-Dole, regardless of the price of U.S. shipping, according to USDA officials. USAID, however, is not bound by the order as it was not a party to the litigation. Instead, USAID interprets the CPFA requirement in a manner that gives it substantially more flexibility. It defines geographic area on a global basis for its packaged food aid. For bulk food aid, USAID uses a modified country basis under which it can broaden the interpretation of geographic area to the regional level when it determines that there is limited availability of U.S.-flag vessels for a particular route. For example, USAID defines the region of West Africa, and not individual countries in West Africa, as one geographic area for bulk food aid, giving it greater flexibility and allowing it to better manage its limited resources. 
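The practical difference between the two interpretations can be illustrated with hypothetical tonnages: the same shipments can satisfy a 50 percent U.S.-flag minimum measured over one pooled geographic area (as USAID does for packaged food aid) while failing it measured country by country (as the court order requires of USDA). All countries, tonnages, and flags below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical shipments: (destination country, metric tons, flag)
shipments = [
    ("Ethiopia", 10_000, "US"),
    ("Ethiopia", 8_000, "foreign"),
    ("Kenya", 4_000, "US"),
    ("Kenya", 1_000, "foreign"),
    ("Haiti", 3_000, "foreign"),  # single shipment, foreign-flag
]

def us_share(rows):
    """U.S.-flag share of total tonnage for a set of shipments."""
    total = sum(tons for _, tons, _ in rows)
    us = sum(tons for _, tons, flag in rows if flag == "US")
    return us / total

# Global basis (one pooled geographic area): 14,000 of 26,000 tons.
print(us_share(shipments) >= 0.5)  # True

# Country-by-country basis: every destination must meet the minimum,
# so the single foreign-flag shipment to Haiti causes a failure.
by_country = defaultdict(list)
for row in shipments:
    by_country[row[0]].append(row)
print(all(us_share(rows) >= 0.5 for rows in by_country.values()))  # False
```

This is why, as discussed below, USDA effectively has no choice but a U.S.-flag vessel when a country receives only one shipment in a year.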
USDA shipped a lower percentage of food aid on foreign-flag vessels than USAID, and its percentage on U.S.-flag vessels was higher than the minimum requirements both before and after the 2012 changes in the CPFA requirements. In addition, it did not reduce the percentage on U.S.-flag vessels as much as USAID did after the change in CPFA requirements in July 2012 (see fig. 2). We analyzed data on USAID’s and USDA’s food aid shipments from fiscal years 2009 through 2014 and found that USAID shipped an average of 82 percent of food aid on U.S.-flag vessels before the change and 54 percent after the change. In contrast, USDA shipped an average of 89 percent of food aid on U.S.-flag vessels before the change and 76 percent after the change. According to USDA officials, USDA ships a high percentage of its food aid on U.S.-flag vessels because it has to meet the minimum percentage shipped on U.S.-flag vessels by country, and its shipments are generally too small to be split among multiple vessels. Our analysis found that U.S.-flag vessels carried at least 50 percent of the commodities to 20 out of the 21 countries USDA sent food aid to in 2014 (see fig. 3). In contrast, U.S.-flag vessels carried over 50 percent of the commodities to only 19 out of 31 countries USAID sent food aid to in 2014. When considering the vessel type separately, we found that U.S.-flag vessels carried 100 percent of the food aid shipped on either bulk or liner vessels for 9 out of the 21 countries that USDA sent food aid to in 2014. We examined the countries with 100 percent USDA food aid shipments on U.S.-flag vessels and found all of them received only one shipment of food aid in that year for the particular vessel type. USDA had no choice but to use U.S.-flag vessels when it had only one shipment to a country in a given year because of the way it is legally required to interpret geographic area. 
For example, in 2014, USDA shipped 100 percent of its food aid to the Dominican Republic, which consisted of a single shipment, using U.S.-flag vessels. When USDA had multiple shipments to a country, such as to Ethiopia in 2014, it was able to send some food on foreign-flag vessels. For USAID, U.S.-flag vessels, either bulk or liner, carried 100 percent of the food aid for 9 out of the 31 countries USAID sent food aid to in 2014. We found that after the changes in the CPFA requirements, the average shipping rate for USAID decreased by around 9 percent, or $21 per metric ton, after controlling for other factors. The shipping rate decreased slightly for USDA, though the decrease is not statistically significant. Since a variety of factors in addition to the changes in the CPFA requirements may affect shipping rates, we developed multivariate regression models that control for other factors that may also affect shipping rates to assess the likely effect of the changes. For detailed discussion of the regression methodology and results, see appendix II. A higher proportion of food aid awarded to foreign-flag vessels and the decrease in shipping rates on foreign-flag vessels likely contributed to the lower shipping rates for USAID after the CPFA requirement change in 2012. Foreign-flag vessels on average charge lower shipping rates than U.S.-flag vessels (see table 3). From April 2011 through fiscal year 2014, we found that U.S.-flag vessels charged $61 per metric ton more than foreign-flag vessels for packaged food aid and $55 per metric ton more for bulk food aid. After the CPFA requirement change, foreign-flag vessels participated more in food aid solicitations. Our estimates using statistical modeling to control for various factors show that the number of bids received for each solicitation increased by three after the 2012 change in the CPFA requirements and that all of the increase came from foreign-flag vessels. 
According to USAID officials, after the CPFA requirement change, they have received more foreign-flag bids for some routes previously dominated by U.S.-flag vessels. Results from our regression model also indicate that the shipping rate on foreign-flag vessels decreased by 9 percent for USAID and 7 percent for USDA since the CPFA requirement change in 2012. USAID was able to award more food aid shipments to lower-priced foreign-flag vessels, which led to a statistically significant decrease in its overall shipping rates. On the other hand, USDA was able to increase the proportion awarded to foreign-flag vessels by only a relatively small amount, and we did not find a statistically significant decrease in its overall shipping rates since the CPFA requirement change in 2012. MARAD, USAID, and USDA do not have updated guidance and have not agreed on a consistent method for the agencies to implement CPFA. They signed a memorandum of understanding (MOU) in 1987 for administering CPFA that, among other things, did not provide an agreed-upon definition for geographic area. They signed another MOU in 2009 that relates to the interpretation of vessel service categories but still left the definition of geographic area unaddressed. Officials representing the agencies told us that they could not successfully agree on some cargo preference issues. The three agencies currently use separate agency guidance to interpret application of CPFA requirements, but do not follow a common set of updated interagency guidance. According to MARAD, USAID, and USDA officials, they know about the different processes each agency uses to implement CPFA, but these practices are not documented in any interagency guidance. MARAD officials said that they regularly monitor USAID’s and USDA’s compliance with CPFA, based on vessels’ voyage report data and the data that USAID and USDA report on their websites. 
However, MARAD officials also explained that it is difficult for them to enforce CPFA requirements, noting that they understand that USAID uses a different interpretation of geographic area for CPFA compliance of USAID’s packaged food aid. Although there is the potential for a $25,000 fine per day for each willful violation of the cargo preference requirement, MARAD officials said that even if they found instances of noncompliance, they cannot penalize USAID and USDA, because they are government agencies. Rather, they would have to penalize the implementing partner that ships the commodities for noncompliance, which they have chosen not to do. In past proposals to MARAD for cargo preference rulemaking, USDA noted that one of the topical areas of most concern to USDA for its food aid shipments is the definition of the term “geographic areas.” USDA had also noted that USDA and USAID needed clarity on the application of CPFA to allow for efficient and effective delivery of food aid. Our prior work found that agencies that articulate their agreements in formal documents can strengthen their commitments to working collaboratively. Specifically related to CPFA, GAO recommended in 2007 and again in 2009 that USAID and USDA work with DOT and relevant parties to expedite updating the MOU between U.S. food assistance agencies and DOT to minimize the cost impact of cargo preference regulations on food aid transportation expenditures and to resolve uncertainties associated with the application of CPFA requirements. The agencies did not fully implement our recommendation; their signed MOU in 2009 did not resolve some uncertainties among agencies, including the definition of geographic area. Under the terms of the court order requiring USDA to comply with CPFA on a country-by-country basis, an MOU embodying an agreement between USDA and MARAD on a consistent definition of “geographic area” would allow USDA to administer CPFA using a method other than country-by-country. 
Despite cargo preference, the numbers of vessels carrying food aid and of U.S. mariners required to crew them have declined. However, DOD has met all of its past sealift needs with the existing capacity and has never fully activated the reserve sealift fleet. The number of U.S. mariners qualified and available to serve DOD’s needs under a full and prolonged activation is uncertain. Furthermore, MARAD has not fully assessed the sufficiency of mariners available under a full and prolonged activation. Sealift capability provided by U.S.-flag vessels, including those carrying food aid, has declined. From 2005 to 2014, the number of vessels in the overall oceangoing U.S.-flag fleet declined about 23 percent, from 231 to 179 vessels. In April 2015, MARAD reported that the decrease in available government cargo was the most significant factor contributing to the loss of U.S.-flag vessels. The majority of the decline has been in DOD cargo, the largest source of preference cargo. DOD cargo accounted for approximately three-quarters of preference cargo in 2013. Food aid shipments have also declined. From 2005 to 2013, the amount of U.S. food aid commodities purchased and shipped from the United States by the U.S. government—and therefore subject to cargo preference—declined by 66 percent, and the number of U.S.-flag vessels carrying food aid declined by more than 40 percent, from 89 to 53. The number of vessels carrying food aid further declined to 38 in 2014 (see fig. 4). As the number of vessels, including those carrying food aid, has declined since 2005, so has the number of mariners crewing them. MARAD estimated that in fiscal year 2005 there were at least 1,329 positions aboard 66 of the 89 vessels carrying food aid. In fiscal year 2014, the estimate was approximately 612 positions aboard 33 of 38 vessels carrying food aid. Because crew members rotate over the course of a year, MARAD estimates that each position generates approximately two mariner jobs per year. 
Using MARAD’s estimating procedures, CPFA, therefore, could have supported 1,224 mariner jobs during fiscal year 2014. See figure 5 for sealift capabilities supported by CPFA. CPFA supports some sealift capability by ensuring that a portion of U.S.-flag vessels carry some food aid cargo. We found that without CPFA, most food aid cargo would not be transported on U.S.-flag vessels. USAID and USDA review and approve contracts to ocean carriers based on CPFA requirements and the lowest landed cost for the combination of necessary commodities and transportation costs. We found that if CPFA requirements had not been applied, 97 percent of food aid tonnage after the CPFA change would have been awarded to foreign-flag vessels. Even with CPFA, however, the number of U.S.-flag vessels and mariners supported by CPFA has decreased, and the overall contribution of CPFA to sealift is unclear. One intended objective of CPFA is to help ensure a merchant marine—both vessels and mariners—capable of providing sealift in times of war or national emergency. For its sealift capability needs, DOD relies on commercial vessels, including those carrying food aid, and the reserve sealift fleet, which, according to DOD, have been sufficient for its past needs. During times of war or national emergency, DOD can make the decision to use commercial or government-owned sealift capacity to meet ocean transportation requirements. According to DOD officials, DOD pays the shipping costs for its cargo to those commercial vessels that provide capacity. DOD officials also told us that a vessel does not need to be deemed militarily useful to provide sealift capability, and both U.S.- and foreign-flag vessels can be used to provide sealift. As of March 2015, there were 167 oceangoing U.S.-flag vessels that could provide sealift for DOD’s needs. 
However, vessels that participate in VISA would be afforded the first opportunity to provide sealift capabilities, as they have agreed to provide DOD with assured access to sealift capacity. In March 2015, 99 oceangoing U.S.-flag vessels were enrolled in VISA, 58 of which were MSP vessels that receive a $3.1 million annual payment to support DOD in addition to any shipping costs DOD pays for its cargo. In the event provided capacity does not meet DOD’s needs, DOD will activate VISA to require that additional vessel capacity be made available, as necessary. DOD can also activate the reserve sealift fleet when commercial vessels cannot satisfy military operational requirements, among other reasons. The reserve sealift fleet is composed of 61 vessels, 46 of which are MARAD-owned and form part of the Ready Reserve Force and 15 of which are DOD-owned vessels in the Military Sealift Command’s Surge Sealift program. See figure 6 for the composition of the oceangoing U.S.-flag fleet and the reserve sealift fleet. DOD has never activated VISA or the entire reserve sealift fleet to meet sealift capacity needs. DOD requires access to sufficient U.S.-flag capacity to meet the most serious scenario and, according to DOD officials, available vessel capacity—U.S.- and foreign-flag—has historically been sufficient to meet DOD’s needs. While VISA participants have provided sealift capacity for DOD since the program’s inception in 1997, the VISA program has never been activated. Partial activations of the reserve sealift fleet have been needed to support DOD. For example, according to the Military Sealift Command, in fiscal year 2014, 2 Ready Reserve Force vessels were activated, 1 to support the destruction of Syrian chemical weapon components in the Mediterranean and another to support a cargo mission for U.S. Central Command; furthermore, 4 of the Military Sealift Command’s Surge Sealift Program vessels were activated to support Navy exercises. 
According to DOD officials, only during a significant contingency would the entire reserve sealift fleet be activated, and only in the most serious scenario would the entire reserve sealift fleet be activated for a prolonged period of time, in addition to the use of commercial vessels. However, in the past 13 years, including during Operation Iraqi Freedom and Operation Enduring Freedom in Afghanistan, there has never been a time when the entire reserve sealift fleet has had to be activated. Figure 7 shows the number of Ready Reserve Force vessels, a subset of 46 vessels in the reserve sealift fleet, and mariners used for its activation since 2002. MARAD has estimated the number of mariners required to fully crew both the reserve sealift fleet and commercial operations for shorter- and longer-duration surge scenarios as required by DOD. DOD’s most serious scenario envisions a full activation of the entire reserve sealift fleet for an extended period of time. In addition, DOD would require the use of some commercial sealift for sustainment purposes. According to MARAD, an activation of the entire reserve sealift fleet would require a total crew of 1,943 mariners for a 6-month period. However, these vessels maintain a smaller crew at all times; therefore, full activation of the fleet would require finding mariners to complete the necessary crewing levels. For example, in June 2015, there were 645 mariners already serving aboard the fleet, and thus initially activating the full fleet would require an additional 1,298 mariners. If the full fleet is activated for a period longer than 6 months, then the crew would need to be rotated and 1,943 additional mariners would be needed to fill all the positions. Using MARAD’s estimates, the crew to sustain the reserve sealift fleet for 12 months would require 3,886 mariners. In addition, according to MARAD, U.S. 
mariners are also needed to crew commercial vessels during that same time period, including those providing sealift to DOD. MARAD officials expect all commercial vessels to continue operations during the same period during which the reserve surge fleet is activated and estimated that 9,148 mariners would be needed to crew such vessels. While MARAD estimates that 3,886 mariners are needed to sustain the reserve sealift fleet, it estimates that a total of 13,034 mariners is required to support both the prolonged operation of the entire reserve sealift fleet and the operation of commercial vessels during a scenario requiring the prolonged full activation of the reserve sealift fleet. While USCG maintains data on mariner qualifications, the number of mariners potentially available—both actively sailing and willing—to operate the reserve sealift fleet under a full and prolonged activation is uncertain. Number of mariners potentially qualified. USCG data show that the number of mariners potentially qualified to operate the reserve sealift fleet has increased since fiscal year 2008. According to MARAD, the ability to crew and operate the reserve sealift fleet to meet military sealift requirements depends on having sufficient qualified mariners available in time of national emergency. Mariners with such qualifications have taken specific DOD-approved training required to crew vessels in the reserve sealift fleet, according to DOD officials. The pool of mariners able to crew and operate the reserve sealift fleet includes those who have obtained their STCW and unlimited tonnage/horsepower endorsements, according to MARAD. USCG data show that from fiscal years 2008 to 2014, the number of mariners with STCW and unlimited tonnage and horsepower endorsements increased from 37,702 to 54,953 (see fig. 8). Actively sailing mariners. 
According to MARAD officials, not all mariners with an STCW endorsement use their credentials to sail internationally, in part because of a lack of jobs aboard oceangoing vessels. They told us that the pool of mariners available to operate these vessels is better represented by those who are actively sailing, because they are more likely to have up-to-date training and qualifications. Further, according to MARAD, when U.S. mariners are not actively sailing, they typically do not maintain their memberships in the U.S. maritime labor unions, which have collective bargaining agreements for crewing the reserve sealift fleet. However, complete, detailed data on actively sailing U.S. mariners are not available. USCG maintains a database on U.S. mariners and their credentials, but USCG officials noted that USCG's ability to identify actively sailing mariners is limited to the extent to which companies notify USCG of mariners sailing internationally through discharge certificate records. When vessels return from an international voyage, the vessel owner is required to provide discharge certificates for all the mariners aboard to USCG, upon request. As of June 2, 2015, USCG reported that it had received 16,637 certificates of discharge for mariners with STCW and unlimited tonnage/horsepower endorsements who had sailed in the previous 18 months. However, USCG officials estimate that they do not receive all certificates of discharge, so the number of actively sailing mariners may be higher. Mariners who are willing. MARAD officials noted that crewing of the reserve sealift vessels is done through a voluntary system, and the number of mariners who would actually crew these vessels is uncertain. MARAD officials explained that the majority of mariners hold permanent positions aboard a given commercial vessel, and volunteering for prolonged employment aboard the reserve sealift fleet would, in most cases, mean forfeiting those permanent positions.
The mariners would then have to compete for a new commercial position once they finished their time in the reserve sealift fleet, which would likely make them less inclined to volunteer. MARAD developed the Mariner Outreach System (MOS) to monitor, among other things, U.S. mariners' willingness to crew the reserve sealift fleet. U.S. mariners have the option to consent to be contacted in the event of a national emergency or sealift crisis, either through the USCG application for a merchant mariner credential or through the MOS. From 2008 to 2014, an average of about 9,000 mariners per year consented to be contacted through the USCG merchant mariner credential application process. For example, during fiscal year 2014, 9,682 U.S. mariners consented to allow MARAD to contact them. While these mariners consent to be contacted, they are not obligated to sail; at the same time, they have shown some willingness to crew the reserve sealift fleet. However, MARAD officials noted that not all of the mariners in MOS have the endorsements and training required to crew vessels in the reserve sealift fleet. Strategic Sealift Officers. In addition to the pool of mariners who could volunteer to support the reserve sealift fleet, Strategic Sealift Officers can be called to duty to fill officer positions aboard the reserve sealift fleet if a shortage of qualified civilian mariners exists. According to a DOD official, as of April 2015, the Strategic Sealift Officer program consisted of 1,973 officers, of whom 1,063 were not actively sailing. However, DOD officials told us that, to date, merchant mariners have been sufficient to support sealift capabilities and Strategic Sealift Officers have never been called into duty to crew the reserve sealift fleet.
While the USCG database shows over 16,000 potentially qualified and actively sailing mariners, MARAD stated that not all of these mariners would be readily available to crew the reserve sealift fleet and maintain ocean commercial operations. As of May 2015, MARAD estimated that 11,280 U.S. mariners were readily available, based on its assumptions and analysis of USCG data. According to MARAD officials, this number is sufficient to support the initial activation of the reserve sealift fleet for 6 months but insufficient to support the prolonged operation of all the vessels once the initial crew is rotated. MARAD concluded that at least 1,378 more mariners would need to be available to meet the needs of the prolonged operation of the entire reserve sealift fleet as well as the operation of commercial vessels. MARAD officials said they expect the shortage to occur in senior officer positions but not in lower officer positions, such as third mate or third assistant engineer, since the merchant marine academies graduate students with these ranks each year. MARAD officials acknowledged that more mariners are qualified to support the operation of the reserve sealift fleet but stated that they may not be available to do so, in part because of their current location or employment or a lack of appropriate experience for a particular officer position. However, MARAD did not reassess the sufficiency of the mariner pool using different assumptions to include a larger portion of those qualified, such as the more than 1,000 Strategic Sealift Officers who were not actively sailing as of April 2015, or consider mechanisms to reach the mariners it excluded from its analysis in the event of a full, prolonged activation of the reserve sealift fleet. Figure 9 summarizes the potential mariner supply for a prolonged surge, based on USCG data on potentially qualified, actively sailing mariners and MARAD's estimates of those readily available.
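The figures above reduce to simple arithmetic. The sketch below is illustrative only, using the estimates reported in this section together with the 12,658 total-requirement figure MARAD provided during the review (DOT later cited 13,034 in its written comments); the rotation assumption is as described in the text:

```python
# Sketch of MARAD's mariner arithmetic using the estimates cited in this
# section. Assumes one full replacement crew after 6 months, as described.

full_crew = 1943          # crew for the entire reserve sealift fleet, 6 months
already_aboard = 645      # mariners serving aboard the fleet, June 2015
commercial = 9148         # mariners needed to keep commercial vessels operating

additional_initial = full_crew - already_aboard   # mariners to find at activation
reserve_sustained = full_crew * 2                 # 12-month operation with rotation
total_required = reserve_sustained + commercial   # reserve fleet plus commercial

readily_available = 11280   # MARAD's May 2015 estimate of readily available mariners
# 12,658 is the total-requirement estimate MARAD provided during the review
# (DOT's written comments later cited 13,034 instead).
shortfall = 12658 - readily_available

print(additional_initial, reserve_sustained, total_required, shortfall)
```

Note that the 12-month reserve crew plus commercial crewing sums to 13,034, matching the total cited earlier in this section, while the 1,378 minimum shortfall is consistent with the 12,658 figure MARAD provided during the review.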
Given that the number of U.S.-flag vessels has declined since fiscal year 2005, we interviewed 29 stakeholders knowledgeable about CPFA to obtain their views on what policy options, if any, could improve the sustainability of the oceangoing U.S.-flag fleet, including those vessels carrying food aid. The 29 stakeholders we interviewed suggested 27 unique options; 18 of these stakeholders subsequently selected what they believed to be the top 3 options from among those suggested and provided comments on a variety of the options. The 27 options ranged from increasing the CPFA minimum requirement from 50 to 100 percent to eliminating CPFA altogether, with stakeholders' selections closely related to their relationship to the maritime industry. Results from stakeholders overall, maritime industry stakeholders, and other maritime stakeholders are presented in the figure below. (See app. IV for a complete list of options and counts, both overall and broken out by stakeholder category.) Figure 10 lists the options frequently selected by stakeholders overall, maritime industry stakeholders, and other maritime stakeholders. Stakeholders. The options that 4 of the 18 stakeholders selected as being among their top 3 were to increase cargo preference on all government cargo to 100 percent; increase funding for U.S. food aid programs; increase MARAD's monitoring and enforcement of its statutory authority; and eliminate the 3-year waiting period imposed on foreign vessels that acquire U.S.-flag registry before they are eligible to carry preference food aid cargo.
In addition, there were a number of options that 3 of the 18 stakeholders selected, including reforming tax law that may negatively affect the U.S.-flag fleet, improving the food aid procurement/supply chain process to include market efficiencies, eliminating CPFA and creating a DOD program to provide military sealift capability, increasing the CPFA minimum requirement from 50 to 75 percent, increasing the CPFA minimum requirement from 50 to 100 percent, reinstating OFD and TPEF reimbursements, subsidizing U.S.-flag fleet vessels, and reforming tort law that may negatively affect the U.S.-flag fleet. Maritime industry stakeholders. The options that 3 of the 7 maritime industry stakeholders selected as being among their top 3 were to increase all cargo preference to 100 percent, increase funding for U.S. food aid programs, and increase MARAD's monitoring and enforcement of its statutory authority. All 3 of these options were also among those the 18 stakeholders frequently selected, but all 3 differ from the options that the 11 other maritime stakeholders frequently selected. Furthermore, support for these 3 options varied within the category of maritime industry stakeholders. For example, while some maritime industry stakeholders selected a particular option, others commented on the possible negative effects of the same option. In addition, there were 2 options that 2 of the 7 maritime industry stakeholders selected: increasing the CPFA minimum requirement from 50 to 100 percent and enforcing cargo preference requirements for all U.S. government agencies. Other maritime stakeholders.
The options that 3 of the 11 other maritime stakeholders selected as being among their top 3 were to reform tax law that may negatively affect the U.S.-flag fleet, improve the food aid procurement/supply chain process to include market efficiencies, eliminate the 3-year waiting period imposed on foreign vessels that acquire U.S.-flag registry before they are eligible to carry preference food aid cargo, and eliminate CPFA and create a DOD program to provide military sealift capability. The option to eliminate the 3-year waiting period was also frequently selected by the 18 stakeholders, but all 4 options differ from the options frequently selected by the 7 maritime industry stakeholders. Furthermore, support for these 4 options varied within the category of other maritime stakeholders. In addition, there were a number of options that 2 of the 11 other maritime stakeholders selected, including increasing the CPFA minimum requirement from 50 to 75 percent, reinstating the OFD and TPEF reimbursements, determining the minimum number of vessels and mariners needed to sustain the U.S.-flag fleet, subsidizing U.S.-flag fleet vessels, reforming tort law that may negatively affect the U.S.-flag fleet, and not requiring CPFA compliance by country. When selecting their top 3 options, stakeholders also provided comments on any options, and agency officials later provided comments on the options frequently selected by stakeholders. These comments are summarized below for each of the frequently selected options. Stakeholders from both categories provided comments both supporting and opposing increasing all cargo preference to 100 percent. For example, 1 maritime industry stakeholder commented that an increase could help the U.S. government maintain a strong U.S.-flag fleet, while another commented that too much food aid funding would be lost to U.S.-flag rates, which are generally higher than foreign-flag rates.
Similarly, 1 other maritime stakeholder commented that an increase to 100 percent would be preferable, while another commented that the U.S. government should not give carriers with U.S.-flag vessels a monopoly and believed these carriers would keep raising their shipping rates. Comments from maritime industry stakeholders were mixed, with 3 supporting the option and 2 opposing it. On one hand, 1 maritime industry stakeholder expressed support for the increase as a means of redressing the decline in U.S. government cargo shipped. On the other hand, another maritime industry stakeholder commented that this option had only negative effects; for example, too much food aid funding would be lost to higher shipping rates. According to USAID and USDA officials, there are no potential benefits and only potential negative effects associated with increasing all cargo preference to 100 percent. USAID officials believe that this option would not be beneficial to U.S. food aid programs. According to USAID officials, the potential negative effects would be that, with increased cargo preference requirements, food aid programs would experience greater transportation costs and significantly less flexibility in determining the shipping process. USAID officials said that this option would negatively affect the food aid programs' size, and ultimately the programs would ship fewer commodities to fewer places and reach fewer beneficiaries. According to USAID officials, the U.S. government could implement the option of increasing all cargo preference to 100 percent only through a change in the cargo preference statute.
According to USDA officials, one of the potential negative effects of increasing all cargo preference to 100 percent is that, with USDA food assistance constrained by the amount of money budgeted for each program, the higher cost of shipping on U.S.-flag vessels would limit the number of shipments. In addition, a limited number of U.S.-flag vessels participate in food aid shipments, and a lack of available U.S.-flag vessels could cause delays in shipments, breaks in food aid pipelines, and disruption of programs on the ground. Stakeholders from both categories provided comments supporting increasing funding for U.S. food aid programs, and no stakeholders provided opposing comments. Furthermore, stakeholders commented that funding has decreased despite factors such as higher costs and greater food aid needs. For example, 1 maritime industry stakeholder commented that funding for U.S. international food aid programs is currently inadequate, as it has been cut significantly in recent years while certain costs have risen. Similarly, 1 other maritime stakeholder commented that funding has dropped sharply in inflation-adjusted terms over the past years even as the number of disaster-affected people worldwide has grown. Maritime industry stakeholders who commented on this option all supported it. One maritime industry stakeholder emphasized the multiple interests these programs support, including those of DOD and U.S. agricultural business. The stakeholder also commented that the programs demonstrate the country's commitment to helping those in need and supporting foreign policy goals. According to USAID and USDA officials, there are no potential negative effects and only potential benefits of increasing funding for U.S. food aid programs.
According to USAID officials, the potential benefit of increasing funding would be that, along with refinements made to cargo preference regulations, more recipients would be reached. However, USAID officials explained that it is difficult to quantify any effect on the U.S.-flag fleet. USAID officials stated they would continue to program food assistance resources in the most appropriate modality for the response and comply with all applicable legislative parameters. According to USAID officials, the U.S. government could implement the option of increasing funding for U.S. food aid programs through the congressional budget process. According to USDA officials, increasing funding could benefit the Food for Progress program, which is currently limited by a $40 million cap on transportation costs that makes it difficult to meet the program's legislated minimum tonnage requirement. The high shipping rates absorbed by the food assistance programs often result in fewer agreements and shipments in any given fiscal year. Any increase in funding for the program would have to take the form of an increase to the transportation cap to have an impact on the program. USDA officials added that increasing funding for U.S. food aid programs could help improve the sustainability of the oceangoing U.S.-flag fleet, because increased funding would result in increased shipments of food aid, and increased shipments would benefit the U.S.-flag fleet. Stakeholders from both categories provided comments both supporting and opposing increasing MARAD's monitoring and enforcement of its statutory authority. Maritime industry stakeholders who commented on this option all supported it. Two maritime industry stakeholders commented that MARAD has not completed the rulemaking process, with one questioning why it has not been done and the other citing the necessity of doing so.
According to USAID and USDA officials, there are no potential benefits and only potential negative effects associated with increasing MARAD's monitoring and enforcement of its statutory authority. According to USAID officials, the potential negative effects would be a loss of time, control, and flexibility in implementing food aid programs, ultimately resulting in less efficient, less timely programming and distribution and added costs to programs. According to USAID officials, the U.S. government could implement the option of increasing MARAD's monitoring and enforcement through the interagency rulemaking process. According to USDA officials, a potential negative effect is that it could reduce the role of the programming agencies with regard to cargo preference. They said that MARAD likely does not have a complete and thorough understanding of food aid regulations and policies. USDA has consistently maintained compliance with cargo preference and sees no benefit to MARAD's increased involvement in monitoring and enforcement. Only 1 other maritime stakeholder provided a comment either supporting or opposing reforming tax law. The single comment supported the option, stating that it would be particularly helpful to the international competitiveness of the U.S.-flag fleet. However, 1 maritime industry stakeholder commented that the tonnage tax regime works very well today as long as it is interpreted correctly. Another maritime industry stakeholder stated that U.S. tax law currently alleviates some of the burden of federal income taxes on U.S. vessel owners (although the tonnage tax could be improved, and U.S. vessel owners remain subject to state, local, and other taxes) but provides no tax relief for U.S. merchant mariners serving on vessels internationally, in contrast to merchant mariners serving on foreign vessels, who generally pay no taxes to any jurisdiction. No agency officials commented on the option of reforming tax law.
Stakeholders from both categories provided comments generally supporting improving the food aid procurement/supply chain process. For example, other maritime stakeholders who commented on this option were generally in support of it. One stakeholder commented that there are a variety of ways to improve the process and the options need to be discussed with all parties—implementing agencies, commodity and food product vendors, freight forwarders, shipping companies, labor, and U.S. government agencies—to find the best solutions. According to USAID officials, the potential benefits of improving the food aid procurement/supply chain process to include market efficiencies would be that some cost savings would result from more effective implementation of commercial terms and practices. However, USAID officials anticipate that improving the food aid procurement/supply chain process would have an insignificant effect on improving the sustainability of the oceangoing U.S.-flag fleet unless the barriers to entry were lowered. According to USAID officials, the U.S. government could implement this option at the USAID program and contract levels. USDA officials stated that they welcome improvements to the food aid procurement/supply chain process. However, USDA officials noted that USDA is required to follow the Federal Acquisition Regulation for procurement. Other maritime stakeholders who provided comments on the option of eliminating the 3-year waiting period on foreign vessels that acquire U.S.-flag registry before they are eligible to carry preference food aid cargo generally believed that it would remove a bureaucratic obstacle and could lead to increased competition. However, several maritime industry stakeholders commented that the elimination could lead to vessels flagging in and out of the U.S.-flag fleet whenever convenient. While stakeholders who commented on this option generally supported it, 1 questioned why the period could not be shortened to 1 year. 
Both USAID and USDA officials generally provided comments supporting eliminating the 3-year waiting period, and both commented that a change to the current cargo preference statute would be required to do so. According to USAID officials, eliminating the 3-year waiting period could lower freight rates. It also could increase competition and eliminate any existing monopolies. There is a very small pool of U.S.-flag vessel owners who are eligible to participate in the carriage of agricultural food aid commodities. This limits agencies' selection and flexibility and leads to inefficient choices of trade that do not conform to commercial practices, such as combination voyages to ports in Southwest Africa, East Africa, and Southeast Asia. According to USAID officials, eliminating the 3-year waiting period could help improve the sustainability of the oceangoing U.S.-flag fleet by lowering foreign-flag vessels' entry barriers and growing the U.S.-flag fleet. According to USDA officials, more vessels participating as U.S.-flag vessels would increase competition and potentially reduce shipping rates. According to USDA officials, to avoid the potential negative effects of eliminating the waiting period, there would need to be some sort of vetting period to ensure that foreign vessels that have acquired U.S.-flag registry are indeed equipped to move food aid cargo. According to USDA officials, eliminating the waiting period could also help improve the sustainability of the oceangoing U.S.-flag fleet, because vessels that have transitioned from foreign to U.S.-flag registry may be younger and in better condition than some U.S.-flag vessels currently participating in food aid shipments. MARAD officials explained that there is a cost to flag in and out of the U.S.-flag fleet. Stakeholders from both categories provided comments both supporting and opposing eliminating CPFA and creating a DOD program to provide military sealift capability.
Comments from other maritime stakeholders on this option were evenly split. Specifically, 1 stakeholder commented that this was the best idea of all the options, separating food aid from military readiness, while another stakeholder commented that this would be more costly. USAID officials cited potential benefits of eliminating CPFA, and USDA officials cited both potential benefits and negative effects of this option. According to USAID officials, the potential benefits would be increased flexibility and significant cost savings to the food aid programs. According to USDA officials, the potential benefits would be that competition would increase, U.S.-flag rates would theoretically be lower, and a large number of foreign-flag vessels could participate. The expected reduction in shipping rates would result in an increase in the number of food assistance agreements under the Food for Progress program. Eliminating CPFA would greatly benefit the food assistance programs. According to USDA officials, the potential negative effects of eliminating CPFA and creating a DOD program would be that those U.S.-flag steamship companies that rely on revenue from food aid shipments may suffer financially. In addition, foreign-flag vessels' unfamiliarity with food aid shipments could be problematic. U.S. food aid programs play an important role in improving food security and alleviating hunger for millions of people around the world. How well USAID and USDA can achieve food aid programs' goals depends on the effective and efficient use of food aid resources. The elimination of reimbursements to USAID and USDA, which the agencies used to fund food aid programs, further accentuates the importance of effectively using their limited food aid resources for the programs' goals. Under U.S. law, a minimum proportion of U.S. food aid must be shipped on U.S.-flag vessels to promote both national security and commercial interests.
However, because using U.S.-flag vessels is often more expensive than using foreign-flag vessels, a larger portion of the food aid budget must go to shipping costs than if there were no such requirement. Changes in the law in 2012 reduced the U.S.-flag minimum requirement for food assistance from 75 to 50 percent, decreasing the overall shipping cost of food aid, especially for programs administered by USAID. However, USDA has experienced limited savings because the agency is subject to a court order requiring it to administer cargo preference on a country-by-country basis; USDA's utilization of foreign-flag vessels was far below the 50 percent allowed by the 2012 law. GAO has twice recommended that key agencies administering CPFA agree on a consistent interpretation of CPFA requirements through an MOU, but the agencies have addressed only the definition of vessel categories, not the definition of "geographic area." As long as USDA continues to use a more stringent definition of geographic area when implementing CPFA, it cannot take advantage of the shipping price decreases that USAID has realized. Pursuant to the terms of the court order requiring USDA to comply with CPFA on a country-by-country basis, an MOU embodying an agreement between USDA and MARAD on a consistent definition of "geographic area" would allow USDA to administer CPFA using a method other than country-by-country. CPFA ensures that U.S.-flag vessels carry a portion of food aid, but the extent to which CPFA contributes to sufficient sealift capabilities for DOD is unclear. While DOD officials noted that the number of U.S.-flag vessels is sufficient for contingency needs, including a full and prolonged activation of the reserve fleet, it is unclear whether enough mariners are available to fulfill this scenario, particularly in senior officer positions. MARAD has estimated that there is a shortage of qualified mariners available to address a full and prolonged activation of the reserve fleet.
However, its estimate does not fully account for all of the potential sources of supply, including reserve naval officers. Without a full understanding of both the need for and the potential supply of mariners under DOD's most serious scenario, the U.S. government is limited in its capacity to address any potential imbalance. Furthermore, the U.S. government cannot be assured that the use of food aid programming funds to pay higher U.S.-flag shipping prices under CPFA is achieving the intended goal of maintaining a merchant marine capable of providing sealift capability in time of war or national emergency. While recognizing that cargo preference serves policy goals established by Congress with respect to the U.S. merchant marine, including maintenance of a fleet capable of serving as a naval and military auxiliary in time of war or national emergency, Congress should consider clarifying cargo preference legislation regarding the definition of "geographic area" to ensure that agencies can fully utilize the flexibility Congress granted them when it lowered the CPFA requirement. GAO recommends that the Secretary of Transportation direct the Administrator of MARAD to study the potential availability of all qualified mariners needed to meet a full and prolonged activation of the reserve sealift fleet. In the study, MARAD should identify potential solutions to address the mariner shortfall if one is identified. We provided a draft copy of this report to DOT, USDA, USAID, DOD, and USCG for their review and comments. In its written comments, reproduced in appendix V, DOT concurred with our recommendation to study the potential availability of all qualified mariners needed to meet a full and prolonged activation of the reserve sealift fleet. DOT stated that MARAD has been reviewing the adequacy of existing plans to recruit mariner volunteers to crew the full reserve fleet.
Furthermore, DOT noted that 13,000 mariners are required to crew all the vessels in the fleet for sustained operations. During the course of our review, MARAD provided the estimated number of mariners required for prolonged full activation as 12,658. When we followed up on the number given in DOT's written comments, however, DOT stated that the estimate was instead 13,034. While MARAD officials outlined some of the factors and high-level calculations they use in computing such an estimate, we could not assess the reliability or accuracy of either estimate because MARAD did not have a final report documenting the precise calculations and methods used; we therefore were unable to verify the details of the estimates. We noted MARAD's differing estimates of the number of mariners in our report. We also received technical comments from the agencies, which we incorporated where appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees; the Secretaries of Transportation, Agriculture, and Defense; the Administrators of the Maritime Administration and USAID; and the Commandant of the U.S. Coast Guard. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-9601 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. This report examines (1) cargo preference for food aid's (CPFA) impact on food aid shipping cost and U.S.
agencies' implementation of CPFA requirements, (2) the extent to which the implementation of CPFA requirements contributes to sufficient sealift capacity, and (3) stakeholder views on options to improve the sustainability of the oceangoing U.S.-flag fleet. To address our objectives, we analyzed cargo preference legislation as well as documents, guidance, and data on CPFA provided by the U.S. Agency for International Development (USAID), the U.S. Department of Agriculture (USDA), and the Department of Transportation's (DOT) Maritime Administration (MARAD). We also interviewed USAID, USDA, and MARAD officials about each agency's roles and responsibilities regarding CPFA, the processes each agency uses to implement CPFA, the cost implications of such requirements for food aid programs, and the impact of such requirements on cargo preference objectives. To determine the CPFA requirements' impact on food aid shipping cost, we analyzed food aid procurement data for both USAID and USDA from April 2011 through fiscal year 2014, including some bulk and all packaged food commodities, as well as shipment data for the same period. With the exception of the detailed discussion in appendix II, we use the term "solicitation" to mean solicitation line. Each agency announces solicitations for bids to ship food aid. Each solicitation includes a line for a specific amount of a specific commodity to be procured for a specific food aid program. For example, a recent solicitation for USAID freight included one line for 51,710 metric tons of sorghum for the World Food Program's food aid program in Sudan. During this time period, CPFA requirement levels changed from 75 to 50 percent. We examined the total number of U.S.-flag and foreign-flag bids per solicitation. We analyzed USDA and USAID bid data to estimate the cost difference between U.S.- and foreign-flag vessels and the CPFA requirements' effect on shipping awards.
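As an illustration only, the kind of flag-premium comparison described above can be sketched as a simple regression on synthetic bid data. The data, rates, and variable names below are hypothetical and do not reflect GAO's actual model or data, which appendix II details:

```python
import numpy as np

# Illustrative only: synthetic bid data with a hypothetical $30/metric-ton
# U.S.-flag premium built in, recovered via ordinary least squares.
rng = np.random.default_rng(0)
n = 500

us_flag = rng.integers(0, 2, size=n)            # 1 = U.S.-flag bid, 0 = foreign-flag bid
rate = 60 + 30 * us_flag + rng.normal(0, 5, n)  # offered rate, $/metric ton (hypothetical)

# OLS: rate ~ intercept + us_flag
X = np.column_stack([np.ones(n), us_flag])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)

premium = beta[1]  # estimated U.S.-flag premium; should be close to 30
```

In practice, such a specification would also control for commodity, route, tonnage, and time period; the dummy-variable coefficient is simply the estimated average rate difference between U.S.-flag and foreign-flag bids.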
We used regression analysis to identify the impact of the changes in the CPFA requirements on the cost of shipping U.S. food aid. For detailed discussion of our methodology and results, see Appendix II. To examine the extent to which the implementation of CPFA requirements contributes to sufficient sealift capacity, we reviewed the Department of Defense’s (DOD) documentation related to the Voluntary Intermodal Sealift Agreement (VISA) and the Maritime Security Program (MSP) and interviewed DOD officials about the sealift capability and military usefulness of the vessels in the U.S.-flag fleet. We also analyzed MARAD’s data on U.S.-flag vessels, including those carrying food aid cargo and the number of mariner positions aboard such vessels. We interviewed MARAD officials to understand the reliability of the data provided. According to MARAD officials, MARAD receives information directly from operating companies related to vessel specifications, as well as crewing requirements. Such information is stored in the Mariner Outreach System (MOS). Information stored in MOS was used to provide vessel crewing information for commercial and reserve sealift vessels. Further, MARAD officials used the Cargo Preferences Overview System, which records bills of lading, to identify those vessels carrying food aid cargo. We found the U.S.-flag vessel and mariner positions data sufficiently reliable for the purposes of this report. Furthermore, we obtained U.S. merchant mariner credential data available through the U.S. Coast Guard’s (USCG) Merchant Mariner Licensing and Documentation System and interviewed USCG officials to understand the number of U.S. mariners with Standards of Training, Certification and Watchkeeping (STCW) and unlimited tonnage/horsepower endorsements. The Merchant Mariner Licensing and Documentation System stores information on merchant mariners’ credentials and endorsements. 
We found overall credential and endorsement information to be sufficiently reliable for the purposes of this report. We also obtained data on the number of actively sailing mariners from USCG. The data received represent mariners with STCW and unlimited tonnage/horsepower endorsements for whom USCG obtained discharge certificates during the last 18 months. However, according to officials we interviewed, the data likely underrepresent the number of mariners who sailed during this period because discharge certificates may not have been received for all mariners who sailed. Further, USCG indicated that while it had checks in place to avoid double counting of mariners who filed multiple discharge certificates during this period, there was also a possibility of some double counting of mariners who sailed more than once during this time period. We are presenting this number, with its limitations, to help place MARAD’s estimate of the number of available mariners into some context. We also interviewed MARAD officials to understand the crewing process for the Reserve Sealift Fleet and the number of U.S. mariners available to support DOD needs in time of national emergency. MARAD officials provided an estimate of the number of available mariners. MARAD officials told us that they estimated the number of mariners needed to ensure that the entire Reserve Sealift Fleet was able to conduct prolonged operations while full commercial operations continued by calculating the number of positions on each vessel, and comparing this sum with the estimate of available mariners. During the course of our review, MARAD provided the estimated number of mariners needed as 12,658. In commenting on our draft report, however, DOT noted that the estimated number was instead 13,034. However, these estimates, as well as the estimates of the number of available mariners, are of undetermined reliability because we were only partially able to assess them.
While MARAD officials outlined the data sources and key factors they considered when making these estimates, they reported that they did not have a final report that documented and presented precise calculations and methods that they used, and we were therefore unable to examine the details of their estimates. We determined that while we could comment on MARAD’s stated rationale and basic approach to estimating the sufficiency of the number of mariners, we could not assess the accuracy of MARAD’s estimates on the number of mariners available or the number of mariners required. However, we are reporting this number to provide context for our findings on the data sources that MARAD used and the key factors that they considered. To obtain stakeholder views on options to improve the sustainability of the U.S.-flag fleet, we conducted semistructured interviews and requested follow-up documentation from a nongeneralizable sample of 29 stakeholders knowledgeable about CPFA issues. We created an initial list of stakeholders using internal knowledge of CPFA. We then added more stakeholders based on interviewee responses to our question on whom else they thought we should speak with, considering, among other factors, how often others suggested we meet with them, the representation of their subcategory, and their location. For example, we conducted site visits at ports that we selected based on their ranking compared with that of other ports in terms of net commodity weight (metric tons) of food aid as well as the presence of stakeholders in the area. We categorized these stakeholders into maritime industry stakeholders—those stakeholders that self-identified as brokers, carriers, freight forwarders, and mariners—and other maritime stakeholders— those stakeholders that self-identified as academia, commodities, freight forwarders, implementing partners or nongovernmental organizations (NGO), ports, and trade associations. 
The intent of our semistructured interviews was to have stakeholders identify any options to improve the sustainability of the oceangoing U.S.-flag fleet, including food aid–carrying U.S.-flag vessels. During these interviews, we orally explained to stakeholders that our review was focused only on the oceangoing vessels of the U.S.-flag fleet and was on the topic of cargo preference for food aid. Cargo preference for food aid (CPFA) is sometimes referred to as agricultural cargo preference (ACP), but for the purposes of this report, we refer to it as CPFA. We did not specifically ask stakeholders to consider the effect that options may have on food aid. We consolidated interview responses to create a comprehensive list of 27 policy options for stakeholder comment. To ensure the accuracy of our consolidation effort, we internally reviewed our coding and reconciled any discrepancies. As a follow-up to our semistructured interviews, we sent the list of 27 options to all 29 stakeholders, requesting that they select what they believed to be the top 3 options from among those suggested and provide any comments. Twenty of the 29 stakeholders we interviewed responded to this follow-up request. Eighteen of these stakeholders (7 maritime industry stakeholders and 11 other maritime stakeholders) selected what they believed to be the top 3 options from among those suggested and provided comments on a variety of the options, while 2 of these stakeholders (both other maritime stakeholders) responded that they would not provide their views because they did not favor any of the options. We tallied the stakeholder follow-up responses to determine the options frequently selected by all stakeholders and by maritime industry stakeholders compared with other maritime stakeholders. A complete list of all options suggested by stakeholders and how often stakeholders from the two categories selected each option can be found in appendix IV.
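The tallying of top-3 selections can be sketched as follows; the stakeholder picks and option names here are illustrative, not the actual interview responses:

```python
from collections import Counter

# Illustrative top-3 selections (option names are made up for this sketch).
industry_picks = [
    ["raise minimum to 75 percent", "subsidize vessels", "increase enforcement"],
    ["raise minimum to 100 percent", "subsidize vessels", "multiyear funding"],
]
other_picks = [
    ["increase food aid funding", "subsidize vessels", "tort reform"],
]

def tally(selections):
    """Count how often each option appears among stakeholders' top-3 picks."""
    return Counter(option for picks in selections for option in picks)

industry_tally = tally(industry_picks)
other_tally = tally(other_picks)

# Options selected by at least one stakeholder from each category.
overlap = set(industry_tally) & set(other_tally)
```

The same tallies support both the frequency comparison across categories and the overlap check.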
In addition, we tallied the stakeholder follow-up responses to identify any overlap among selections made by the two categories of stakeholders. Our discussion of stakeholders’ selected options is based on what they selected as their top 3 options. Because of this, when we report that a certain number of stakeholders selected an option, it does not necessarily mean that the remaining stakeholders did not support the given option. Rather, it means that those stakeholders did not select it as one of their top 3 options. While we received 20 of 29 responses to our second phase of data collection on options and this set of responses is neither a complete universe of the selected stakeholders nor a sample generalizable to the full population of stakeholders, we present the results of our analysis to identify general tendencies in the policy options preferred by a set of important stakeholders knowledgeable about CPFA issues. To obtain agency officials’ views on stakeholders’ suggested options, we requested officials from the Department of Defense (DOD), DOT, USDA, USAID, and USCG submit any views in writing on the options frequently selected by stakeholders. We conducted this performance audit from October 2014 to August 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We analyzed the effect of the changes in cargo preference for food aid (CPFA) requirements in 2012 on the number of bids from ocean freight carriers and on food aid shipping rates. 
In July 2012, the Moving Ahead for Progress in the 21st Century Act of 2012 reduced the minimum required level of food aid to be shipped on U.S.-flag vessels from 75 to 50 percent and eliminated the Great Lakes Set-Aside, which required that at least 25 percent of Title II packaged food aid tonnage be shipped out from Great Lakes ports each month. The act also eliminated the Maritime Administration’s reimbursement to the U.S. Department of Agriculture (USDA) and the U.S. Agency for International Development (USAID) for the ocean freight differential resulting from the CPFA requirements. Because the CPFA requirements require that a certain percentage of U.S. food aid be shipped on U.S.-flag vessels, some foreign-flag carriers may have been deterred from bidding on some solicitations knowing that they would be unlikely to win the shipping award. This would be the case, especially when 25 percent of Title II packaged food aid tonnage was allocated to Great Lakes ports. According to a 2010 study, food aid shipped through Great Lakes ports was mostly shipped on lower-priced foreign-flag vessels because the portion allocated to the Great Lakes ports was made without consideration of the vessel’s flag as mandated by Section 17 of the Maritime Security Act of 1995. Most of the remaining 75 percent of Title II packaged food aid would then be required to be shipped on U.S.-flag vessels. The lower percentage required to be shipped on U.S.-flag vessels may encourage foreign-flag carriers to participate in the bidding process, especially for certain routes for which they did not previously participate. In addition, more competition and higher participation from foreign-flag carriers after the CPFA change may lead to lower overall shipping rates because foreign-flag vessels on average charge lower shipping rates. 
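The mechanics of the 2012 change can be illustrated with simple arithmetic; the 100-ton shipment size below is purely illustrative:

```python
# Simplified illustration: share of food aid tonnage open to foreign-flag
# competition before and after the July 2012 change in the U.S.-flag minimum.
TOTAL_TONS = 100.0  # illustrative shipment size

# Before: a 75 percent U.S.-flag minimum left at most 25 percent to foreign flags.
foreign_eligible_before = TOTAL_TONS * (1 - 0.75)

# After: a 50 percent minimum leaves up to 50 percent open to foreign flags.
foreign_eligible_after = TOTAL_TONS * (1 - 0.50)
```

Doubling the tonnage open to foreign-flag bids is the channel through which we would expect more foreign-flag participation and lower overall rates.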
Our hypothesis is that after the relaxation of the CPFA requirements in July 2012, the number of bids from foreign-flag vessels would increase and overall food aid shipping rates would decrease. We developed statistical models to assess the effect of the July 2012 changes in the CPFA requirements, after controlling for a variety of factors that may affect the number of bids received for each solicitation and food aid shipping rates. However, there are several limitations to this methodology. We collected data on all of USDA’s food aid shipments and USAID’s packaged food aid shipments. However, similar data on USAID’s bulk food aid shipments were not available and our results cannot be generalized to those shipments. In order to address this limitation, we analyzed the award data, when feasible and appropriate, which include all shipments of food aid. We used a particular set of changes in the CPFA requirements to provide some insight into the effect of the CPFA requirements. However, these results cannot be generalized to other potential changes in the CPFA requirements. For example, our results cannot be generalized to show the effects of eliminating all CPFA requirements or of requiring all U.S. food aid to be shipped on U.S.-flag vessels. We controlled for a variety of factors that can affect the number of bids and the food aid shipping rates; however, it is still possible that some other factors we cannot control for drive the effect we observe. We conducted some sensitivity analysis to help address this limitation. We collected data on shipping awards and bids from USDA’s Kansas City Commodity Office’s (KCCO) Web Based Supply Chain Management System (WBSCM). The data covered USAID’s packaged food aid and all of USDA’s food aid shipments from April 2011 through fiscal year 2014. KCCO implemented WBSCM in April 2011. USAID does not use WBSCM to procure shipping for its bulk food aid. 
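The before-and-after comparison described above amounts to a dummy-variable OLS regression. The sketch below uses synthetic data with a built-in jump of 3 bids and omits the control variables used in our actual models:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic solicitation lines: a 0/1 post-change dummy and a bid count
# that jumps by 3 after a hypothetical rule change (controls omitted).
n = 500
post_change = (rng.random(n) < 0.5).astype(float)
num_bids = 7 + 3 * post_change + rng.normal(0, 0.5, n)

# OLS of the bid count on a constant and the post-change dummy.
X = np.column_stack([np.ones(n), post_change])
beta, *_ = np.linalg.lstsq(X, num_bids, rcond=None)

effect = beta[1]  # estimated change in the number of bids after the change
```

With no controls, the dummy coefficient is simply the difference in mean bid counts before and after the change; adding controls adjusts that difference for shifts in solicitation characteristics.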
The data on bids include information on all bids submitted by carriers to ship USAID’s packaged food aid and all of USDA’s food aid from April 2011 through fiscal year 2014. For each bid, the data include information on the carrier that submitted bids, the date of submission, the name of the vessel, the vessel type (bulker, liner, or tanker), the commodity, the implementing partner, the recipient country, the destination and discharge port, and the quantity of the order. We conducted our analysis at the level of each solicitation line and constructed the data on bids to count the number of bids submitted for each solicitation line. Each agency announces solicitations for bids to ship food aid. Each solicitation includes a line for a specific amount of a specific commodity to be procured for a specific food aid program. For example, a recent solicitation for USAID included one line for 51,710 metric tons of sorghum for the World Food Program’s food aid program in Sudan. We also included the information on the solicitation line such as the commodity, the implementing partner, the recipient country, the destination, and the quantity of the line. For the date of the solicitation, we used the date the first bid was submitted. The data on shipping awards includes two sets of data. One set is the bids that were actually awarded the shipping contract when KCCO applied the CPFA requirements. The second set is the bids that would have been awarded the shipping contract had the CPFA requirements not been applied. Both sets of data include information on the quantity of the solicitation line, the type of commodity, the agency (USAID or USDA), the implementing partner, the recipient country, the destination, the quantity allocated to each awarded bid, the type of vessel, and the total shipping cost. Each solicitation line may be split into more than one shipment. 
For example, half of a solicitation line may be shipped on U.S.-flag vessels and the other half on foreign-flag vessels. Again, we conducted our analysis at the level of each solicitation line and constructed both sets of data on shipping awards so that for each solicitation line, we calculated information such as the quantity allocated to U.S.- and foreign-flag vessels, the awarded cost of shipping on U.S.- and foreign-flag vessels, and the awarded shipping rate on U.S.- and foreign-flag vessels. We merged the data on bids and the two sets of data on shipping awards for each solicitation line. When merging the two sets of data on shipping awards, only 75 (4 percent) of 1,712 solicitation lines did not match. When merging the two sets of data on shipping awards with the data on bids, 676 (28 percent) of the 2,388 solicitation lines from the data on bids did not match the data on shipping awards. According to KCCO officials, the 676 solicitation lines that did not have a match with the data on shipping awards could be partly due to solicitation lines that were not awarded. All solicitation lines from the data on shipping awards had a match in the data on bids. For each set of data, we created a dummy variable equal to 0 for any solicitation line before the CPFA requirements change and to 1 for any solicitation line after. Data source. For the analysis of the number of ocean freight bids carriers submitted, we used the data on bids, which included 2,388 solicitation lines. Even though not all of these solicitation lines would eventually be awarded in the data on shipping awards, we focused on this larger data set because prior to the award process, the carriers cannot predict which solicitation lines would be awarded. Therefore, the number of bids is better captured by the full set of data on bids. Summary statistics. Comparing the average number of bids before and after the changes, we found that the overall number of bids increased from around 7 to around 10 bids (see table 4).
The average number of bids from U.S.-flag vessels remained roughly unchanged, and the average number of bids from foreign-flag vessels increased from around 2 to around 5. We also compared the characteristics of the solicitation lines before and after the changes in the CPFA requirements to describe how USDA and USAID food aid programs may have changed. These included the composition of food aid by commodity type, the month of the solicitation line, the implementing partner, and the destination. We found few differences in the implementing partner and the type of commodity before and after the changes. However, we did find differences in the destination and the month of the solicitation line. For example, the percentage of solicitation lines destined for Port-Au-Prince, Haiti, decreased from 8 to 3 percent of solicitation lines after the changes in the CPFA requirements. The differences in these characteristics before and after the change emphasized the importance of controlling for these characteristics when comparing the number of bids before and after the changes. In our example of Port-Au-Prince, a solicitation line with food aid destined there was correlated with a higher number of bids. And since the percentage of solicitation lines destined there decreased after the change, the number of bids may have decreased for this reason alone. Had we observed such a decrease in bids without controlling for the destination, we might have erroneously attributed the decrease to the changes in the CPFA requirements. Regression model and results. To compare the number of bids for each solicitation line before and after the changes in the CPFA requirements, we estimated ordinary least squares (OLS) regressions using the following equation.
y_i = β0 + β1Change_i + γX_i + ε_i. We estimated this equation for solicitation line i where Change is a dummy for before and after the CPFA changes, X is a set of solicitation line characteristics including the tonnage, implementing partner, destination of the food aid, and the month the first bid was submitted for the solicitation line, and ε_i is the error term. The dependent variable, y, is the number of bids for each solicitation line i. Using this model specification, we found that the total number of bids increased by 3.43 and the number of bids from foreign-flag vessels increased by 3.09 (see table 5). The increase in the number of bids from U.S.-flag vessels was not statistically significant. We also found that the number of foreign-flag vessels and foreign-flag carriers bidding for each solicitation line increased by 1.58 vessels and 1.22 carriers. We found that the number of U.S.-flag vessels and carriers bidding decreased by 0.26 vessels and 0.60 carriers. This suggests that some of the increase in the number of foreign-flag bids may be from more carriers and vessels participating. The results from the OLS regressions are robust to the inclusion of different sets of control variables. We do not control for a dummy for whether the solicitation line was for bulk or packaged food aid and for the type of commodity because controlling for them did not significantly improve the explanatory power of the model. Potential limitations and sensitivity analysis. The main limitation with our methodology is that we cannot control for all factors that may affect the number of bids from U.S.-flag and foreign-flag vessels. For example, the number of U.S.-flag vessels available to ship food aid declined during our study period from April 2011 through fiscal year 2014. However, we did not include data on the trends in the vessels in our analysis.
So even though the changes in the CPFA requirements may have increased the number of bids from U.S.-flag vessels, the decline in the number of U.S.-flag vessels may have decreased the number of bids, thereby nullifying any increases from the changes in the CPFA requirements. Since the number of U.S.-flag vessels available for food aid was not included in our analysis, we may have erroneously found that the changes did not change the number of bids from U.S.-flag vessels. Figure 11 adds validity to our methodology in identifying the effect of the changes in the CPFA requirements on the number of bids from foreign-flag vessels. We estimated 39 OLS regressions, 1 for each month between May 2011 and July 2014, using that month as the dummy variable so we could test for a break point. For each month, we created a dummy variable set to 0 for any solicitation line before that month and to 1 for any solicitation line after that month. We estimated 39 separate OLS regressions with each regression controlling for 1 month dummy variable in addition to the tonnage, implementing partner, and destination of the food aid, and the month the first bid was submitted for the solicitation line. Figure 11 shows the coefficient and the 99 percent confidence interval around the coefficient on the month dummy variable from each of the 39 OLS regressions. We found that the largest coefficient is for July 2012, the month of the changes in the CPFA requirements. These results show that the largest increase in the number of bids from foreign-flag vessels occurred in July 2012, the month of the CPFA changes. This suggests that the changes in the CPFA requirements had a larger effect on the number of bids from foreign-flag vessels than any other changes that may have happened from April 2011 through fiscal year 2014.
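The break-point exercise can be sketched with synthetic data containing a known jump; in our actual analysis, each regression also included the full set of controls:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 39 months of observations with a known jump at month 18.
months = np.repeat(np.arange(39), 20)  # 20 observations per candidate month
y = rng.normal(7.0, 0.1, months.size) + 3.0 * (months >= 18)

# For each candidate break month, regress y on a constant and a dummy that
# equals 1 from that month onward; record the dummy's coefficient.
coefs = []
for m in range(1, 39):
    dummy = (months >= m).astype(float)
    X = np.column_stack([np.ones_like(dummy), dummy])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    coefs.append(beta[1])

# The candidate month with the largest coefficient is the estimated break point.
break_month = 1 + int(np.argmax(coefs))
```

Because the dummy aligned with the true change date absorbs the full jump while misaligned dummies dilute it, the coefficient peaks at the actual break month.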
We estimated the OLS regression models using data from 6 months before and after the changes; we found even larger and statistically significant increases in the number of bids, especially from foreign-flag vessels. Finding the increase in the number of bids from foreign-flag vessels using data for this shorter period builds more confidence in the robustness of our results. Data source. To estimate the shipping rates of the winning bids before and after the changes in the CPFA requirements, we analyzed data on shipping awards, which included 1,712 solicitation lines. Summary statistics. We found that the average overall shipping rates declined slightly from $258 per metric ton to $252 per metric ton after the changes (see table 6). For U.S.-flag vessels, the average shipping rates increased from $290 to $309 per metric ton after the changes. The average shipping rates remained unchanged after the changes for foreign-flag vessels. We also compared the characteristics of the solicitation lines before and after the changes in the CPFA requirements. These characteristics included the composition of the food aid by the type of commodities, agency, the month of the solicitation line, the implementing partner, and the destination. We found few differences in the implementing partner and the type of commodity before and after the changes. However, we did find differences in the destination and the month of the solicitation line. These results are consistent with those from the data on bids since the data on awards is a subset of the data on bids and represents 72 percent of the solicitation lines in the data on bids. Regression model and results. To compare the shipping rates for each solicitation line before and after the changes in the CPFA requirements, we estimated OLS regressions using the following equation.
y_i = β0 + β1Change_i + γX_i + ε_i. We estimated this equation for solicitation line i where Change is a dummy for before and after the CPFA changes; X is a set of solicitation line characteristics including the tonnage, implementing partner, and destination of the food aid, and the month the first bid was submitted for the solicitation line; and ε_i is the error term. y is the natural logarithm of the shipping rate for each solicitation line i. Using this model, we found that the overall shipping rate decreased by around 6 percent (see table 7); because the dependent variable is a natural logarithm, a coefficient β corresponds to an approximate percentage change of 100 × (e^β − 1). We also found that the shipping rates on foreign-flag vessels decreased by around 8 percent. The shipping rates on U.S.-flag vessels did not change. The results from the OLS regressions are robust to the inclusion of different sets of control variables. The estimates range from a decrease of around 6 percent to a decrease of around 8 percent. We do not control for a dummy for whether the solicitation line was for bulk or packaged food aid and for the type of commodity because controlling for them does not increase the adjusted R-squared by much. Compared with USDA, USAID is more flexible in applying the CPFA requirements. The percentage of food aid shipped on U.S.-flag vessels declined much more for USAID than for USDA after the changes in the CPFA requirements. Consistent with this difference between agencies, the shipping rates for USAID decreased, while those for USDA did not (see table 8). The overall shipping rate decreased by around 9 percent for USAID, likely because of the increased use of foreign-flag vessels after the change and the 9 percent decrease in the shipping rates on those foreign-flag vessels. The shipping rate on U.S.-flag vessels increased by around 5 percent for USAID. For USDA, the overall shipping rate and the shipping rate on U.S.-flag vessels were unchanged, while the shipping rate on foreign-flag vessels decreased by around 7 percent. Potential limitations and sensitivity analysis.
The main limitation with our methodology is that we cannot control for all factors that may affect shipping rates. For example, the overall commercial shipping rates could have declined during the period of our data from April 2011 through fiscal year 2014. Some of the decrease in shipping rates could have been due to other factors that caused the overall decline instead of the changes in the CPFA requirements. While we did not obtain data on commercial shipping rates, we found two results that may ameliorate this limitation. Figure 12 adds validity to our methodology in identifying the effect of the changes in the CPFA requirements on USAID’s shipping rates. We estimated 39 OLS regressions, 1 with each month between May 2011 and July 2014 as a dummy variable. For each month, we created a dummy variable set to 0 for any solicitation line before that month and to 1 for any solicitation line after that month. We estimated 39 separate OLS regressions with each regression controlling for 1 month dummy variable in addition to the tonnage, implementing partner, and destination of the food aid, and the month the first bid was submitted for the solicitation line. Figure 12 shows the coefficient and the 99 percent confidence interval around the coefficient on the month dummy variable from each of the 39 OLS regressions. We found that the smallest coefficients are for July and August 2012. These results show that the largest decline in USAID’s shipping rates occurred in July 2012, the month of the CPFA changes. This suggests that the changes in the CPFA requirements had a larger effect on USAID’s shipping rates than any other changes that may have happened from April 2011 through fiscal year 2014. When we estimated the OLS regression models only for the period 6 months before and after the changes, we found even larger and statistically significant decreases in the shipping rates on foreign-flag vessels.
For this shorter period, there was likely to have been less change in other factors such as overall commercial shipping rates. For USDA, the decrease in the overall shipping rate became larger and statistically significant, likely because of the large decrease in the shipping rates on foreign-flag vessels. However, for USAID, the decrease in the overall shipping rate was not statistically significant, likely because of the larger standard errors. Finding the decrease in the shipping rates on foreign-flag vessels for this shorter period builds more confidence in the robustness of the result. The Cargo Preference Act of 1954 requires civilian federal government agencies to ship on U.S.-flag vessels only to the extent that such vessels are available at “fair and reasonable rates.” The fair and reasonable provision helps ensure that U.S.-flag vessels do not overcharge federal agencies required to ship on U.S.-flag vessels. The Maritime Administration (MARAD) will find a rate to be fair and reasonable if it is less than or equal to MARAD’s estimate of the cost of the voyage in question plus a reasonable profit. MARAD calculates fair and reasonable rates for ships chartered to carry shiploads of bulk and packaged agricultural commodities. Rates are also determined for bulk agricultural commodities carried by liner-service vessels. For other cargoes carried on liner-service vessels, conference rates are paid, which MARAD maintains are inherently fair and reasonable. MARAD makes a separate cost estimate for each voyage that it is asked to investigate. It bases its estimate on operating cost information supplied annually by the ship owner and certified by a corporate officer and on information specific to the voyage in question. Additionally, MARAD factors the return trip into the cost of the voyage. MARAD assumes that the vessel will return empty of cargo.
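The fair and reasonable test described above reduces to a simple comparison against estimated voyage cost plus a profit allowance; the default profit factor below is illustrative, not MARAD's actual figure for any given year:

```python
def is_fair_and_reasonable(offered_rate, estimated_voyage_cost, profit_factor=0.19):
    """Return True if the offered rate does not exceed the estimated voyage
    cost plus a reasonable profit (the 19 percent factor here is illustrative)."""
    return offered_rate <= estimated_voyage_cost * (1 + profit_factor)

# Example: against a $100-per-ton voyage cost estimate, a $110 rate passes
# and a $130 rate does not.
passes = is_fair_and_reasonable(110.0, 100.0)
fails = is_fair_and_reasonable(130.0, 100.0)
```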
If the vessel does carry cargo on the return trip, it must report this to the shipper agency, and if requested by the shipper agency, MARAD will make an adjustment to the fair and reasonable rate. MARAD also allows for a reasonable profit, based on a 5-year running average of profits for transportation companies in the Fortune 500 as well as the U.S. corporate sector in general. Currently, this profit factor is about 19 percent. MARAD requests ship owners to supply the following cost information each year: normal operating speed; daily fuel consumption at normal operating speed; daily fuel consumption while in port; type of fuel used; total capitalized vessel costs, for example, cost of vessel acquisition; vessel operating cost information for the prior calendar year; and number of vessel operating days for the vessels for the prior calendar year (this information is used to determine daily operating cost). Additionally, MARAD collects the following information for each voyage for which a fair and reasonable rate is calculated: port expenses for ports the vessel is scheduled to visit—for example, fees for pilots and customs charges; cargo expenses—for example, fees for stevedores and off-loading equipment; and canal expenses—for example, fees for tolls. Table 9 lists 27 options to improve the sustainability of the oceangoing U.S.-flag fleet, including food aid–carrying U.S.-flag vessels, derived from semistructured interviews with a nongeneralizable sample of 29 stakeholders knowledgeable about cargo preference for food aid (CPFA) issues. For each option, we list the number out of 18 stakeholders who selected the option as one of their top three options. For each option, we also list the number out of 7 maritime industry stakeholders who selected the option as one of their top three options and the number out of 11 other maritime stakeholders who selected the option as one of their top three options.
For a detailed description of our scope and methodology for these semistructured interviews, see appendix I. For a summary of the key results from our semistructured interview and follow-up effort, see figure 10 in this report. Our analysis also reveals some overlap between the options selected by maritime industry stakeholders and those selected by other maritime stakeholders. Specifically, 10 of the 27 options were selected by 1 or more stakeholders from each of the two categories. These 10 options were (1) increase the CPFA minimum requirement from 50 to 75 percent, (2) increase the CPFA minimum requirement from 50 to 100 percent, (3) eliminate the 3-year waiting period imposed on foreign vessels that acquire U.S.-flag registry before they are eligible for carriage of preference food aid cargo, (4) reinstate Ocean Freight Differential and Twenty Percent Excess Freight reimbursements, (5) increase all cargo preference to 100 percent, (6) subsidize U.S.-flag fleet vessels, (7) reform tort law that may negatively affect the U.S.-flag fleet, (8) increase the Maritime Administration’s monitoring and enforcement of its statutory authority, (9) increase funding for U.S. food aid programs, and (10) provide multiyear funding for U.S. food aid programs. However, no options were selected by 2 or more stakeholders from each of the two categories. Finally, 5 of the 27 options were not selected by any stakeholders from either category as part of their top 3 options. These 5 options were (1) give priority berthing rights to U.S.-flag fleet vessels; (2) reform U.S.
shipping standards to better align with international standards; (3) eliminate mariner nationality requirements for U.S.-flag fleet vessels; (4) harmonize customs duties, especially for NAFTA countries, and bills of lading so that shipped cargo is treated the same as cargo moving on land; and (5) increase cooperation between the U.S. Agency for International Development (USAID) and the U.S. Department of Agriculture (USDA). In addition to the contact named above, Judith Williams (Assistant Director), Ming Chen (Assistant Director), Fang He, Justine Lazaro, Victoria Lin, and Marycella Mierez made key contributions to this report. The team benefited from the expert advice and assistance of Carol Bray, Martin de Alteriis, David Dornisch, and Mark Dowling.
Cargo preference laws require that a percentage of U.S. government cargo, including international food aid, be transported on U.S.-flag vessels according to geographic area of destination and vessel type. One intention is to ensure a merchant marine—both vessels and mariners—capable of providing sealift capacity in times of war or national emergency, including a full, prolonged activation of the reserve fleet. The CPFA percentage requirement has varied over the years, and was reduced from 75 to 50 percent in 2012. Among other objectives, this report examines (1) CPFA's impact on food aid shipping cost and U.S. agencies' implementation of CPFA requirements and (2) the extent to which the implementation of CPFA requirements contributes to sufficient sealift capacity. GAO analyzed agency documents and bid data from April 2011 (when the food procurement database was implemented) through fiscal year 2014, and interviewed agency officials as well as maritime industry stakeholders. Cargo preference for food aid (CPFA) requirements increased the overall cost of shipping food aid by an average of 23 percent, or $107 million, over what the cost would have been had CPFA requirements not been applied from April 2011 through fiscal year 2014. Moreover, differences in the implementation of CPFA requirements by the U.S. Agency for International Development (USAID) and U.S. Department of Agriculture (USDA) contributed to a higher shipping rate for USDA. Following the July 2012 reduction in the minimum percentage of food aid to be carried on U.S.-flag vessels, USAID was able to substantially increase the proportion of food aid awarded to foreign-flag vessels, which on average have lower rates, helping to reduce its average shipping rate. 
In contrast, USDA was able to increase the proportion of food aid awarded to foreign-flag vessels by only a relatively small amount because it is compelled by a court order to meet the minimum percentage of food aid carried on U.S.-flag vessels by individual country, a narrower interpretation of the geographic area requirement than what USAID applies. Despite GAO's past recommendations, U.S. agencies have not fully updated guidance or agreed on a consistent method for agencies to implement CPFA, which would allow USDA to administer CPFA using a method other than country-by-country.
[Table: U.S. Agency for International Development's (USAID) and U.S. Department of Agriculture's (USDA) Cost of Cargo Preference for Food Aid (CPFA) Requirements, April 2011 through Fiscal Year 2014 (Dollars in millions)]
CPFA's contribution to sealift capacity is uncertain, and available mariner supply has not been fully assessed. While CPFA has ensured that a portion of U.S.-flag vessels carry some food aid cargo, the number of vessels carrying food aid and U.S. mariners required to crew them has declined. The available pool of sealift capacity has always met all of the Department of Defense's (DOD) requirements, without the full activation of the reserve sealift fleet. DOD's most serious scenario would require a full and prolonged—a period longer than 6 months—activation of the reserve sealift fleet as well as the use of commercial vessels. The Maritime Administration (MARAD) estimated that 3,886 mariners would be needed to crew the reserve surge fleet and 9,148 mariners to crew commercial vessels. MARAD estimated that at least 1,378 additional mariners would be needed to satisfy a full and prolonged activation, including the crewing of commercial vessels. However, the actual number of U.S.
mariners qualified and available to fulfill DOD's most serious scenario is unknown and MARAD has not fully assessed the potential availability of all qualified mariners to satisfy a full and prolonged activation. Recognizing that cargo preference serves statutory policy goals, Congress should consider clarifying CPFA legislation to define “geographic area” in a manner that ensures agencies can fully utilize the flexibility Congress granted to them when it lowered the CPFA requirement. The Secretary of Transportation should direct the Administrator of MARAD to study the potential availability of all qualified mariners needed to meet a full and prolonged activation of the reserve sealift fleet; DOT agreed with this recommendation.
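The mariner estimates cited above imply a simple gap calculation. A minimal sketch follows; note that the available-pool figure is inferred from the stated numbers rather than reported directly, and the 1,378 shortfall is a lower bound ("at least").

```python
# Reproducing the mariner shortfall arithmetic from MARAD's estimates.
# The implied available pool is an inference (total need minus minimum
# shortfall), not a figure stated in the report.

reserve_fleet_need = 3_886   # mariners to crew the reserve surge fleet
commercial_need = 9_148      # mariners to crew commercial vessels
total_need = reserve_fleet_need + commercial_need
print(total_need)            # 13034 mariners for a full, prolonged activation

min_shortfall = 1_378        # "at least 1,378 additional mariners"
implied_available_pool = total_need - min_shortfall
print(implied_available_pool)  # 11656 at most, implied by the shortfall
```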
Mr. Chairman and Members of the Committee: I am pleased to be here today to assist the Committee in its review of the draft strategic plans of the five federal regulators of depository institutions. These consultations are a step in implementing the Government Performance and Results Act of 1993 (GPRA or Results Act), whose purpose is to reduce the cost and improve the performance of the federal government. Mr. Chairman, you asked that we provide analyses and observations about the agencies’ draft strategic plans, including the strengths and weaknesses of each plan, the extent to which the agencies are experiencing particular challenges that face regulatory agencies in their attempts to measure performance, and any suggestions we might have for improvements in these draft plans before they are finalized and submitted to Congress in September. As its title indicates, the Results Act’s focus is on results. In crafting the Act, Congress recognized that congressional and executive branch decisionmaking had been severely handicapped in many agencies by the absence of the basic underpinnings of well-managed organizations. These agencies lacked clear missions; results-oriented performance goals; well-conceived agency strategies to meet those goals; and accurate, reliable, and timely program performance and cost information to measure progress in achieving program results. In recent years, Congress has established a statutory framework for addressing these long-standing challenges and for helping Congress and the executive branch make the difficult trade-offs that are necessary for effective policymaking. Improving management in the federal sector will not be easy, but the Results Act can assist in accomplishing this task. Under the Results Act, executive agencies are to develop strategic plans covering the period of 1997 through 2002. The Results Act requires agencies to consult with Congress and solicit the input of others as they develop these strategic plans.
Beginning with fiscal year 1999, executive agencies are then to use their strategic plans to prepare annual performance plans. These performance plans are to include annual goals linked to the activities displayed in budget presentations as well as to the indicators the agency will use to measure performance against the results-oriented goals. Agencies are subsequently to report each year on the extent to which these goals were met, provide an explanation if these goals were not met, and present the actions needed to meet any unmet goals. Congress can use the Results Act to provide the vital information that it needs to make better decisions. The congressional consultations on agencies’ strategic plans provide an important opportunity for Congress and the executive branch to work together to ensure that agencies’ missions are focused, goals are results-oriented and clearly established, and strategies and funding expectations are appropriate and reasonable. One of the reasons we are here today is to provide our perspective on these plans. We note that, although these strategic plans are not due until September, each agency we reviewed had prepared a draft plan. Overall, we found that each agency had made an effort to adhere to the Results Act, and we recognize that agency officials are still in the process of updating and revising the draft plans. We also examined whether the draft plans addressed major management challenges and included indications of interagency coordination. On the basis of our review of the draft plans, we found that each plan contained most of the components required by the Results Act. Three of the draft plans had all six components, and two draft plans had five of the components. In general, the draft plans reflected the statutory authorities and responsibilities of the federal regulators with respect to the institutions and matters within their jurisdictions. On the whole, the draft plans showed little evidence of interagency coordination.
Our analysis of individual plan components showed that the draft plans had mission statements that broadly defined the purpose of the agency and goals and objectives that were somewhat results-oriented and appropriate to the agency’s mission. The content of other components varied across agencies. For example, some agencies had useful discussions of approaches and strategies to achieve the goals and objectives, while others could have benefited from more discussion of the resources needed. Each agency discussed key external factors, but only one discussed how those factors would affect the achievement of its goals. None of the plans discussed how the external factors would be addressed. In general, two sections were most in need of improvement. Each agency could strengthen its section on the relationship between strategic and annual goals by explicitly discussing the link between these two types of goals. Also, each agency could improve its section describing how program evaluations were used and providing a schedule for future evaluations. Because of the complex set of factors that determine regulatory outcomes, measuring the impact of a regulatory agency’s programs will be a difficult challenge going forward. However, the use of program evaluations both to derive results-oriented goals and to measure the extent to which those goals are achieved is a key part of the process. To assess the draft plans’ components, we used our May 1997 guidance for congressional review of the plans as a tool. To determine whether the draft plans contained information on interagency coordination and addressed management problems we had previously identified, we relied on our general knowledge of each agency’s operations and programs and the results of our previous work.
The requirements of the Results Act and OMB guidance indicate that the following factors should be addressed within the six components of strategic plans:
(1) The comprehensive mission statement should be brief, define the basic purpose of the agency, and focus on core programs and activities.
(2) The description of general goals and objectives should contain general goals and objectives for the major functions and operations of the agency, elaborate on how the agency is carrying out its mission, contain a number of outcome-type goals, and be stated in a manner that allows a future assessment to be made on whether the goals are being achieved.
(3) The description of how the general goals and objectives will be achieved is to include discussion of operational processes, staff skills, and technologies as well as the human, capital, information, and other resources that are needed to achieve the goals and objectives, and outline how the agency will communicate strategic goals throughout the organization and hold managers and staff accountable for achieving these goals.
(4) A strategic plan is to describe how the performance goals included in the agency’s annual performance plans are related to the goals and objectives in its strategic plan.
(5) A strategic plan is to identify and discuss key factors external to the agency and beyond its control that could occur during the time periods covered by the plan and significantly affect the agency’s achievement of its strategic goals. The plan is to briefly describe each key external factor, indicate its link with a particular strategic goal or goals, and describe how the factor could affect the achievement of the goals.
(6) Program evaluations—objective and formal assessments of the results, impact, or effects of a program or policy—are to include assessments of the implementation and results of programs, operating policies, and practices.
The plan’s program evaluation section should briefly describe program evaluations that were used in preparing the strategic plan; evaluation methodologies, scopes, and issues addressed; and a schedule for future evaluations. As shown in figure 1, each of the agencies’ draft plans that we reviewed contained most of the six required components of the Results Act. Our assessment of whether the plans’ components met the requirements of the Act follows the figure. For one agency, we could not comment on whether the documents had outcome-oriented measures and performance goals or whether they could measure achievement of intended objectives. That agency’s draft plan did not include a schedule for future program evaluations, outlines of methodologies used, descriptions of evaluation scope, or details about particular issues to be addressed. FDIC’s draft plan identified key external factors, such as the potential merger of the Bank Insurance Fund and the Savings Association Insurance Fund and the resolution of differences in statutory and regulatory rules governing the banking and thrift industries. Also, FDIC’s draft plan reviewed several internal factors, including those related to financial accountability, organizational, and human resource issues. However, the link between key internal and external factors and particular goals and objectives was not described, and it was unclear whether and how key factors would influence goal achievement. FDIC’s draft plan did not describe how program evaluations were used or include a schedule for future evaluations. FDIC cited a quarterly performance reporting process, GAO and Inspector General reports, cost-benefit analyses, surveys of stakeholders, and other processes. However, the extent to which these reports and processes had been or would be used to develop or revise goals and objectives is unclear. Also, although the plan made reference to quarterly and ongoing program evaluation, the draft plan did not include a schedule for future program evaluations. OCC’s draft plan discussed all six components required by the Results Act.
The time frame covered by OCC’s draft plan was from 1997 to 2002. The draft plan’s mission statement broadly defined the basic purpose of the agency, which is to charter, regulate, and supervise national banks. The goals and objectives were results-oriented and seemed appropriate to meet the agency’s mission. For instance, one of OCC’s goals was to improve the efficiency of bank supervision and reduce the burden on banks by streamlining supervisory procedures and regulations. The approaches and strategies to achieve the goals and objectives, while not under the section labeled “description of how general goals and objectives are to be achieved,” were discussed to some extent in the draft plan under the seven objectives that OCC designed to meet its four goals. The objectives generally described the processes needed to meet goals. However, most of the objectives did not include a description of the resources OCC would need to meet the objectives and goals. The draft plan outlined performance goals to be included in the annual performance plan as an effort to address the component relationship between strategic goals and annual performance goals. For instance, the draft plan listed some output-related performance measures. However, the plan did not relate these measures to the goals. In addition, the plan lacked specific performance measures for some of the goals, such as promoting competition and ensuring fair access to financial services. The draft plan identified some key external factors that could affect the achievement of the goals and objectives and described how these factors might affect that achievement. These factors included industry consolidation, electronic money and banking activities, and changes in the competitive environment.
The draft plan included a section on program evaluation, but the plan neither discussed how evaluations were used nor included a schedule for future evaluations that outlined the general methodology used, a timetable, or the scope of evaluations. In its draft plan, OTS discussed five of the six components required by the Results Act. The time frame covered by OTS’ draft plan was from 1997 to 2002. The draft mission statement stated that OTS was to effectively and efficiently supervise thrift institutions; to maintain the safety, soundness, and viability of the industry; and to support industry efforts to meet housing and other community credit and financial services needs. The draft goals and objectives appeared to lay out a general strategy to meet the agency’s overall mission. However, the draft plan did not always state the goals and objectives in a way that would allow a future assessment of whether the goals would be achieved. For instance, OTS stated that one way to meet its goal to “improve credit availability by encouraging safe and sound lending in those areas of greatest need” was to “measure the degree to which the defined tasks of the OTS Community Affairs Program are met in any given year.” Yet the draft plan neither clearly stated what tasks were to be performed nor explained how accomplishing those tasks would ensure that overall credit availability could be improved in the areas of greatest need. Although the draft plan identified general approaches and strategies to achieve the goals and objectives, it did not describe the resources required to achieve each goal and objective. Also, the draft plan did not establish time frames to accomplish each goal and objective. The relationship between strategic goals and annual performance goals was not specifically discussed in the draft plan. However, that relationship was described in a separate performance plan.
For instance, to achieve the goal to “maintain and enhance a risk-focused . . . approach to supervising thrift institutions,” the performance plan suggested ways in which OTS could improve the value and consistency of examinations. The performance plan also identified measures to accomplish these tasks. The draft plan discussed three key external factors that could affect OTS’ accomplishment of its goals: (1) the performance of the U.S. economy, (2) the status of legislation to modernize the financial services industry, and (3) major interindustry consolidations. However, OTS did not link each factor to a particular goal or discuss how each factor might affect OTS’ success in meeting its goals and objectives. The draft plan discussed how program evaluations were used in preparing the strategic plan. Program evaluations were used to establish goals and objectives, but there was no schedule for future evaluations. NCUA’s draft plan identified key external factors, including the state of the economy, a legal challenge before the United States Supreme Court facing NCUA, and the possibility that credit unions may lose their congressionally mandated tax-exempt status. The draft plan stated that a dramatic downturn in the economy could have a negative impact on components of its annual performance plan, although it did not explain how key external factors could specifically affect NCUA’s achievement of its goals and objectives. Finally, the draft plan indicated that a loss in its court challenge or a loss of its tax-exempt status could have a negative impact on the safety and soundness of the nation’s credit unions. Although the draft plan had a section entitled “Program Evaluations,” it neither discussed how program evaluations were used to develop the strategic plan nor contained a schedule for future program evaluations. Instead, the section contained information on the performance measures linked to the strategic goals and objectives.
Generally speaking, each draft plan reflected the statutory authorities and responsibilities of the federal regulator with respect to institutions and matters within its jurisdiction. This reflected the comprehensive nature of federal regulation of insured depository institutions and the nation’s financial system. The federal regulatory agencies are charged with chartering or otherwise certifying the fitness of an institution to conduct business and (1) examining, (2) supervising, and (3) otherwise regulating institutions with respect to a broad range of complicated matters that include safety and soundness, consumer protection, and credit access. In addition, the Board is responsible for monetary policy and the nation’s payments system. There is a potential for various coordination problems among the Board, FDIC, OCC, OTS, and NCUA, yet only one of the draft plans indicated that coordination issues had been considered. All of these agencies have similar oversight responsibilities for developing and implementing regulations, conducting examinations and off-site monitoring, and taking enforcement actions for those institutions that are under their respective purview. We previously have reported that regulators, banking officials, and analysts believe that the multiplicity of regulators has resulted in inconsistent treatment of banking and thrift institutions in examinations, enforcement actions, and regulatory decisions. In our November 1996 report, we also noted that Congress and regulatory agencies have taken some actions to improve interagency coordination. For instance, Congress created the Federal Financial Institutions Examination Council (FFIEC) in 1979—comprised of the Board, FDIC, OCC, OTS, and NCUA—to promote consistency among these agencies, primarily in the area of financial examinations. 
In addition, since June 1993, the Board, FDIC, OCC, and OTS have operated under a joint policy statement that was designed to improve coordination and minimize duplication in examinations. However, the regulatory agencies’ draft plans that we reviewed did not discuss how they planned to coordinate with each other in the future, and only one mentioned the need for future coordination. Moreover, the draft plans did not refer to the possibility of future coordination activities involving FFIEC. Although mentioned in the draft plans of some of the affected regulators, a number of common external factors will present issues to these agencies. Because these issues are significant and likely to affect each of the agencies to some extent, their strategic plans could be more useful to the agencies, Congress, and other stakeholders if these challenges were more fully discussed in the plans. The issues include electronic innovation, new approaches to supervisory oversight, pending legislation on financial modernization, and consolidation in the financial services industry. For example, electronic innovation, both in the way transactions are conducted and in the way information is transmitted, represents a regulatory challenge for all five regulators. The regulators have generally adopted a wait-and-see approach to this area because they do not want to interfere with the pace or determine the direction of change. However, deciding if and when this policy should be altered and how regulation might be applied to electronic “banking” represents a major challenge. Regulators are also adopting new approaches to supervisory oversight, such as adding an additional risk factor and increasing the emphasis on the quality of risk management. Another issue facing all three bank regulators and OTS is the potential impact of legislation currently being considered to modernize the financial services industry. Provisions currently being considered by Congress could have far-ranging impact on each of the regulators.
For example, the elimination of the federal thrift charter and merger of OTS with OCC could affect all three bank regulators and OTS in terms of (1) workload, if thrifts are allowed to choose between a federal or state charter, and (2) supervisory focus, if the balance sheet structure of thrifts remains different from that of banks. Ongoing financial consolidation, in part related to interstate banking and branching, could also have important implications for the structure of the bank regulators. Each of these regulators has traditionally had a regional structure that has more or less evolved over time. As more and more banks become multiregional or even national institutions, the old regional structure may become less relevant, and a fundamental shift in geographical focus may be in order. OCC has recently announced a reorganization, which is at least partially due to the anticipated effects of interstate banking. The Federal Reserve System, due to its broader mandate and unique structure, faces its own set of challenges. For example, an increased use of electronic payments in services provided to the Department of the Treasury and other agencies may result in realignments or reductions in certain staff at particular reserve banks. These and other challenges may, in turn, raise questions about the structure of the Federal Reserve System, such as the size, number, and location of the Federal Reserve banks. In enacting the Results Act, Congress realized that the transition to results-oriented management would not be easy. The difficulties in moving to a results orientation could be especially pronounced for a regulatory agency. We analyzed a set of barriers facing certain regulatory agencies in their efforts to implement the Results Act in a June 1997 report. These barriers included the following: (1) problems collecting performance data, (2) complexity of interactions and lack of federal control over outcomes, and (3) results realized only over long time frames.
At least to some extent, each of these barriers is also applicable to federal regulators of depository institutions. For example, in focusing on the safety and soundness of depository institutions, any measure (such as the average capital to risk-based asset ratio or the number of failed institutions) would be largely determined by overall economic conditions, rather than by any particular regulatory intervention strategy. Finding approaches that could effectively disentangle regulatory intervention from the myriad of other forces influencing outcomes represents a difficult challenge for all of these agencies as they pursue results-oriented measurement. Long time lags between actions and possible results could also be an issue for regulators of depository institutions. Historically, there has been a considerable time lag between the time that a financial institution makes a questionable underwriting decision and the time that a loan goes bad and finally affects earnings and eventually bank capital, thereby potentially threatening the institution’s existence. Effective intervention by a regulator may also not show a result, at least not by many standard measures, for an extended period. This argues for constructing sophisticated measures that can account for long time lags and that are evaluated over a period longer than a year. Mr. Chairman, one of the most difficult challenges facing these agencies, as implementation of the Results Act proceeds, will be separating a program’s impact on the agency’s objectives from the impact of external factors that are often outside the program’s or agency’s control. Although developing performance measures or evaluating program impact is difficult in these situations, it is important that agencies make efforts toward that end.
We note that all the agencies’ plans we reviewed had one section that consistently was in need of development: the discussion of how program evaluations were used to establish goals and how such evaluations might be used in the future. Program evaluation also represents a potential source for researching and developing innovative methods for measuring results. Any new methods or research approaches developed by one agency could also be useful to others because, at least in the areas of supervision and regulation, there are many similarities in the activities undertaken by these agencies. This concludes my prepared statement. I would be pleased to respond to any questions you or other members of the Committee may have.
Pursuant to a congressional request, GAO discussed the draft strategic plans of the Board of Governors of the Federal Reserve System, Federal Deposit Insurance Corporation, Office of the Comptroller of the Currency, Office of Thrift Supervision, and the National Credit Union Administration. GAO noted that: (1) on the basis of its review of the draft plans, GAO found that each plan contained most of the components required by the Government Performance and Results Act; (2) three of the draft plans had all six components, and two draft plans had five of the components; (3) GAO's analysis of individual plan components showed that the plans had mission statements that broadly defined the purpose of the agency and goals and objectives that were somewhat results-oriented; (4) some agencies had useful discussions of approaches and strategies to achieve the goals and objectives, while others could have benefited from more discussion of the resources needed; (5) each agency discussed key external factors but only one discussed how those factors would affect the achievement of its goals; (6) none of the draft plans discussed how the external factors would be addressed; (7) in general, two sections were most in need of improvement; (8) each agency could strengthen its section on the relationship between strategic and annual goals by explicitly discussing the link between the two types of goals; (9) also, each agency could improve its section on how program evaluations were used and a schedule for future evaluations; (10) because of the complex set of factors that determine regulatory outcomes, measuring the impact of a regulatory agency's programs will be a difficult challenge going forward; and (11) however, the use of program evaluations both to derive results-oriented goals and to measure the extent those goals are achieved is an important part of the process.
To help fulfill its role as the nation’s health protection agency, HHS’s CDC conducts and supports research, including prevention research. Prevention research includes applied public health research that develops and evaluates health promotion and disease prevention and control strategies that are community- and population-based. Through legislation enacted in 1984, Congress authorized, and CDC later established, the PRC program to fund health promotion and disease-prevention research. The legislation mandated that funded PRCs be located at academic health centers capable of providing a multidisciplinary faculty with expertise in public health, relationships with professionals in other relevant fields, graduate training and demonstrated curricula in disease prevention, and a capability for residency training in public health or preventive medicine. The PRCs, the first of which were funded in 1986, also serve as demonstration sites for the use of new and innovative applied public health research and activities for disease prevention and health promotion. The PRC program is administered by CDC’s National Center for Chronic Disease Prevention and Health Promotion (NCCDPHP). CDC makes its PRC awards through a competitive process; there are currently 26 PRCs, located in 24 states, funded for fiscal years 2014 through 2019. Funded PRCs are able to compete for SIPs, which were created by CDC in 1993 to provide supplemental funding to the PRCs to design, test, and disseminate effective applied public health prevention research strategies.
According to CDC, eligibility for SIPs is limited to PRCs because the centers are “uniquely positioned to oversee, coordinate, and perform applied public health research that promotes the field of health promotion and disease prevention research due to their established relationships with multidisciplinary faculty and community partners.” Subject matter experts (SME) within CDC sponsoring units propose potential SIPs each year, depending on unit needs—e.g., particular research gaps that have been identified—and available funding. After being approved by the leadership of the sponsoring unit, the sponsoring units’ proposals are reviewed internally by NCCDPHP and others before being included in a SIP funding opportunity announcement (FOA). The SIP FOA is assembled by NCCDPHP’s extramural research group and, when complete, is posted publicly on the grants.gov website. SIP applications are subject to an external peer review process, as well as a secondary internal review by CDC officials. SIP awards are generally made on the last day of the fiscal year. Sponsoring units fund both individual SIPs, which are awarded to one or more PRCs to work independently on a particular research topic, and thematic network SIPs, which are awarded to multiple PRCs to work collaboratively on a research agenda related to a specific health issue, such as cancer prevention or brain health. In fiscal years 2014, 2015, and 2016, CDC announced the availability of 51 SIPs—of which 43 were funded. The 43 funded SIPs resulted in a total of 76 awards to 22 of the 26 PRCs. (See appendix I for detailed information on the SIPs awarded in fiscal years 2014 through 2016, including information on SIPs awarded by sponsoring unit, SIP funding by PRC, and a complete listing of all SIPs awarded during this period.) CDC publicly discloses information on SIP awards on its website.
Specifically, CDC has a PRC project database on its website that, as of July 2017, included information on SIPs from fiscal years 1999 through 2015, as well as other information related to the PRCs. The project database includes the SIP number, project title, principal investigator, PRC funded, and the CDC sponsoring unit. It does not, however, include funding amounts. In general, CDC officials we spoke with told us they will choose the SIP mechanism when seeking to fund prevention research that is community-based and would benefit from having access to a multidisciplinary group of researchers. Specifically, CDC officials from most of the sponsoring units we spoke to told us that they use the SIP mechanism when community participation is important to the research. For example, one sponsoring unit used a SIP for the development and testing of an integrated comprehensive communication strategy to promote vaccination for the human papilloma virus in the United States. The SIP was focused on developing strategies that engaged local or regional health systems, community-based organizations, and state health departments as key community partners, in order to enhance the acceptability of the vaccination among parents with vaccine-eligible children and to increase the likelihood that a provider would recommend the vaccination. CDC officials from one sponsoring unit also told us they use the SIP mechanism when they need to engage community leaders and members of the public in order to answer the research questions. For example, CDC used the SIP mechanism to fund a research project focused on enhancing the knowledge, skills, and capacity of community health advocates and leaders from community-based organizations, with the goal of providing participants with the skills necessary to assess local community health needs in order to improve community health.
According to CDC, an important aspect of the research was the evaluation of whether there was an increase in the skills and leadership capacities of participants and their influence on local improvements in their communities. In addition, CDC officials told us that they choose the SIP mechanism to conduct research when seeking to access researchers who have established partnerships with diverse population groups across the country. PRCs are located across the United States and are expected to have cultivated relationships with their local communities. For example, one sponsoring unit used a SIP to fund research on the barriers to colorectal cancer screening among South Central Asian immigrants, primarily Indians and Pakistanis, who have been shown to have low screening rates. The purpose of the research was to inform the development of culturally relevant strategies to increase colorectal screening. As such, to be awarded the SIP, a PRC had to demonstrate that it had established relationships within the South Central Asian community and an ability to recruit from these populations. In contrast, CDC officials explained that they would not choose the SIP mechanism if the desired research would be better suited for an entity other than an academic health center. For example:
- Officials from one sponsoring unit told us that a different mechanism was used for a project testing obesity prevention and management strategies because the project required working directly with health care service providers in the community, such as federally qualified health centers.
- Officials from another sponsoring unit chose not to use a SIP for a project to evaluate vaccine impact on recurrent respiratory papillomatosis (a disease in which tumors grow in the respiratory tract). This research was being conducted by the providers who care for the patients, as opposed to academic researchers.
CDC officials provided examples of other instances when they would not use a SIP, and would instead choose another mechanism to support the desired research. Specifically, officials from one sponsoring unit told us that when conducting a research project related to cervical cancer, they did the work through a contract in order to allow the SME to direct the research protocol (which included collaboration with organizations that maintained cancer data), as well as the deliverables and timeline for completion of the work. Sponsoring unit officials also told us that they will not choose the SIP mechanism when the research they want to fund is not focused on public health prevention, such as when the research is clinical or laboratory based, or when the timing of the research does not align with the PRC funding cycle (e.g., a longitudinal study or a study that would cross two PRC funding cycles). CDC officials told us that CDC SMEs’ relationships and collaboration with experts in the field—including federal and nonfederal experts—help inform the development of the research funding opportunities made available through SIPs, as well as other mechanisms. In our prior work, we found that interagency collaboration, which can include information sharing and communication among federal experts, may reduce the likelihood of unnecessary duplication. According to officials from all of the sponsoring units we spoke with, SMEs have developed collaborative relationships with federal and nonfederal experts in their fields; these relationships develop through SMEs’ participation in workgroups, advisory committees, and joint projects, as well as through informal interactions at in-person meetings and conferences. (See table 1 for examples of the workgroups and advisory committees in which CDC SMEs participate.)
These interactions, as well as the SMEs’ review of the scientific and nonscientific literature, are used to determine gaps in knowledge and inform the research proposed to be funded through SIPs or other mechanisms. CDC SMEs’ collaboration with federal and nonfederal experts may lead to the development of specific research projects that are funded through a SIP. For example:
- SMEs from one sponsoring unit identified a series of research gaps that existed within skin cancer prevention while working on the Surgeon General’s Call to Action to Prevent Skin Cancer report with multiple federal agencies, including the Food and Drug Administration, the National Institutes of Health’s (NIH) National Cancer Institute, the Environmental Protection Agency, and the Office of the Surgeon General. CDC developed a SIP to address one of these gaps—assessing the knowledge, attitudes, and beliefs about skin cancer in order to develop and test communication strategies for skin cancer prevention, specifically for adults aged 18 to 49.
- SMEs from another sponsoring unit participate in the Interagency Coordinating Committee on the Prevention of Underage Drinking with 15 federal partners, including the Substance Abuse and Mental Health Services Administration, the Federal Trade Commission, the Department of Justice, and NIH’s National Institute on Alcohol Abuse and Alcoholism. According to CDC, this group meets regularly to discuss their work and efforts to address prevention of underage drinking. Based on this collaboration and the review of other resources, including research projects and reports by committee members, CDC SMEs determined there was a gap in knowledge related to monitoring youth exposure to alcohol marketing on the Internet, and developed and awarded a SIP in 2014 to address this issue.
- In 2011, CDC SMEs hosted an expert conference—including federal and nonfederal researchers, health care providers, and representatives from advocacy groups—to discuss how patients and health care providers can communicate effectively before, during, and after prostate cancer screenings. Recommendations from this conference resulted in a 2014 SIP focused on the development of a multimedia decision aid to help patients and their family members understand treatment options after a positive prostate cancer diagnosis.

Collaboration with experts may also increase the resources available for a specific research project. For example, officials from the CDC unit that sponsors the National Cancer Prevention and Control Research Network SIP told us that the network is jointly funded with NIH’s National Cancer Institute. This joint funding allows for an expanded pool of resources, and officials stated that the network is able to achieve more than any individual PRC could achieve on its own. CDC officials told us that the knowledge gained through coordination and information sharing also mitigates the potential for duplication of research efforts. For example, officials from one sponsoring unit told us that as a part of their ongoing discussions with NIH on sexually transmitted disease prevention, they learned of research NIH was conducting that was related to a SIP that CDC planned to fund. CDC decided not to fund the SIP and instead entered into a joint funding arrangement with NIH to address its research needs. The joint funding arrangement allowed CDC and NIH to expand the number of sites involved in the research NIH already had underway.

The main advantage of limiting eligibility for SIPs to the PRCs is the ability to rapidly initiate high-quality research, due to the infrastructure and relationships the PRCs have in place, according to officials from CDC, outside organizations, and PRCs.
For example, officials we spoke with from one PRC told us that they typically learn from CDC if their application for a SIP has been successful in August of a given year, and awards are made at the end of September, with the expectation that the research should begin shortly thereafter. These officials added that because many SIPs are only providing funds for one or two years, there is no time to waste in getting the research up and running. Officials from one outside organization noted that this faster turnaround in getting the research started can result in faster publication of results. Officials from CDC and others told us that PRCs have infrastructure in place to do multidisciplinary research, which an official from one outside organization told us includes the ability to manage federal funds and recruit study participants. This infrastructure contributes to the speed with which PRCs can start a SIP. For example, officials from one CDC sponsoring unit told us that a PRC’s existing infrastructure means that it is not starting from scratch when it conducts research through a SIP. In addition, a representative from one outside organization said that it is a good use of federal resources to continue to invest in federally supported infrastructure—as in the case of offering supplemental funding to PRCs in the form of SIP awards. In addition to research infrastructure, officials also told us that PRCs have established relationships with community partners. One outside organization told us that these relationships are particularly important for prevention research, which often involves working with populations who may be reluctant to participate in research. Research has confirmed the need for community engagement when studies include disadvantaged groups. 
Specifically, a systematic review of the literature on strategies for increasing participation of disadvantaged groups in research concluded that researchers need to operate via community partnerships, because they can increase trust among the study population. Similarly, officials from one sponsoring unit told us about the importance of the PRCs’ credibility in the neighborhoods where the research is being done. Given this, for some SIPs, the FOA explicitly requires PRCs to outline their partnerships with community organizations in their applications or notes that descriptions of these relationships will be considered when applications are scored by reviewers. For example, the fiscal year 2015 FOA for a SIP that was to identify means for increasing screening rates for breast and cervical cancer in Muslim women asked that applicants “describe and provide evidence (such as supporting letters and publications) of sufficient institutional, community and other necessary support for carrying out this project.” In addition, officials from CDC and outside organizations noted that there are benefits of having eligibility limited to entities that have already been vetted, such as an increased likelihood of the research being successful. To become a PRC, the eligible academic institutions must go through a competitive peer review process, through which they are evaluated based on their ability to contribute to improved community and population health, impact public health programs and practice, and advance the field of public health promotion and disease prevention, among other things. Because of this vetting, an official from one outside organization told us that PRCs are likely to be successful in their work—mitigating the risk faced were an award to be made to an unknown entity.
An official from another outside organization noted that CDC sponsoring units work closely with the PRCs on implementing the SIPs, and it is beneficial that the PRCs are “known entities” who have established relationships with CDC. Officials from CDC and outside organizations identified a few potential disadvantages to limiting eligibility for SIPs, including the potential for reduced access to expertise outside of the PRCs and the risk of being unable to conduct desired research at the desired time. Specifically, officials from outside organizations told us that there may be reduced access to the expertise of researchers from other universities or other entities. CDC officials and officials from PRCs said that PRCs have the ability to bring in expertise from outside the PRC institution through subcontracts with other entities, which could help alleviate this concern. For example, for a SIP on skin cancer prevention messaging, the PRC officials at the University of Pennsylvania told us they had multiple subcontracts, including one with a researcher at another university with expertise on indoor tanning. Officials from two outside organizations stated that potential for innovation and new approaches may be missed if new entities are not eligible for SIPs or added to the pool of PRCs. Although there is a competition for PRCs every 5 years, the PRCs have been fairly stable in recent years—2 new PRCs were added in fiscal year 2014, while more than half (15 of 26) of the current PRCs have been continuously funded by CDC for at least 15 years. Because SIPs are only announced once per year and are awarded at the end of the fiscal year, one other potential disadvantage of SIPs’ limited eligibility is the risk that CDC’s desired research may not be conducted, or may not be conducted at the desired time. If the research project is ultimately not funded through the SIP, there is not sufficient time within the fiscal year to pursue an alternative mechanism.
CDC sponsoring unit officials described instances where they did not receive any applicants or did not receive enough qualified applicants for individual SIPs and thus were not able to fund a SIP or made fewer awards than planned. In fiscal years 2014 through 2016, 5 of 51 SIPs included in the FOAs were ultimately not funded because they received no applications or the applications received were not a good fit for the desired project. For an additional 2 SIPs, the sponsoring units funded fewer awards than planned because they did not receive enough quality applications. We provided a draft of this report to HHS for review. HHS provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of the Department of Health and Human Services, the Director of the Centers for Disease Control and Prevention, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Information on Centers for Disease Control and Prevention (CDC) Special Interest Projects (SIPs) Tables 2, 3, and 4 below present data on SIPs awarded in fiscal years 2014 through 2016. In addition to the contact named above, Michelle B. Rosenberg (Assistant Director), Julie T. Stewart (Analyst-in-Charge), and Romonda McKinney Bumpus made key contributions to this report. Also contributing were Sam Amrhein and Jacquelyn Hamilton.
CDC, an agency within HHS, created the SIP program in 1993 as a supplemental funding mechanism to support health promotion and disease-prevention research being done at its PRCs. Currently, there are 26 PRCs. In fiscal years 2014 through 2016, CDC awarded more than $40 million for SIPs. SIP topics vary from year to year but are to be aligned with public health priorities, such as the Healthy People 2020 Objectives—HHS's 10-year national objectives for improving Americans' health. SIPs are sponsored and primarily funded by CDC organizational units, referred to as sponsoring units. House Report 114-195 included a provision for GAO to review the SIP program. This report describes (1) what research CDC chooses to fund through the SIP mechanism, and (2) what have been identified as advantages and disadvantages of SIP eligibility being limited to PRCs. GAO reviewed documents from CDC and analyzed CDC data on SIPs awarded in fiscal years 2014 through 2016. GAO also interviewed CDC officials, including officials from 5 of the 10 sponsoring units that together accounted for over 90 percent of SIP funding during this time period, officials from 4 PRCs with varying experience with SIPs, and 4 organizations with knowledge of prevention research. The Centers for Disease Control and Prevention (CDC) uses the Special Interest Project (SIP) mechanism to fund community-based prevention research that would benefit from a multidisciplinary group of researchers. SIPs are supplemental funding awards that focus on topics of interest or gaps in knowledge or research and can also support the development of state and local public health interventions and policies. SIPs are only available to CDC's Prevention Research Centers (PRC)—selected academic health centers at universities with schools of public health or medical schools with residency programs in preventive medicine. 
CDC officials said that they would choose the SIP mechanism when the research they want to fund is intended to involve community-based organizations or members of the community. They also use SIPs when they seek access to researchers who have established partnerships with diverse population groups across the country. They would not choose the SIP mechanism when the research they want to fund is not focused on public health prevention, including research that is clinical or laboratory-based; would be better suited for an entity other than an academic health center; or would be better funded through a contract to allow CDC to direct the research protocol. CDC's collaborations with experts in the field—including those at other federal agencies—help to inform its development of the research funding opportunities offered through SIPs. For example, CDC officials use information they learn through participation in multiagency workgroups and advisory committees to identify gaps in knowledge that SIP funding could help to address. CDC officials also stated that this collaboration can also help to avoid potential duplication of research. The key advantage of SIPs being limited to PRCs is the ability to rapidly initiate research, according to officials with whom GAO spoke—including officials from CDC, PRCs, and organizations with knowledge of prevention research. Factors cited as contributing to this ability included the research infrastructure and community relationships already established at the PRCs. Officials from CDC and outside organizations also identified a few potential disadvantages to limiting eligibility for SIPs, including the potential for reduced access to expertise from researchers or others who are not affiliated with the universities in which PRCs are located, although some noted that PRCs may bring in outside expertise through subcontracts with other entities. 
The Department of Health and Human Services (HHS) provided technical comments on a draft of this report, which GAO incorporated as appropriate.
Defense’s civilian personnel community provides Defense managers with the personnel management services and support needed to accomplish their missions, including recruitment, job classification, position management, training, career development, and benefits administration. Traditionally, the military services and Defense agencies have managed their civilian personnel service delivery organizations and systems through local civilian personnel offices located at or near military bases and installations all over the world. During the past 5 years, Defense has been attempting to reduce personnel management costs through the following actions. (1) Reducing the number of civilian personnelists. Personnelists provide face-to-face assistance to civilian employees, answering questions about such issues as life insurance, health insurance, and position classification. They process paperwork for new hires, promotions, awards, and a wide variety of personnel actions and assist in training, benefits administration, management/employee relations, recruitment, and staffing. In 1994, Defense reported that a single personnelist served about 67 employees. Defense’s goal was to reduce the number of personnel staff to the point where one personnelist served 88 employees by the year 2001 and 100 employees by the year 2003. As of June 30, 1998, Defense reported that it had cut 1,700 personnelists and had achieved a ratio of 1 personnelist to 77 employees. (2) Improving personnel management processes. To help increase the personnelist-to-civilian employee ratio, Defense is attempting to improve and automate its personnel management business processes. For example, it has automated and improved processes for (1) developing, tracking, and monitoring all personnel actions, (2) handling injury compensation claims, and (3) estimating retirement eligibility and benefits. 
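The staffing ratios reported above lend themselves to a quick arithmetic check. The sketch below is ours, not the report's, and rests on a simplifying assumption we are making for illustration only: that the civilian workforce stayed roughly constant between 1994 and 1998, so the 1,700-personnelist cut alone moved the ratio from roughly 1:67 to 1:77.

```python
# Back-of-the-envelope check of the reported staffing ratios.
# Assumption (ours, not the report's): the civilian workforce N was
# roughly constant, so cutting 1,700 personnelists moved the ratio
# from about 1:67 (1994) to about 1:77 (mid-1998).
def personnelists(workforce, ratio):
    # Staff needed at 1 personnelist per `ratio` employees.
    return workforce / ratio

cut = 1_700
# Solve N * (1/67 - 1/77) = 1,700 for N.
implied_workforce = cut / (1 / 67 - 1 / 77)
print(round(implied_workforce))                      # implied workforce size
print(round(personnelists(implied_workforce, 88)))   # staff needed at the 1:88 goal
print(round(personnelists(implied_workforce, 100)))  # staff needed at the 1:100 goal
```

Under this assumption the reported figures are mutually consistent: a workforce of roughly 877,000 would have required about 13,090 personnelists at 1:67 and about 11,390 at 1:77, a reduction of exactly 1,700.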
It has acquired an automated tool called RESUMIX, which helps personnelists analyze resumes of people applying for a position with Defense. It is also developing an interactive voice response system that enables employees to use a Touch-Tone phone to change selected data in their own personnel records.

(3) Creating regional centers. Defense is creating regional centers that will specialize in selected personnel management functions and reducing the number and size of local offices. It anticipates that specialization of labor within the regions combined with improved business processes will reduce operating costs. As of September 30, 1998, the Army had established all 10 of its planned regions, the Navy had established 7 of 8 planned regions, the Air Force had established its 1 region, and the Defense agencies participating in this initiative had established all 3 of their planned regions. Table 1 further illustrates the changes in personnel management that will occur through Defense’s improvement initiative.

At the beginning of this effort, Defense components operated a number of personnel management information systems that assisted in all aspects of personnel operations, such as developing position classification documents; preparing vacancy announcements; and processing appointments, reinstatements, transfers, promotions, retirements, and terminations. These systems were redundant and not interoperable, and Defense believed that they were antiquated. To modernize this environment, Defense eliminated the duplicative systems and used the Air Force civilian personnel management information system, located in San Antonio, Texas, to do all personnel processing. This legacy system meets Defense-unique personnel management requirements; is able to process Defense’s large-scale workload successfully; and because it operates in one location, it can be maintained by Air Force Central Design Activity (CDA) personnel with experience in operating and protecting systems.
However, Defense believed that there were a number of significant shortfalls with this mainframe system and, therefore, that the system should be replaced with a new COTS system. For example, according to Defense:
- the legacy system relied on outdated technology for its database structure, file update, and retrieval;
- manpower resources and costs needed to develop and maintain the system were extensive;
- the system required duplicative data entry;
- the system could only be accessed by personnelists—it could not be easily modified to provide access to civilian employees so that they could review and make prescribed changes to their own benefit, insurance, and other personnel-related data;
- modifications reflecting improvements in business processes were difficult to make; and
- the system was not Year 2000 compliant.
As a result, Defense acquired a COTS product from Oracle Corporation. In contrast to the legacy system, which operated on two 1970s-era mainframes, the new system will operate in a distributed, networked environment at regional and local offices. According to Defense, the system will enable any authorized civilian employee with a personal computer to directly access the system and to perform prescribed personnel-related operations or management tasks, can be easily modified to reflect improvements in business processes, will cost less to maintain and operate, and will be Year 2000 compliant. However, because the Oracle product was originally designed for use in the private sector, it did not satisfy all federal and Defense-unique requirements for personnel management. For example, it could not process federal personnel forms, such as the standard personnel action form (Form 52).
It did not address the federal General Schedule for salaries, Defense’s demonstration projects for pay banding, or the Defense-unique salary schedule for tens of thousands of foreign nationals who work for the Department overseas but do not get the same salaries or benefits as American employees. It did not have DOD-unique data for security and mobilization. In addition, it did not directly interface with Defense’s existing payroll system. As a result, the product needed to be modified and/or enhanced before it was deployed. The Civilian Personnel Management Service (CPMS), which was established in 1993 to provide departmentwide leadership for the civilian personnel business area, is responsible for managing the new system. CPMS acquired the system using an indefinite delivery, indefinite quantity (IDIQ) DOD contract under which Oracle Corporation was a participating vendor. Defense components are responsible for purchasing and maintaining hardware to support the new system. CPMS has assigned the Air Force Central Design Activity (CDA) responsibility for managing technical modifications to the system under the contract. According to CPMS, the system is currently in the test phase. Once system qualification tests are completed, the system will be deployed to four test sites during January and February 1999. The Air Force Operational Test and Evaluation Center (AFOTEC) will then evaluate the test results to ensure that the system meets user needs in an operational environment. Deployment to the remaining sites is expected to begin in late 1999 and end by March 2000. DOD officials stated that this schedule is likely to slip at least 2 months to ensure that the system is fully tested and meets user needs before it is fully deployed.
The cost of Defense’s personnel initiative is estimated to be $1.2 billion over its estimated 15-year life cycle (fiscal years 1995 through 2009), of which Defense reports that over $300 million has been spent through the end of fiscal year 1998. These totals are itemized in table 2. Defense considered only a narrow range of alternatives for improving personnel operations before deciding to regionalize personnel centers. This left the Department without assurance that it was pursuing the most cost-effective and beneficial approach. After it decided to regionalize, Defense did not follow a sound process for selecting regions; it did not require services and agencies to base their decisions on data-driven analyses. Consequently, the analyses of the services and agencies were inconsistent: each considered different factors in choosing regions, and none included a formal cost/benefit analysis. This process resulted in the wide disparity in the number of regions chosen, and it left Defense without the objective data needed to determine whether any of the choices were optimal. Before embarking on a major, costly initiative to improve personnel management, sound practices call for examining a range of improvement options, including those that would radically change the current way of doing business. For example, in addition to, or instead of, regionalizing, Defense could have considered (1) outsourcing its personnelist computer operations or all of its civilian personnel management services, (2) integrating its personnel/payroll management systems, (3) creating regions that cross-service between agencies and the military services, (4) consolidating local personnel offices that are near each other to provide face-to-face services to multiple bases or installations out of the same office, and/or (5) centralizing all, or portions of, civilian personnel management in DOD.
By thoroughly considering these and other choices, Defense would have ensured that the most cost-effective and beneficial alternative was chosen before deciding to invest $367 million in the project and that any systems acquired or developed would support the most efficient and effective business processes. Defense did not examine all of these promising alternatives. Instead, it considered only the possibility of outsourcing computer operations with the National Finance Center. This option was determined to be infeasible. Defense did not analyze other alternatives, including cross-servicing, integrating payroll/personnel systems, collocating personnel offices, DOD-wide management of personnel operations, or outsourcing all of its personnel operations. In addition, once it decided on regionalization, Defense did not follow a sound process for selecting the regions. For example, Defense did not require the services and agencies to base their selections on data-driven analyses. In fact, the services were allowed to select whichever and as many regions as they wanted as long as they achieved at least a 1 to 88 personnelist-to-civilian employee ratio. Consequently, the services considered different factors in choosing their regions. However, none based their selections on a thorough cost/benefit analysis. This resulted in the wide disparity in the number of regions chosen, as the following examples illustrate. The Army and the Navy considered the distance between regions, proximity to the installations they serviced, and coverage across time zones, as well as some costs associated with establishing and operating regions and transferring personnel. After considering these factors, the Army selected 10 regions and the Navy selected 8. It was decided that the regions would be responsible for about 60 percent of the work, while local offices would be responsible for about 40 percent. Neither the Army nor the Navy conducted cost/benefit analyses in making their decisions. 
Nor did they consider the costs of personnel work processes or the relationship between per capita servicing costs and region size. Because it had already demonstrated that it could reduce overhead and technology costs and facilitate standardization in service and business processes by collocating the civilian personnel center with its military center, the Air Force decided to use a single Air Force personnel center to serve all of its personnel. The Air Force decided that its local offices would continue to be responsible for about 53 percent of the work. While Defense allowed the services wide latitude in choosing their regions, it directed that its agencies be serviced by three regional offices. The two largest agencies—the Defense Finance and Accounting Service and the Defense Logistics Agency—were directed to establish their own regions and the Washington Headquarters Service was directed to serve as a regional personnel office for the smaller agencies. The Defense Finance and Accounting Service selected the location for its regional center based on the fact that it had already started to regionalize personnel operations there. The Defense Logistics Agency selected the location for its regional center after considering the location and space availability of its depots. However, neither conducted formal cost/benefit analyses in choosing their regions or considered the cost of personnel work processes and the relationship between per capita servicing costs and region size. CPMS officials cited several reasons for taking this approach. First, they pointed out that CPMS had no authority to require the services and agencies to base their decisions on thorough, data-driven analyses or, in fact, to require that they adopt any standard personnel system or approach at all. At the same time, they noted that the military services had a vested interest in maintaining the status quo and had the independent budget authority to see that the status quo was preserved. 
Second, Defense lacked basic cost and performance data for examining options, including data on the cost of personnel work processes and the relationship between per capita servicing costs and region size. Third, the agency was directed in 1994 to implement the Office of the Secretary of Defense’s (OSD) recommendations quickly, i.e., to reduce the number of personnelists to a ratio of one personnelist to every 88 civilian employees by fiscal year 1998. CPMS officials held that this did not allow time to develop objective data and rigorously examine alternatives. The 1 to 88 goal was later extended to the year 2001. Fourth, CPMS officials stated that because personnelists account for most of the costs of performing personnel functions, while systems, facilities, and operations constitute relatively smaller costs, Defense would accrue significant cost savings regardless of the number of regions selected, as long as it achieved the 1 to 88 ratio. Nevertheless, several of the alternatives Defense ignored offered the opportunity to achieve far greater savings while streamlining personnel operations, as the following examples illustrate. By consolidating some or all of its personnel management, Defense could reduce the number of staff who perform duplicative overhead functions. As of June 1998, there were 886 people performing civilian personnel management and oversight functions at component headquarters and major command levels, at a cost of about $63 million annually. Furthermore, if Defense had centralized management of departmentwide personnel operations, it could take a departmentwide perspective in deciding which local offices and which regions should be consolidated. Cross-servicing could have enabled Defense to further consolidate regional offices and reduce duplicative overhead costs. Some Defense components have already found this alternative to be beneficial. 
The military services, for example, are doing some cross-servicing with employees in remote locations, and the Washington Headquarters Service is servicing the smaller Defense agencies as well as some federal agencies, including the Office of Personnel Management. Additionally, having local personnel offices service multiple bases or installations could further reduce duplicative overhead costs. Integrating payroll and personnel systems could have helped Defense reduce system operation and maintenance costs as well as further streamline and improve personnel and payroll management business processes. In fact, after considering the potential benefits of this alternative and its feasibility, the Defense Science Board recommended it as a solution for military personnel in 1996. While it may have required more time and greater management commitment to change Defense practices, the potential for substantially greater savings and efficiencies should have compelled Defense to perform a rigorous analysis of all alternatives and to select the one proven most cost effective.

Answer: Defense did not adequately apply the three requirements of the Clinger-Cohen Act of 1996 we reviewed, which are designed to maximize the value of major investments. While the act was passed after Defense initiated its development of DCPDS, the act’s requirements reflect basic and widely accepted principles of sound system acquisition management. Similar practices are also called for by Defense’s own system acquisition regulations and guidelines, Office of Management and Budget (OMB) guidance, and other legislative requirements effective at the time DCPDS decisions were made, including the Government Performance and Results Act of 1993, the Federal Acquisition Streamlining Act of 1994, the Paperwork Reduction Act of 1995, and the Chief Financial Officers Act of 1990. 
The Clinger-Cohen Act requires federal agencies to focus on the results achieved through information technology investments while streamlining the federal information technology (IT) procurement process. Specifically, this act introduces much more rigor and structure into how agencies approach the selection and management of IT projects. Although the act was passed after Defense decided to develop a new personnel management system, its principles are based on practices that are widely considered to be integral to successful IT investments. We examined whether Defense applied the following three requirements of Clinger-Cohen, which are designed to maximize the value of a major investment such as DCPDS. (1) Agency heads should analyze the missions of the agency and, based on the analysis, revise the agency’s mission-related and administrative processes, as appropriate, before making significant investments in IT supporting those missions. (2) Investments should be selected based on objective data, including quantitatively expressed projected net, risk-adjusted return on investment, and specific quantitative and qualitative criteria for comparing and prioritizing alternative information system projects. (3) Agency heads should ensure, through the use of performance measurements, that mission-related benefits are defined and assessed for all IT investments. Defense did not reengineer its personnel processes before investing in the new system. Before initiating development, CPMS and the individual services conducted an extensive effort to identify and document the preproject business processes at the local offices. Most of the improvements they made to these operations were minor. For example, they developed automated tools to help personnelists analyze resumes and to track civilian employee costs. However, for the most part, these initiatives did not involve radical or major changes to existing processes. 
As noted in the previous section, Defense considered only the option for outsourcing computer operations and failed to consider other alternatives that had the potential to provide significantly greater benefits, such as integrating personnel and payroll systems, centralizing personnel management, or cross-servicing. Because Defense did not examine these options, there is no evidence that the personnel management system acquired will support the most effective way of doing business or provide optimal return on investment. Costs, benefits, and returns on investments were not adequately analyzed before Defense acquired the Oracle package. Defense informally surveyed the potential market of COTS products and selected products from PeopleSoft, Inc., Integral Software Systems, Inc., and Oracle Corporation for evaluation. In evaluating these products, a DOD team considered various characteristics of the software products, including functionality, technical merit, and cost. However, Defense did not perform a rigorous analysis of costs, benefits, and returns on investments for these products before deciding to acquire the Oracle product, nor did it rigorously analyze the other available commercial products or the possibility of continuing to use the legacy system. The importance of developing complete and accurate analyses of the costs/benefits and returns of system alternatives is underscored by several governmentwide requirements in addition to the Clinger-Cohen Act. For example, OMB’s Circular A-130, Management of Federal Information Resources, calls on agencies “to conduct benefit-cost analyses to support ongoing management oversight processes that maximize return on investment and minimize financial and operating risks for investments in major information systems and on an agencywide basis.” Likewise, Supplement to OMB’s Circular A-11 (July 1997), Part 3, Capital Programming Guide Version 1.0, and OMB Bulletin No. 
95-03, Planning and Budgeting for the Acquisition of Fixed Assets, state that “the planning for fixed asset acquisitions should be based on a systematic analysis of expected benefits and costs.” Because Defense did not perform these analyses, it does not know if it chose the best system. Once an alternative is selected, Defense regulations require that an economic analysis be prepared to compare the selection against the status quo. This analysis establishes baseline life cycle costs, estimates benefits for the new system, and calculates expected return on investment. However, Defense did not perform an economic analysis before acquiring the new system. In addition, the analysis that Defense performed after the initiative was underway did not separate the costs and benefits of the system from the costs and benefits associated with cutting personnel and regionalizing. As a result, Defense still does not know if it chose the best business process alternative. To measure how the Oracle product supports its personnel administration mission, CPMS developed four major mission performance measure categories to be collected by each service and Defense agency. These categories included (1) servicing ratio, (2) customer satisfaction, (3) process cycle time (e.g., how long it takes to process a specific personnel action, such as filling an opening or promoting an employee), and (4) regulatory compliance (i.e., whether personnel paperwork complies with applicable laws and regulations). The military services and Defense agencies then developed several detailed measures within the categories, and CDA and CPMS developed several information technology or system-level measures to gauge DCPDS’ contribution to the mission area, including process cycle time and system response time. However, because the military services have not agreed on two fundamental definitions, they will not be able to calculate these measures consistently or compare measures across services. 
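The practical effect of inconsistent definitions can be illustrated with a small sketch (all event names and dates below are invented for illustration): two components measuring the same hiring action, but starting the clock at different events, will report different cycle times for identical work.

```python
from datetime import date

# Hypothetical milestone dates for a single hiring action.
events = {
    "request_received":  date(1998, 2, 16),  # personnel office receives the request
    "vacancy_announced": date(1998, 3, 2),
    "offer_accepted":    date(1998, 5, 4),
}

def cycle_time_days(start_event: str, end_event: str) -> int:
    """Elapsed days between two milestones of a personnel action."""
    return (events[end_event] - events[start_event]).days

# One service counts from the date the request is received...
a = cycle_time_days("request_received", "offer_accepted")
# ...while another counts from the date the vacancy is announced.
b = cycle_time_days("vacancy_announced", "offer_accepted")

print(a, b)  # 77 63 -- the same action yields two different "fill times"
```

Until the services settle on one definition of the start and end events, such figures cannot meaningfully be compared across DOD.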
First, the military services could not agree on how to define the start and end date for the process of filling a position or whether certain personnel actions (rejecting a list of qualified job applicants, for example) would be considered part of the process for filling a position. Second, they could not agree on a common definition of “paperwork errors.” Because the military services are not using common definitions, some critical performance measures will not be comparable across DOD. In addition, Defense does not have baseline performance information on how long it takes to fill a position or on the accuracy of personnel paperwork. As a result, it will not be able to accurately assess whether the system has improved mission performance in these areas or by how much.

Answer: DCPDS is not a duplicate of OPM’s Employee Express system. OPM’s Employee Express system is designed to be used in conjunction with agencies’ existing personnel and payroll systems. It does not perform all basic personnel and payroll functions. Instead, it allows employees to interface with the existing personnel and payroll systems. For example, Employee Express enables a federal civilian employee to use a Touch-Tone phone or a personal computer connected to the Internet to make changes to certain data in his/her automated personnel/payroll records. The new DCPDS system is to eventually replace existing DOD personnel systems. It is intended to support the full range of core functional requirements needed by Defense for an automated human resources management system, including position management and classification, recruitment and staffing, personnel action administration, benefits administration, labor-management and employee relations, work force development, and retention and reporting. 
These requirements are defined in a November 1997 study by the Human Resources Technology Council, an interagency group associated with the President’s Management Council and chaired by the Office of Personnel Management. Although Defense civilian employees will not be able to use the Employee Express system to make changes to DCPDS data, Defense plans to add Employee Express-type features at a later date that will allow changes to be made using a Touch-Tone phone or a personal computer connected to the Internet.

Answer: Defense leadership was aware that the COTS package it acquired would need to be substantially modified in order to support federal and Defense-unique personnel requirements, although the full extent of the modification was not known. According to the Acquisition Program Manager, Oracle had orally agreed not to charge Defense for the modifications it was making to the system because it believed it could market the package to other federal agencies after it was “federalized.”

Answer: Defense has not identified and mitigated significant risks associated with its acquisition. Specifically, as discussed below, Defense does not yet know (1) if the modifications will satisfy DOD needs and provide required functionality and performance, (2) how it will handle future system modification, (3) how it will maintain the system, (4) how it will protect sensitive data in the system, and (5) how it will ensure the continuity of core civilian personnel operations in the event of Year 2000 failures. Defense has no assurance that the modified product being developed by Oracle will meet all its needs. It does not know whether Oracle can provide all required functionality and performance or deliver it on time. Although Defense worked closely with Oracle to define requirements and test the changes that were made to the COTS package, it acquired the system before these modifications were completed and before the modified product could be tested. 
As a result, Defense faces the risk that the system it has already acquired may not meet all its requirements. This risk could have been avoided by waiting for Oracle to produce the “federalized” product and thoroughly testing it before purchasing it. Compounding the risk that the system will not meet Defense requirements is the fact that Defense has not secured the legal right to modify and upgrade the package it has acquired. CPMS obtained a software licensing agreement for 3 years (with an option to extend to 8 years) that provides for Oracle to correct programming errors found in its product. However, the agreement does not require Oracle to provide upgrades to DOD’s modified product at the same time and at the same cost as it provides upgrades to its private sector commercial product. As a result, Defense has no assurance that Oracle will make future versions of the software, or future needed modifications, available at a reasonable cost, and Defense’s version of the Oracle product could therefore become obsolete. In addition, the agreement does not specify whether Oracle will make DOD-required modifications to its customized product or how much Oracle will charge for such work. CPMS has not taken several actions that are essential to ensuring that the system is adequately maintained. First, CPMS has not yet developed agreements between the DCPDS partners that define each partner’s responsibility for systems, operations, maintenance, and security. Whereas the legacy system was centrally maintained, the military services and Defense agencies will be responsible for maintaining the new system hardware and related local area networks. It is critical that CPMS develop agreements with its DCPDS partners to ensure effective, efficient, and secure systems operations and maintenance. 
Second, CPMS has not yet established a configuration control board composed of DCPDS users to assist in deciding what changes need to be made to the system once it is deployed and to prioritize change requests. As noted in Defense’s Program Manager’s Guide to Software Acquisition Best Practices, configuration management is vital to the success of any software effort because it prevents uncontrolled, uncoordinated changes to shared project software and products (documentation and test results, for example). Third, CPMS has not decided who will provide technical assistance to the personnel sites operating the system. CDA currently performs this function; however, CPMS has not decided whether to continue using CDA after deployment or to outsource this function. Fourth, CPMS has not yet developed agreements with DCPDS interface partners, which include the Office of Personnel Management and the DOD agencies responsible for payroll, security, and manpower systems. As noted in Defense’s Program Manager’s Guide to Software Acquisition Best Practices, interfaces constitute essential elements of the system but are not completely controlled by the developer. As a result, the guide recommends that explicit written agreements with interface partners be developed to ensure that the partners clearly understand their roles and responsibilities. Protecting the new system and its data is even more difficult than protecting the legacy system and its data. Whereas the mainframe-based legacy system operated in one location and was maintained by CDA personnel with experience in protecting information systems, the new system will be distributed to 22 centers and many local offices where staff have little or no experience in providing the type of security required for DCPDS. Furthermore, both systems are vulnerable to outside computer attacks because they use an unsecure telecommunications network to transmit data. 
According to our Executive Guide: Information Security Management, there are five key principles for managing these types of risks, identified by studying private and government organizations with reputations for having good information security programs. First, organizations should assess their risks and determine their security needs. Second, they should establish a central management focal point for security issues. Third, they should implement appropriate policies and related controls. Fourth, they should promote security awareness. Fifth, they should continually monitor and evaluate policy and control effectiveness. An important factor in effectively implementing these principles is linking them in a cycle of activity that helps ensure that information security policies address current risks on an ongoing basis. A security risk assessment was performed for the new system, a central security focal point was established, and some effective measures were implemented, including a software application that can identify and notify appropriate officials of unauthorized or suspicious attempts to access personnel data and produce summary audit reports highlighting unauthorized access attempts. However, Defense has not implemented appropriate departmentwide or DCPDS-specific security policies and related controls, nor has it effectively promoted security awareness, as indicated by the following examples of identified weaknesses, which have increased both the legacy and modern systems’ vulnerability to computer attacks. Defense officials, including the Deputy Secretary of Defense, believe that encryption technology is necessary to maintain the secrecy and integrity of data that is transmitted over Defense’s unsecure networks. Encryption involves the transformation of original text (also known as plaintext or cleartext) into unintelligible text (also known as ciphertext). 
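As a simple illustration of that transformation (a toy XOR scheme, invented here for clarity; it is not a product suitable for protecting DCPDS data, which would require a vetted commercial encryption package):

```python
def xor_transform(data: bytes, key: bytes) -> bytes:
    """Toy cipher: XOR each byte of data with a repeating key.
    Applying the same operation twice recovers the original."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"GS-0201 personnel action"   # hypothetical record content
key = b"secret"

ciphertext = xor_transform(plaintext, key)   # unintelligible on the wire
recovered  = xor_transform(ciphertext, key)  # same operation reverses it

assert recovered == plaintext
assert ciphertext != plaintext
```

Without a DISA-established standard, each component choosing its own such scheme (or commercial product) risks exactly the incompatibility the next paragraph describes.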
However, the Defense Information Systems Agency (DISA), which is responsible for establishing computer security standards for the Department, has not established a standard encryption approach for sensitive but unclassified Defense data. In the absence of these standards, CPMS is planning to acquire a package for encrypting DCPDS data. As other organizations do the same, DOD may be faced with managing multiple, incompatible encryption products and approaches. The military services and Defense agencies recognize that firewalls, which are hardware and software components that check all incoming network traffic and block unauthorized traffic, are also essential to protecting sensitive data, and they have begun installing them. However, DISA has not established standards to ensure a consistent level of protection and to ensure that computer systems protected by firewalls can still communicate with each other. During our review, we identified several sites that were not maintaining adequate physical security over computer resources, indicating a lack of security awareness at the local level. For example, at two of the four local personnel offices we visited, the door to the computer room was unlocked. At one of these offices, one of the computer room’s walls consisted of a row of standard metal filing cabinets, which would have offered little obstruction to entry even if the door had been locked. At a third local office, the computer room was collocated with the office’s paper shredder, and personnel office staff were given unsupervised access to the room. Also, the network communications room at one of the local offices was unlocked, and personnel office staff were given unsupervised access to the room. Additionally, at one of the four regional offices we visited, the network communications room door was unlocked and tied open. Further, our review identified fire protection deficiencies at four offices: three local offices and one regional office. 
Specifically, the four offices did not have automatic fire detection equipment in or near the computer room. Our review also identified problems with disaster recovery procedures and planning for the regional and local offices. For example, we observed inadequate data backup and recovery procedures at one of the four regions visited. In this regard, the draft DCPDS Trusted Facilities Manual, dated February 2, 1998, noted that Defense had not resolved basic disaster recovery planning issues for DCPDS, such as “what data to backup, how often that data will require backup, the method of backup, and testing to ensure the backup has been accomplished successfully.” Additionally, the military services had not completed service-level or site-specific disaster recovery plans for their regional and local personnel offices. As of July 1998, CDA had drafted guidelines for the services and agencies to use in developing disaster recovery plans, but it did not have complete data on the number of regional and local offices that had finalized and tested site-level disaster recovery plans. After discussions on this issue, CDA began requiring all sites to provide these plans before becoming operational. However, neither CPMS nor CDA has determined how the plans will be tested or whether CDA will periodically verify that the disaster recovery plans are updated. The Year 2000 computing problem is rooted in the way dates are recorded and computed in automated information systems. For the past several decades, systems have typically used two digits to represent the year, such as “97” to represent 1997, in order to conserve electronic data storage and reduce operating costs. With this two-digit format, however, the year 2000 is indistinguishable from 1900, 2001 from 1901, and so on. As we reported earlier this year, the impact of computer failures resulting from the problem could be widespread, costly, and potentially disruptive to military operations. 
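The defect can be sketched in a few lines (a generic illustration, not code from any DOD system): a years-of-service calculation that works with two-digit years through 1999 produces nonsense once the year field rolls over to “00.”

```python
def years_of_service(hire_yy: int, current_yy: int) -> int:
    """Years of service computed from two-digit year fields."""
    return current_yy - hire_yy

# In 1997, an employee hired in 1975 is credited correctly...
print(years_of_service(75, 97))  # 22
# ...but in 2000 ("00"), the same arithmetic yields a negative figure.
print(years_of_service(75, 0))   # -75, instead of 25

# A four-digit year field, as used in the new DCPDS, avoids the problem.
print(2000 - 1975)               # 25
```

Any personnel computation that spans the century boundary, such as benefits eligibility, retirement dates, or pay anniversaries, is exposed to the same error.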
Year 2000 problems could adversely affect Defense’s ability to train civilian personnel, administer benefits, recruit staff, and handle management/employee disputes. However, Defense has not fully mitigated this risk. We compared Defense’s efforts to correct the Year 2000 problem to criteria detailed in our Year 2000 Assessment Guide. This guide advocates a structured approach to planning and managing an effective Year 2000 program through five phases: (1) raising awareness of the problem, (2) assessing the extent and severity of the problem and identifying and prioritizing remediation efforts, (3) renovating, retiring, or replacing systems, (4) validating or testing corrections, and (5) implementing corrected systems. We and OMB established a schedule for completing each of the five phases, including requiring agencies to complete the assessment phase by August 1997 and the renovation phase by September 1998. Our Assessment Guide also identifies other dimensions to solving the Year 2000 problem, such as identifying interfaces with outside organizations, specifying how data will be exchanged in the year 2000 and beyond, and developing contingency plans to ensure that core business functions can be performed even if systems fail. As further detailed in the following sections, while Defense is making good progress in renovating the legacy system and ensuring that the new system is compliant, it has not yet ensured that its external interfaces will be remediated, nor has it developed effective contingency plans. Defense has nearly completed renovation work on its legacy system, according to the Acquisition Program Manager, and release/deployment is planned for December 1998. In addition, in August 1998, Defense finalized a Year 2000 test plan for the legacy system. However, Defense does not yet have interface agreements that specify changes to date formats and how and when conflicts will be resolved with its data exchange partners. 
Because noncompliant interfacing partners can introduce Year 2000-related errors into compliant systems, our Assessment Guide recommends that agreements with interface partners be established in the assessment phase in order to allow enough time for resolving conflicts. Until these agreements are in place, Defense will not have assurance that partners are working to correct interfaces effectively or promptly. In addition, Defense has not developed adequate business continuity and contingency plans for the legacy system. To mitigate the risk that Year 2000-related problems will disrupt operations, our guide, entitled Year 2000 Business Continuity and Contingency Planning, recommends that agencies perform risk assessments and develop and test realistic contingency plans to ensure the continuity of critical operations and business processes. Business continuity and contingency plans are important because they identify the manual or other fallback procedures to be employed should systems miss their Year 2000 deadline or fail unexpectedly in operation. Business continuity and contingency plans also define the specific conditions that will cause their activation. In order for these plans to be effective, our guide recommends that, among other things, agencies analyze business process composition and priorities, dependencies, cycles, and service levels, and most important, the business process dependency on mission-critical information systems. The results of this analysis should be used to assess the cost and benefits of contingency alternatives and to identify and document contingency plans and implementation modes. These plans should define roles and responsibilities for contingency operations and provide a master schedule and milestones. 
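One way to picture the guide’s minimum elements is as a simple checklist structure; the field names below are illustrative, not taken verbatim from the guide.

```python
from dataclasses import dataclass, field

@dataclass
class ContingencyPlan:
    """Illustrative checklist of minimum contingency-plan elements."""
    business_process: str
    trigger_conditions: list = field(default_factory=list)   # conditions that activate the plan
    fallback_procedures: list = field(default_factory=list)  # manual or alternate-system steps
    roles: dict = field(default_factory=dict)                # who does what during contingency operations
    milestones: list = field(default_factory=list)           # master schedule for implementation
    risks_assessed: bool = False                             # external systems and infrastructure examined
    tested: bool = False

    def meets_minimum_criteria(self) -> bool:
        # A plan that merely names a fallback, with no roles, procedures,
        # schedule, risk assessment, or testing, fails this check.
        return all([self.trigger_conditions, self.fallback_procedures,
                    self.roles, self.milestones,
                    self.risks_assessed, self.tested])

bare_plan = ContingencyPlan("process critical personnel actions",
                            fallback_procedures=["use commercial package X"])  # hypothetical
print(bare_plan.meets_minimum_criteria())  # False
```

A plan consisting only of a named fallback system, like the one described in the next paragraph, would not pass such a check.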
Defense recently developed a contingency plan for the legacy system, but this plan is perfunctory and does not meet the minimum criteria defined in our Business Continuity and Contingency Planning guidance, which OMB has adopted as a standard for federal agencies. Specifically, the plan states only that if the legacy system fails, critical personnel actions will be prepared using one of three other commercial software packages. The plan does not provide a description of the resources, staff roles, procedures, and timetables needed for its implementation. And there is no evidence that Defense (1) assessed and documented risks posed by external systems and the public infrastructure, (2) defined the minimum acceptable level of outputs and services for each core business process, or (3) assessed the costs and benefits of contingency strategy alternatives. The steps detailed in our guide are integral to helping agencies manage the risk of potential Year 2000-induced disruptions to their operations. For example, the civilian personnel business area depends on information and data provided by other Defense and federal agencies whose systems can introduce Year 2000 problems into DCPDS. It also relies on services provided by the public infrastructure that are susceptible to Year 2000 problems that could disrupt personnel operations, including power, water, and voice and data telecommunications. Until business continuity and contingency plans are developed that focus on this chain of critical dependencies, Defense will not be able to ensure that it can maintain the basic functionality of its core civilian personnel operations.

Since the new system already has a four-digit year field, it does not require renovation. Defense has obtained certification of Year 2000 compliance on all applications in the new system and completed Year 2000 tests on the system. However, CPMS has not identified all system interfaces or developed agreements with its interface partners. 
In addition, while CPMS recently developed a contingency plan, this plan is cursory. It only states that if the modern system fails, Defense will revert to using the legacy system for critical personnel actions. It is not based on a business impact analysis, nor does it describe resources, staff roles, procedures, and timetables needed for its implementation. As stressed above, even if the modernized system is compliant, Defense’s civilian personnel management operations are at risk because of dependencies on external systems and the public infrastructure. Therefore, until Defense develops specific interface agreements and contingency plans that focus on critical dependencies, it will have no assurance that it can prevent Year 2000-related disruptions to critical personnel operations.

Because Defense did not consider alternatives, such as centralizing personnel functions, restructuring its regional and/or local offices to serve multiple agencies and services, or integrating payroll/personnel systems, its current regionalization approach may not be optimal. Defense lacked cost and performance data to analyze the options, and it faced resistance from Defense components. While it may have required more time to develop needed data and greater management commitment to changing Defense business practices, the potential for substantially greater savings and efficiencies should have persuaded Defense to perform a rigorous analysis of all alternatives and to select the one proven most cost-effective. Additionally, because Defense did not adequately estimate and evaluate costs, benefits, and returns, there is not adequate assurance that its decision to replace the legacy system with the Oracle COTS package is optimal. 
Furthermore, Defense does not know whether modifications to the Oracle product will satisfy its needs, how it will maintain the system, how it will protect sensitive data in the system, or how it will ensure the continuity of core civilian personnel operations in the event of Year 2000 failures. Despite this uncertainty, Defense reports having already spent about $300 million on developing the system and establishing the regional offices and plans to spend hundreds of millions of dollars more to operate and support DCPDS and the regions.

Before Defense starts to deploy the new system beyond test sites, we recommend that the Secretary of Defense rigorously evaluate all business and system alternatives to providing personnel services as envisioned by the Clinger-Cohen Act, and, using these data and the system test results, select the most cost-beneficial business and system alternative and develop and implement a transition plan for that alternative. Specifically, business alternatives considered should include (1) use of regions and local offices to serve specific agencies or services, (2) use of regions or local offices to serve multiple agencies and services, (3) centralizing all or parts of personnel management operations that currently operate at component headquarters and major commands, (4) integrating DOD’s civilian personnel and payroll management systems, (5) outsourcing civilian personnel computer operations, (6) outsourcing all civilian personnel management services, and (7) acquiring other commercially available products. In analyzing commercially available products, we recommend that Defense consider the costs, benefits, and returns on investment of all commercially available products that support personnel management. We also recommend that the analysis of commercially available products consider technical risks, including whether each available product can support Defense’s needs and whether each one can be modified in the future at a reasonable cost. 
In evaluating the range of business alternatives, consideration should be given to the substantial investment that has already been made in the current approach. Regardless of the business and system alternative selected, we recommend that Defense optimize it by collecting, analyzing, and using reliable cost and performance data and making improvements. We also recommend that, regardless of the chosen approach, Defense take the following actions to mitigate technical, security, and Year 2000 risks. To ensure that the system is adequately maintained and that modifications are carefully controlled, Defense should (1) develop agreements with system partners and interface partners to define responsibility for system operations, maintenance, and security, (2) establish a configuration control board composed of system users to assist in deciding which changes need to be made to the system, prioritizing change requests, and ensuring that changes are correctly made, and (3) assign clear responsibility for providing technical assistance to Defense components. To ensure that sensitive personnel data are adequately protected, Defense should (1) assess its risks and determine security needs, (2) define and implement appropriate policies and related controls, including standards for encrypting data and firewalls, (3) promote security awareness at all sites maintaining the system, and (4) continually monitor and evaluate policy and control effectiveness. 
To mitigate Year 2000 risks, Defense should (1) establish interface agreements that clearly specify date format changes, time frames for these changes, and processes for resolving conflicts, (2) refine business continuity and contingency plans to ensure that they consider risks posed by external systems and infrastructure; assess the costs and benefits of alternative contingency strategies; and describe resources, staff roles, procedures, and timetables needed for implementation of the plan, and (3) test contingency plans to ensure that they are capable of providing the desired level of support to the agency’s core business processes and can be implemented within a specified period of time.

The Acting Assistant Secretary for Force Management Policy provided written comments on a draft of this report, which are reprinted in appendix I. He concurred with all five of our recommendations and agreed to evaluate recommended alternatives as Defense proceeds with its regionalization and modernization efforts. In concurring with our recommendations, however, Defense questioned our use of the Clinger-Cohen Act of 1996 as criteria for evaluating civilian personnel system decisions since these decisions were made before the act took effect. We used the Clinger-Cohen Act to evaluate Defense’s decisions because the act’s requirements reflect basic and widely accepted principles of sound system acquisition management. Similar practices are also called for in OMB Circulars A-11 and A-130, the Chief Financial Officers Act of 1990, the Government Performance and Results Act of 1993, the Federal Acquisition Streamlining Act of 1994, and the Paperwork Reduction Act of 1995—all of which were applicable in some manner to Defense’s decisions in this effort. Moreover, Defense was required to follow such practices by its own system acquisition regulations and guidelines. 
Finally, during the course of our review, Defense officials responsible for DCPDS told us that they were attempting to follow Clinger-Cohen Act principles in developing the system. Appendix I provides our detailed responses to Defense’s views on our recommendations and findings.

We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services; Senate Committee on Governmental Affairs; Subcommittee on Defense, Senate Committee on Appropriations; House Committee on Armed Services; Subcommittee on Defense, House Committee on Appropriations; and Senate and House Committees on the Budget; the Secretary of Defense; the Senior Civilian Official of the Office of the Assistant Secretary of Defense for Command, Control, Communications and Intelligence; the Under Secretary of Defense (Comptroller); the Acting Assistant Secretary of Defense for Force Management Policy; and the Director, Office of Management and Budget. Copies will also be made available to others upon request. If you have any questions about this report, please call me or Carl Urie, Assistant Director, at (202) 512-6240. Other major contributors to this report are listed in appendix III.

The following are GAO’s comments on the Department of Defense’s letter dated January 11, 1999.

1. Although the Clinger-Cohen Act was not in existence when DOD made the initial decisions in developing the modern DCPDS, it has been in effect since 1996 and should have been applied to all decisions made subsequent to its enactment. Further, OMB Circulars A-11 and A-130 existed prior to the initial decisions related to DCPDS and included basic principles of sound system acquisition management. In addition, several acts that were in effect when the initial decisions were made contain requirements similar to those outlined in the Clinger-Cohen Act relating to improved information technology management in the federal government. 
For example, (1) the Government Performance and Results Act of 1993 (GPRA) requires federal agencies to set strategic goals, measure performance, and report on accomplishments, (2) the Federal Acquisition Streamlining Act of 1994 (FASA), Title V, requires agencies to define cost, schedule, and performance goals for federal acquisition programs (including information technology projects) and to monitor these projects to ensure that they remain within prescribed tolerances, (3) the Paperwork Reduction Act of 1995 (PRA) emphasizes achieving program benefits and meeting agency goals through the effective use of information technology, and (4) the Chief Financial Officers (CFO) Act of 1990 focuses on the need to improve financial management and reporting practices of the federal government, which is critical for knowing an information technology project’s actual costs and for computing accurate returns on investment. Finally, Defense’s own system acquisition regulations and guidelines, in existence at the time Defense made the initial decisions in developing the modern DCPDS, include requirements similar to those outlined in the Clinger-Cohen Act related to basic principles of sound system acquisition management.

2. Before embarking on an improvement approach for its civilian personnel mission area, Defense performed cost and performance analyses which indicated the Department’s civilian personnel servicing ratios could be improved significantly. However, because these analyses did not fully consider the costs and benefits of numerous alternative business and systems approaches for improving the servicing ratios, the Department may not have selected the most cost-effective improvement approach.

3. We revised the report to delete specific information on the scoring criteria used in the DCPDS procurement.

4. 
While Defense reports that it has already consolidated some civilian personnel functions at component headquarters and major commands and reduced staff by 23 percent, in June 1998 there were still 886 people performing civilian personnel management and oversight functions at component headquarters and major command levels at a cost of about $63 million a year. Given that the Civilian Personnel Management Service performs the same management and oversight functions as component headquarters and major commands, there are substantial opportunities for further consolidation and staff reduction.

5. The A-76 study includes some but not all promising alternatives. While it will evaluate outsourcing civilian pay operations, it will not consider outsourcing personnel operations or integrating personnel and payroll systems. Furthermore, while Defense considered the possibility of outsourcing personnel computer operations in 1994, it lacked the cost and performance data necessary to sufficiently analyze this approach.

6. While it is important for Defense components to develop comprehensive metrics to measure the timeliness and value of regional service center work, they must also standardize these metrics so that meaningful comparisons can be made across the Department. The components must also collect baseline data that define the current operations so that Defense can determine whether new systems and business strategies are achieving predicted cost and performance improvements.

7. If implemented effectively, the site-by-site risk assessments and other actions Defense is taking should help address the security concerns identified in this report. However, to maximize protection over DCPDS data, Defense still needs to establish departmentwide standards on encryption and firewalls.

8. Although CPMS has interface agreements with the owners of major external interfaces for the legacy DCPDS system, those agreements have not been adequately updated to include Year 2000 issues. 
Specifically, the agreements do not define agreed-upon date formats, nor do they describe how problems with data exchanges will be resolved. Further, as of the completion of our review, CPMS had not identified the system interfaces or developed agreements with its interface partners for the modern DCPDS.

9. Defense plans to complete interface agreements by April 1999 and contingency plans by May 1999 and to begin testing contingency plans by June 1999. However, the Office of Management and Budget and GAO’s Year 2000 guidance recommend that agencies develop interface agreements and realistic contingency plans during the assessment phase, i.e., by August 1997, in order to minimize the risk of Year 2000 problems.

To analyze how Defense determined the number and locations for civilian personnel regional service centers and why there is a wide disparity in the number of regional centers among the services, we interviewed Office of the Secretary of Defense, military service, and Defense agency officials and reviewed guidance mandating regionalization, the services’ and Defense agencies’ regionalization studies, and their rationale for determining the number and location of regions. Where appropriate, we interviewed officials from CPMS, the military services, and the Washington Headquarters Service to understand perspectives regarding regionalization plans and the status of regionalization actions. We visited five regional centers, toured the facilities, and interviewed numerous officials. These five centers were Ft. Riley, Kansas; Aberdeen Proving Ground, Maryland; Silverdale, Washington; Randolph AFB, Texas; and Washington, D.C.

To assess whether Defense is applying the Clinger-Cohen Act in overseeing, managing, and developing DCPDS, we compared Defense’s actions taken on DCPDS to the investment principles included in the act. 
We reviewed GAO, OMB, and Defense best practices guidance for implementing the Clinger-Cohen Act and reviewed other Defense policies and guidance for developing and implementing information systems. We analyzed selected major studies of information technology and personnel management matters in Defense, including studies by Coopers & Lybrand (a consulting organization) and the Defense Science Board, prior GAO studies of major defense information systems projects, and selected Defense Office of Inspector General reports. We interviewed appropriate Defense and OMB representatives familiar with personnel legislative requirements and officials responsible for the development and oversight of DCPDS, including officials from CPMS, the Major Automated Information System Review Council (MAISRC), the Under Secretary of Defense/Comptroller, the Comptroller’s Program Analysis and Evaluation (PA&E) unit, and service and agency staff responsible for regionalization and DCPDS program management.

To determine whether DCPDS duplicates the Employee Express System available through the Office of Personnel Management (OPM), we reviewed documentation Defense prepared justifying the need for DCPDS and Defense documentation reviewing the Employee Express System. We requested that OPM review and comment on Defense’s rationale for not using the Employee Express system; we requested that Defense respond to OPM’s comments; and we analyzed both Defense’s and OPM’s positions on this issue. In addition, we contacted representatives of six other federal organizations that were developing new civilian personnel systems and were not using the Employee Express system to determine their rationale. 
To determine whether (1) Defense’s civilian personnel management requirements are sufficiently different to require extensive modification of the commercial-off-the-shelf software (COTS) application which Defense selected as the foundation for developing DCPDS and (2) Defense leadership was aware of the extent and cost of modifications that would be needed, we interviewed the Functional and Acquisition Program managers and their staff as well as representatives of the Oracle Corporation to solicit information on the selection, acquisition, and modification of the Oracle COTS product.

To assess whether Defense identified and mitigated the risks associated with the major modifications, we interviewed CDA officials to determine Defense’s actions to date, including those planned, in process, and completed to address mitigating risks in overseeing, managing, and developing DCPDS. We reviewed pertinent regulations, studies, and documentation, including the technical risk analysis, configuration management plan, testing plans, and the Department’s Program Manager’s Guide to Software Acquisition Best Practices. As requested, we determined whether Defense used this guide in overseeing, managing, and developing DCPDS.

In assessing security risks, we reviewed Defense’s Deployment, Concept of Operations, Encryption, Security Support, and Contingency Plans. We reviewed Defense directives and regulations on computer security, including Regulation 5000.2-R, dated March 23, 1998, Directive 5200.28, dated March 21, 1998, and Military Standard 498, dated December 1994. In addition, we assessed the physical security threats at four local and four regional offices, through interviews and observations.

In assessing Year 2000 risks, we reviewed the Year 2000 plans for the legacy and modern systems and we compared these plans to our own Year 2000 Assessment Guide. 
We conducted our review from August 1997 through July 1998 in accordance with generally accepted government auditing standards.

George L. Jones, Evaluator-in-Charge
David R. Solenberger, Senior Evaluator
Denise M. Wempe, Senior Evaluator
Karl G. Neybert, Staff Evaluator

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) efforts to reduce the costs associated with civilian personnel management, focusing on: (1) how DOD determines the number and locations for civilian personnel regional service centers and why there is a wide disparity in the number of regional centers among the services; (2) whether DOD is applying the investment principles of the Clinger-Cohen Act in overseeing, managing, and developing the Defense Civilian Personnel Data System (DCPDS); (3) whether DCPDS duplicates the Office of Personnel Management's (OPM) Employee Express System; (4) whether DOD leadership is aware of the extent and cost of the needed modifications to the commercial-off-the-shelf (COTS) software applications; and (5) whether DOD identified and mitigated the risks associated with the major COTS modifications. GAO noted that: (1) DOD's current initiative can potentially improve civilian personnel operations and achieve cost savings; (2) however, because the Department has not examined other business process alternatives that could have potentially achieved even greater savings and process efficiencies, there is no assurance that this is the best alternative for civilian personnel operations; (3) before embarking on its costly initiative to improve personnel management, DOD examined two alternatives: (a) outsourcing personnel computer operations to the Department of Agriculture's Finance Center; and (b) regionalizing personnel centers; (4) DOD determined that it would take the National Finance Center about 6 years to prepare for transferring computer operations and that some new functionality built into its legacy system would be lost; (5) however, DOD did not examine several other potentially effective alternatives, including: (a) continuing to centralize all or parts of its personnel management operations to reduce duplicative layers of oversight at the components and ensure more consistent operations DOD-wide; 
(b) integrating its personnel and payroll management systems; (c) restructuring its regional offices to serve multiple components rather than perpetuating regional offices dedicated to only one component; (d) restructuring local personnel offices to serve multiple bases or installations (they now serve only one base or installation); and (e) outsourcing all civilian personnel operations to the private sector; (6) these alternatives are feasible and may have helped DOD to achieve even greater savings and efficiencies than the current approach; (7) in addition, the Defense Science Board determined that integrating payroll and personnel systems was a viable and cost beneficial option for military personnel; (8) the Civilian Personnel Management Service (CPMS) officials who were responsible for the personnel initiative said that they did not consider these business processing alternatives because: (a) CPMS did not have authority to require the military services and DOD agencies to adopt such approaches; (b) DOD did not allow sufficient time to rigorously examine alternatives; and (c) DOD lacked basic cost and performance data needed to study the alternatives; (9) after it decided on its approach, DOD did not follow a sound process for selecting regions; (10) DOD did not adequately consider a full range of technical options before deciding to replace its legacy system with the Oracle COTS product; and (11) after DOD acquired the Oracle system, it did not mitigate critical technical risks.
The CMIA is critically important to the federal government’s efforts to promote accountability in the use of federal grant funds. Currently administered by Treasury’s Financial Management Service (FMS), the CMIA is the cornerstone of cash management policy for federal grants to the states. Specifically, the CMIA requires the Secretary of the Treasury, along with the states, to establish equitable funds transfer procedures so that federal financial assistance is paid to states in a timely manner and funds are not withdrawn from Treasury earlier than they are needed by the states for grant program purposes. The act requires that states pay interest to the federal government if they draw down funds in advance of need and requires the federal government to pay interest to states if federal program agencies do not make program payments in a timely manner. According to Treasury regulations implementing the CMIA, funding techniques for federal financial assistance to the states should be efficient and minimize the exchange of interest between federal agencies and the states. Various funding techniques can be agreed to between Treasury and the states, including cash advance funding, whereby the federal program agency transfers the actual amount of federal funds to a state prior to the day the state actually pays the funds out of its own account. The limit on such cash advance funding is 3 business days prior to payout. Before the terrorist attacks of September 11, 2001, the Department of Justice (Justice) managed several grants designed to enhance the capability of state and local first responders to handle incidents involving nuclear, biological, and chemical terrorism. Since 1999, these programs have grown dramatically. In March 2003, responsibility for these grant programs shifted to DHS, and they continued to grow. 
Initially, DHS provided some of these grants directly to local government entities; however, the requirements were changed so that grants were awarded first to states and then passed through to local governments and other subgrantees. Despite increased funding, many local governments—cities in particular—complained that they were not receiving the funds that they expected and could not disburse them as fast as they wanted. In response to complaints about delays in the disbursement of first responder grants, on March 15, 2004, the Secretary of the Department of Homeland Security established the HSAC Task Force on State and Local Homeland Security Funding. The task force’s objective was to examine the homeland security grant funding process and provide recommendations to expedite the flow of homeland security funds to those responsible for preventing and responding to acts of terrorism. The task force recommended, among other things, that Congress exempt certain DHS homeland security grants for fiscal year 2005 from the CMIA in order to allow funds to be provided to state and municipal entities up to 120 days in advance of expenditure. The task force indicated that more flexibility was needed in providing grant funding to first responders because, in some instances, the 3-day time frame for receiving grant funds prior to making payments was insufficient to prevent municipal jurisdictions from having to make payments to vendors in advance of receiving the DHS grant funds. In other cases, the municipal jurisdictions required cash on hand in their municipal treasuries prior to commencing the procurement process. 
Subsequent to the task force’s recommendations, Congress exempted for fiscal year 2005 certain DHS first responder grant programs from the provision of the CMIA that limits the extent to which grantees can hold federal funds prior to payout by requiring federal agencies and states to minimize the time elapsing between transfer of funds from Treasury and payment by the states. In fiscal year 2006, this exemption was made permanent. Importantly, the CMIA exemption only pertains to the requirement to minimize the time elapsing between transfer of funds from Treasury and payments for program purposes. The CMIA exemption did not exempt certain first responder grant programs from the other provisions of the CMIA which address interest payments and accountability. To implement the CMIA exemption, DHS’s Program Guidelines and Application Kit for the Fiscal Year 2005 Homeland Security Grant Program (HSGP) and guidance for certain other homeland security first responder grants state that grantees and subgrantees will be permitted to draw down funds up to 120 days prior to expenditure or disbursement. For the majority of the grant programs, the guidance requires all federal funding to go to state grantees prior to being passed through to local government and other subgrantees, and requires both grantees and subgrantees to place funds received in an interest-bearing account. The guidance states that both grantees and subgrantees must pay interest on funding advances in accordance with federal regulations. In addition, according to the guidance, state grantees are subject to the interest requirements of the CMIA and its implementing regulations. The guidance states that interest under the CMIA will accrue from the time federal funds are credited to a state account until the time the state pays out the funds to a subgrantee or otherwise for program purposes. In January 2006, DHS’s Preparedness Directorate issued its Financial Management Guide. 
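The interest mechanics described in the guidance can be illustrated with a simple accrual sketch. The rate, dollar amount, and day-count convention below are assumptions for illustration only; actual CMIA interest calculations follow Treasury's implementing regulations and the funding technique agreed to with each state:

```python
def cmia_interest(principal, annual_rate, days_held):
    """Simple-interest accrual on advanced funds for each day they are
    held before payout (illustrative only; real CMIA calculations are
    governed by Treasury regulations and state-specific agreements)."""
    return principal * annual_rate * days_held / 365

# Hypothetical: $1,000,000 drawn down 120 days before payout,
# with an assumed 5 percent annual interest rate.
owed = cmia_interest(1_000_000, 0.05, 120)
assert round(owed, 2) == 16438.36
```

The sketch shows why the 120-day drawdown window matters: the longer grantees hold federal funds before payout, the larger the interest liability owed back to the federal government.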
The guide is intended to be used as a financial policy reference for all fiscal year 2006 and future first responder grants. Consistent with DHS’s fiscal year 2005 guidance for the HSGP and certain other first responder programs, the guide states that grant recipients may elect to draw down funds up to 120 days prior to expenditure or disbursement and that state grantees are subject to the interest requirements of the CMIA. The guide further states that all local units of government must account for interest earned on federal grant funds and remit such interest to appropriate federal agencies. To assess whether the CMIA provision that limits the extent to which grantees can hold federal funds before payout, prior to its exemption for certain DHS first responder grants in fiscal year 2005, had prevented first responders from receiving DHS grant funds when such funds were needed, we interviewed key officials from 13 SAAs. These SAAs involved states from most geographic areas of the country and, when taken together, were awarded about 40 percent of DHS’s first responder grants that were subject to the CMIA exemption in fiscal year 2005. In addition, we interviewed key officials and obtained and analyzed pertinent documents from nine national associations which represent state and local governmental entities including the National Governors Association and the U.S. Conference of Mayors. We also reviewed the key report issued by the HSAC Task Force, A Report from the Task Force on State and Local Homeland Security Funding, and reports issued by DHS’s Inspector General. To identify key fiscal and accountability implications associated with the CMIA exemption for certain DHS first responder grant programs, we reviewed the CMIA and Treasury’s implementing regulations, the CMIA exemption for certain first responder grants, DHS’s program guidance for those grants, and GAO’s prior report covering the implementation of the CMIA. 
In addition, we interviewed key officials and obtained and analyzed pertinent documents from DHS, Treasury, OMB, and Justice, all of which are responsible to varying degrees for administering or overseeing the implementation of the CMIA or various aspects of DHS’s first responder grant programs. We also reviewed OMB Circular No. A-133, Audits of States, Local Governments, and Non-Profit Organizations, and OMB’s 2005 and 2006 Compliance Supplements, which comprise the current key guidance used by auditors to conduct single audits covering federal grant programs. Our work was performed in accordance with generally accepted government auditing standards from August 2005 through July 2006.

In responding to a draft of our report, DHS stated that it would take the recommendations made in the draft report under advisement and would provide a detailed response to appropriate congressional committees and OMB approximately 60 days after release of the report, consistent with the reporting requirements of 31 U.S.C. § 720. Treasury stated that it agreed with our conclusion that the requirements of the CMIA did not prevent grantees from receiving grant funds when needed. Both Treasury and OMB staff provided technical comments that have been addressed as appropriate in this report.

We found no substantial evidence that the CMIA provision that limits the extent to which grantees can hold federal funds before payout, prior to its exemption for certain first responder grants in fiscal year 2005, had prevented first responders from receiving DHS grant funds when such funds were needed. Specifically, the majority of SAAs we contacted did not cite the CMIA as a contributing factor to first responder funding delays, and the National Governors Association, U.S. 
Conference of Mayors, and other associations we contacted did not provide information that demonstrated that the CMIA prevented local governments and other subgrantees from receiving first responder grant funding when they needed it. In addition, according to a report prepared by the HSAC Task Force, numerous factors, only one of which was related to the CMIA, have been responsible for first responder funding delays. Importantly, as we reported in February 2005, a major challenge in managing first responder grants is balancing two goals: minimizing the time it takes to distribute grant funds to state and local first responders, and ensuring appropriate planning and accountability for the effective use of grant funds. DHS’s approach to striking this balance has been evolving from experience, congressional action, and feedback from states and local governments. In March 2006, DHS reported that the CMIA exemption had been used only to a minimal extent and, according to a DHS official, DHS is meeting with SAAs and local governments to determine the impacts, if any, of the CMIA exemption on first responder grant funding. Of the 13 SAAs we contacted to determine whether the CMIA had prevented first responders from receiving DHS grant funds when such funds were needed, officials from six of these agencies told us that their state agency had experienced delays in getting first responder funds to subgrantees; however, most characterized the delays as not serious. Only one state agency official attributed the delays directly to the CMIA. According to that official, under the funding technique for the CMIA that was agreed to between the state and Treasury for fiscal year 2004, the state was to be reimbursed by the federal government for eligible grant-related expenditures. 
However, certain smaller subgrantees, such as volunteer fire departments, did not have the financial resources to purchase specialized equipment with their own funds and then wait for reimbursement from the state. The official stated that, contrary to the agreement with Treasury, the state began advancing federal funds to the subgrantees to enable them to purchase the equipment. Generally, however, the SAA officials were more apt to tie delays in operations related to first responders to factors other than the CMIA. For example, officials of six of the SAAs noted that delays in the use of the funds have been due directly to certain local governments not having the manpower to deal with the large influx of grant funding that was experienced in the wake of the terrorist attacks. In addition, officials of six of the SAAs stated that state and local requirements related to purchase authorizations caused delays in getting goods and services delivered to first responders in a timely manner, and officials of six of the SAAs cited vendor problems as causing such delays. None of the officials from the nine national associations representing state and local governments we contacted provided information that demonstrated that the CMIA prevented first responders from receiving DHS grant funds when such funds were needed. For example, an official from the National Governors Association stated that the association did not take a position on whether the CMIA impacted funding for first responders. Rather, he stated that funding delays are often caused by local procurement procedures and acquisition approval requirements of local government subgrantees. The official cited one case where a local government could not spend first responder funds for a major purchase until the city council voted and approved the purchase. He emphasized that such local approval requirements and processes can take several months. 
The official also stated that funding delays have resulted from local government subgrantees being unaware of DHS’s requirement that all equipment be included on DHS’s approved equipment listings prior to acquisition. In addition, according to an official from the U.S. Conference of Mayors, which was a leading proponent of the CMIA exemption for first responder grants, delays in first responder grant funding have resulted primarily from the many, sometimes conflicting, state and local requirements that local government subgrantees have to meet to receive grant funds. The official stated that the conference supported an exemption from the CMIA for first responder grants and that this support was driven primarily by an expectation that relaxing the requirements for funds transfers between the federal government and the states would lead to overall improvements in addressing local first responder needs. However, the official said that the conference does not have evidence that the requirements of the CMIA have created specific funding delays for first responders, or that the CMIA exemption has improved grant funding for first responders. In June 2004, the HSAC Task Force issued its report on state and local homeland security funding. According to the report, there is no single issue or level of government that has been responsible for delays in first responder funding. The report stated that the reimbursement requirement of the CMIA is problematic for many, particularly cash-strapped municipalities; however, the report does not address how the CMIA exemption will mitigate such problems at this level as the CMIA applies only to funds transfers to the states. Moreover, the report discusses numerous factors other than the CMIA that contribute significantly to funding delays. 
Specifically, according to the report, the need for state, county, municipal, and tribal entities to rapidly procure and deploy homeland security-related equipment can conflict with state and municipal buying regulations that encourage a deliberate process of acquisition of budgeted necessities at the lowest possible price. Furthermore, many state and local governments lack the purchasing power to obtain the goods and services in a timely fashion. In addition, the report stated that the lack of national standards guiding the distribution, tracking, and oversight of homeland security-related grant funds contributed to delays in disbursement. The report also emphasized that state and local governments are often overwhelmed and understaffed to deal with the complex grant system and have not put the necessary infrastructure in place to deal with the increased workload associated with first responder grant funding. Finally, the report cited unavoidable equipment backlogs and vendor delays as causing delays in first responder grant funding. In February 2005, we reported that DHS’s approach to striking a balance between, on the one hand, minimizing the time it takes to distribute grant funds to state and local first responders and, on the other hand, ensuring appropriate planning and accountability for the effective use of grant funds has been evolving from experience, congressional action, and feedback from states and local governments. We emphasized that, as DHS continues to administer its first responder grant programs, it will be important for DHS to listen and respond fully to the concerns of states, local governments, and other interested parties to ensure that there is adequate collaboration and guidance for moving forward. In March 2006, DHS reported that grantees and subgrantees have used the CMIA exemption and DHS’s 120-day cash advance funding provision only to a minimal extent. 
According to a DHS official, DHS’s new OGO, which began operations in October 2005, is in the process of meeting with SAAs and local governments to discuss the CMIA exemption and cash advance funding. OGO has conducted several regional financial management training conferences with SAAs and local representatives and has attended other similar forums that bring these same stakeholders together. In addition, OGO’s Monitoring Program Plan for fiscal year 2006 includes at least 20 states and territories, and OGO plans to include the remaining states and territories in the near future. According to the DHS official, through its discussions and monitoring efforts, OGO intends to determine whether the CMIA exemption actually poses a problem or conversely creates an opportunity for first responders in their ability to obtain and use grant funds when needed. In addition, OGO is seeking to identify the significant issues behind the drawdown and disbursement, or lack thereof, of DHS grant funds. These issues may involve legislative, procurement, programmatic, timeliness, and jurisdictional concerns. Finally, OGO is attempting to assess the impact the CMIA exemption could have on DHS if states were to use it extensively. According to the official, if grantees and subgrantees began using the CMIA exemption and DHS’s 120-day cash advance funding provision extensively, it would present oversight difficulties for DHS. OGO’s concern about the potential use of the CMIA exemption and DHS’s 120-day cash advance funding provision, and the oversight difficulties extensive use of these provisions could entail, is warranted. 
Specifically, the large number of state grantees and local government and other subgrantees that are eligible for cash advance funding resulting from the CMIA exemption and DHS’s 120-day cash advance funding provision, combined with the differing interest requirements for states, local governments, and nonprofit organizations, could create potential oversight challenges for DHS. Currently, DHS does not have policies and procedures to meet the oversight challenges of tracking cash advance funding and associated interest liabilities for first responder grants. Moreover, Treasury, in its administration of the CMIA, does not receive information pertaining to specific advances for such grants. While state single audits can be an important oversight tool for cash advance funding, they are not designed to replace program management’s oversight responsibilities. Further, those audits may not cover all first responder grants because of the grants’ relatively small dollar amounts, and single audit guidance does not include all grants for which DHS’s 120-day cash advance funding applies. In addition, it is important to emphasize that cash advance funding, which is available on a case-by-case basis for first responder grants independent of the CMIA exemption and DHS’s 120-day cash advance funding provision, would allow DHS to focus its oversight efforts on specific grantees and subgrantees that can demonstrate a need for such funding. Regardless of whether cash advance funding for first responder grants is made available under the CMIA exemption and DHS’s 120-day cash advance funding provision or on a case-by-case basis, it is critical for DHS to provide proper oversight of cash advance funding to help ensure that associated interest liabilities due to the federal government are accurately recorded by grantees and subgrantees and promptly paid. 
DHS is faced with potential oversight challenges regarding cash advance funding for homeland security first responder grants resulting from the large number of state grantees and local government and other subgrantees and the fact that interest liabilities and payment responsibilities vary for states, local governments, and nonprofit organizations. Specifically, according to DHS, for fiscal years 2005 and 2006, the initial years for which the CMIA exemption and DHS’s 120-day cash advance funding provision have been in effect, DHS has awarded in total about $5.5 billion of first responder grants to the 50 states, the District of Columbia, and 5 U.S. territories. Further, DHS required a minimum of 80 percent of certain grants to be passed through by the states to numerous city, county, local government, and other subgrantees. For example, for fiscal year 2005, at least 80 percent of the funding for UASI grants was allocated to 50 urban areas, and 124 distinct jurisdictions were to receive at least 80 percent of the funding for MMRS grants. According to DHS’s guidance, 120-day cash advance funding for homeland security first responder grants was available to all eligible state grantees and local government and other subgrantees. Further, interest liabilities associated with cash advance funding depend upon the size of the grant as well as whether the recipient is a state, local government, or nonprofit organization. Specifically, state interest liabilities and payment responsibilities are governed by Treasury’s implementing regulations for the CMIA. Under these regulations, interest liabilities for relatively large grants that meet the requirements for being classified as major programs are typically settled as part of Treasury’s annual interest exchange with the states and U.S. territories using the interest rate set forth in the regulations. 
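The interest settlement described above amounts, in essence, to a simple accrual computation on funds held in advance of disbursement. The following is a minimal illustrative sketch, not an official method: the 5 percent annual rate, the 365-day count, and the function name are assumptions for illustration only, as the actual rates and calculation methods are set forth in Treasury's CMIA implementing regulations.

```python
# Illustrative daily-accrual interest calculation for a cash advance held
# prior to disbursement. The rate and 365-day count are assumptions for
# illustration only; actual interest liabilities are computed under
# Treasury's CMIA implementing regulations.

def advance_interest(advance: float, annual_rate: float, days_held: int) -> float:
    """Interest accrued on `advance` dollars held for `days_held` days."""
    return advance * annual_rate * days_held / 365

# Example: a $500,000 advance held the full 120 days at an assumed 5% rate.
print(round(advance_interest(500_000, 0.05, 120), 2))  # → 8219.18
```

Even under this simplified model, an advance held for the full 120-day window accrues a nontrivial liability, which illustrates why accurate recording and prompt payment of interest matter for oversight.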
The interest liabilities and payment responsibilities of local government and nonprofit organizations are governed by regulations covering these entities. In general, local government and nonprofit entities are required to make periodic interest payments to HHS’ Division of Payment Management Services. According to DHS officials, policies and procedures do not exist to track and report on specific cases of cash advance funding to state grantees including associated interest liabilities. Moreover, the officials stated that DHS would not be able to readily determine the extent to which state grantees advance funds to local government and other subgrantees and the interest liabilities that should accrue to the subgrantees as a result of such advances. According to DHS’s Financial Management Guide, the state grantee is responsible for all aspects of preparedness grant funding, including cash management, accounting, and financial recordkeeping by the subgrantee. DHS officials emphasized that DHS relies on the states for management and oversight of grant funds, recognizing that the states rely, in part, on the single audits of grantees and subgrantees to help ensure proper accountability over cash advance funding including associated interest liabilities. Treasury’s FMS manages the CMIA program; however, its roles and responsibilities in this capacity do not include obtaining information regarding specific funding advances for homeland security first responder grants made to states or the related state interest liabilities. Under Treasury’s implementing regulations for the CMIA, states and FMS must enter into Treasury-State Agreements (TSA) that outline, by major program, the funding technique, including cash advance funding if applicable, the states will use to draw down funds from the federal government. Each year, the states and U.S. 
territories submit reports to FMS indicating the cumulative interest liabilities calculated for major grant programs covered under their respective TSAs. Based on input from the federal agencies and the states and territories, FMS makes a final determination on each of the state and territory interest liability claims and then calculates net interest liabilities using the interest rate defined in Treasury’s implementing regulations and conducts the annual interest exchange with the states and territories. According to Treasury officials, the vast majority of homeland security first responder grants were not included in the TSAs for fiscal years 2005 and 2006. Generally, to be included in the TSA, a grant program should be considered a major program by meeting the dollar thresholds which are set forth in Treasury regulations. Importantly, for grants not included in the TSAs, FMS does not receive any information about the grant, including whether states received cash advance funding for the grant or whether states incurred any associated interest liabilities. Moreover, according to Treasury officials, FMS has no oversight responsibilities of cash advances and associated interest liabilities involving local government and other subgrantees, regardless of whether the specific grants are major or nonmajor. The officials emphasized that FMS relies, primarily, on state single audits to provide oversight for CMIA-related activities, including interest liabilities associated with cash advance funding. In our January 1996 report on the implementation of the CMIA, we concluded that FMS’s plans to emphasize the use of results of single audits as a means of overseeing state activities and enforcing the CMIA requirements should improve the act’s effectiveness and help alleviate any concerns about administrative burden. 
Similarly, single audits, if performed adequately, can be a tool to enhance DHS’s oversight of first responder grant funding including cash advance funding and associated interest liabilities. However, such audits are not designed to replace program management’s oversight responsibilities and may not cover all first responder grants due to the grants’ relatively small dollar amounts. For single audits, auditors use OMB’s Circular No. A-133 Compliance Supplement, which provides an invaluable tool to both federal agencies and the auditors in establishing the important provisions of federal grant programs. The supplement enables federal agencies to effectively communicate items that they believe are important to understanding the legislative intent, as well as promoting successful program management. As such, the supplement requires constant review and update. DHS is responsible for working with OMB to ensure that audit guidance contained in the supplement that is applicable to its programs is complete and updated. For fiscal year 2005, the supplement included guidance covering DHS’s 120-day cash advance funding provision; however, the only programs cited were SHSP and LETPP, even though the CMIA exemption and the 120-day cash advance funding provision applied to numerous other homeland security first responder grants. OMB’s most recent compliance supplement, dated March 2006, expanded the guidance for the 120-day cash advance funding provision to include HSGP grants awarded for fiscal years 2005 and 2006. However, the supplement still does not include all of the programs for which the CMIA exemption and the 120-day cash advance funding provision apply. Specifically, the supplement does not include, among others, the Port Security Program, the Rail and Transit Security Program, the Intercity Bus Security Program, or the Trucking Security Program. 
According to an OMB representative, certain first responder grant programs were not included in the compliance supplements because they were not, at the time, considered major programs. However, DHS officials stated that DHS recognizes the importance of alerting auditors to the CMIA exemption and the 120-day cash advance funding provision for all of its first responder grants. As such, these officials stated that DHS intends to notify OMB that the 120-day cash advance funding provision used to implement the CMIA exemption applies to all grant programs administered by DHS’s Office of Grants and Training so that such information can be included in OMB’s 2007 Compliance Supplement. It is important to note that even with comprehensive guidance for auditors, single audits are at best only a tool for program management oversight of grant funding. Such audits are not intended to replace program management’s overall responsibility for establishing and maintaining internal control to achieve the objectives of effective and efficient grant operations, reliable grant reporting, and compliance with applicable laws and regulations. Further, single audits may not always cover all homeland security first responder grants received by the audited entity, as only the larger and inherently riskier programs are typically subject to review as part of the overall audit. Treasury’s regulations implementing the CMIA are intended to provide Treasury and states flexibility and latitude in funding grant programs. Specifically, according to Treasury, the CMIA requires states to time their drawdown of federal funds in a way that minimizes the time between receipt of the funds and payments for federal program purposes. For cash advance funding, this is defined by regulation as not more than 3 business days prior to the date of disbursement of the funds. 
However, according to Treasury officials, if it can be demonstrated that there is a program need for funds more than 3 days, or even 120 days, in advance of payment, a funding arrangement that allows for such cash advance funding would not be inconsistent with the CMIA and its implementing regulations. In other words, the CMIA does not prohibit such flexibility from being provided on a program-by-program, or case-by-case, basis. Moreover, cash advance funding arrangements made by a state can be extended to the state’s subgrantees. Specifically, under the Uniform Administrative Requirements for Grants and Cooperative Agreements to State and Local Governments, grantees must monitor cash drawdowns by their subgrantees to assure that they conform substantially to the same standards of timing and amount as apply to advances to the grantees. As such, cash advance funding arrangements made by the states for specific programs that have a demonstrated need for cash advance funding in excess of the 3-day rule can apply to local government subgrantees on an as-needed, or case-by-case, basis as determined by the state. Therefore, under Treasury regulations implementing the CMIA and other applicable regulations, cash advance funding for homeland security first responder grants can be allowed on a case-by-case basis independent of the CMIA exemption and DHS’s 120-day cash advance funding provision. We found no substantial evidence that the CMIA funds transfer requirements, prior to the exemption for certain first responder grants in fiscal year 2005, prevented first responders from receiving DHS grant funds when such funds were needed. However, DHS’s current efforts to monitor state grantees should help to identify problems, if any, associated with the CMIA and the CMIA exemption, as well as other issues that impact grant administration and first responders’ ability to receive and use DHS grant funds when needed. 
Going forward, these efforts should also enable DHS to determine the extent to which cash advance funding for first responder grants will likely be needed. This is important because DHS lacks the policies and procedures necessary to provide adequate oversight of cash advance funding, regardless of whether the cash advance funding is made widely available under the CMIA exemption and DHS’s corresponding 120-day cash advance funding provision, or on a case-by-case basis as allowed under Treasury regulations implementing the CMIA. Such oversight is critical to ensure that interest due to the federal government associated with cash advance funding is accurately recorded and promptly paid. We make seven recommendations to improve oversight of cash advance funding and associated interest liabilities for homeland security first responder grants. Specifically, we recommend that the Secretary of the Department of Homeland Security direct the Executive Director of the Office of Grants and Training to complete ongoing monitoring efforts involving state grantees that receive DHS first responder grant funding and use information obtained from such monitoring to identify the significant issues that have resulted in delays in the drawdown and disbursement of DHS grant funds; determine the impact of the CMIA exemption on first responders in their ability to obtain and use grant funds to meet program needs; assess the impact the CMIA exemption and DHS’s 120-day cash advance funding provision could have on DHS’s ability to provide adequate oversight if state grantees and local government subgrantees were to use them extensively; determine whether case-by-case cash advance funding provides a reasonable alternative to the CMIA exemption and DHS’s 120-day cash advance funding provision; and based on the results of the monitoring efforts, take appropriate actions, which could include making either legislative or operational recommendations, to improve first responders’ ability to 
receive and use DHS grant funds when needed and DHS’s oversight of such funds. In addition, we recommend that the Secretary of the Department of Homeland Security direct the Executive Director of the Office of Grants and Training to develop policies and procedures to handle requests for cash advance funding, including the ability for DHS to track specific cases of cash advance funding to state grantees and the related interest liabilities; and develop policies and procedures to work with the SAA for any state that requests and receives cash advance funding to ensure that adequate policies and procedures are in place at the state grantee level to provide proper oversight of advances made to subgrantees, including the accurate recording of interest accruals on the advances and prompt payment of such interest to the federal government. We provided a draft of this report to DHS, Treasury, and OMB for comment. DHS stated that it would take our recommendations under advisement. DHS also noted that it will provide a detailed response to appropriate congressional committees and OMB in accordance with applicable reporting requirements. Treasury provided technical comments that have been addressed as appropriate in this report. In providing such comments, Treasury stated that it agreed with our conclusion that the requirements of the CMIA did not prevent grantees from receiving grant funds when needed and noted that it believes the CMIA statute and regulations provide inherent flexibility to ensure that the program purposes are served while minimizing the time between the transfer of federal funds and the disbursement of funds by the state for federal grant program purposes. In addition, OMB staff provided a technical comment that has been addressed as appropriate in this report. 
We are sending copies of this report to other interested congressional committees, the Secretary of the Department of Homeland Security, the Secretary of the Department of the Treasury, the Director of the Office of Management and Budget, and the Attorney General. Copies will be made available to others upon request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact Stanley J. Czerwinski at (202) 512-6806 or [email protected], or Gary T. Engel at (202) 512-3406 or [email protected], if you have any questions. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix I. Faisal Amin, Jeffrey W. Dawson, Carlos E. Diz, Richard H. Donaldson, Ernie Hazera, Kenneth R. Rupar, and Linda K. Sanders made key contributions to this report.
A key provision of the Cash Management Improvement Act (CMIA) of 1990 (P.L. 101-453), as amended, requires the federal government and the states to minimize the time between transfer of federal funds and payments made by states for federal grant program purposes. Concerns were expressed by representatives of local government subgrantees that more flexibility was needed in the receipt of federal funding for first responders. Congress exempted certain first responder grants from this CMIA provision in the Department of Homeland Security's (DHS) fiscal years 2005 and 2006 appropriations acts. Under the exemption, grantees can receive cash advance funding and hold such funds for extended periods of time prior to payment. GAO was asked to (1) assess whether this CMIA provision, prior to its exemption in fiscal year 2005, had prevented DHS grant recipients from receiving first responder grant funds when such funds were needed; and (2) identify any key fiscal and accountability implications of the exemption. GAO found no substantial evidence that the CMIA provision that limits the extent to which grantees can hold federal funds before making program payments, prior to its exemption for certain first responder grants in fiscal year 2005, prevented first responders from receiving DHS grant funds when such funds were needed. The vast majority of the officials of State Administrative Agencies (SAA) and national associations contacted neither cited the CMIA as a contributing factor to funding delays nor provided information that demonstrated that the CMIA prevented state grantees or local government and other subgrantees from receiving first responder grant funding when such funding was needed. 
Rather, the officials generally attributed delays in first responder operations to factors other than the CMIA, such as vendor delays in delivering goods and services and problems related to a lack of human resources to deal with the large influx of grant awards after the September 11, 2001, attacks. The information GAO obtained from these officials was consistent with the findings of DHS's Homeland Security Advisory Council's Task Force on State and Local Homeland Security Funding, which found that numerous factors other than the CMIA contributed to funding delays for first responders. According to DHS, as of March 2006, state grantees and local government subgrantees had used the CMIA exemption and DHS's corresponding 120-day cash advance funding provision, which DHS established to implement the CMIA exemption, only to a minimal extent. DHS's Office of Grant Operations is working with SAAs and local government entities to determine the extent to which the CMIA exemption may be used and the impact extensive use could have on DHS. According to a DHS official, extensive use of the CMIA exemption and DHS's 120-day cash advance funding provision could create management oversight difficulties for DHS. Concerns about oversight difficulties are warranted, as DHS currently lacks the policies and procedures to track and report on specific cases of cash advance funding. Such advances are not subject to Treasury's oversight through its administration of the CMIA program. While states' single audits are a tool for oversight, such audits are not designed to replace program management's oversight responsibilities, and GAO found that they may not cover all first responder grants because of the relatively small size of the grants. Importantly, case-by-case cash advance funding can be allowed by Treasury regulations implementing the CMIA and other applicable regulations. 
Such funding could enable DHS to focus its oversight efforts on grantees and subgrantees that have a demonstrated need for such funding. However, regardless of whether cash advance funding is available under the CMIA exemption and DHS's corresponding 120-day cash advance funding provision, or on a case-by-case basis, proper oversight is critical to ensure that interest due the federal government resulting from cash advance funding is accurately recorded and promptly paid.
In March 2012, OMB launched the PortfolioStat initiative, which required agencies to conduct an annual review of their commodity IT portfolio to, among other things, achieve savings by identifying opportunities to consolidate investments or move to shared services. For PortfolioStat, OMB defined three broad categories of commodity IT: enterprise IT systems, which include e-mail, identity and access management, IT security, web infrastructure, and collaboration tools; business systems, which include finance, human resources, and other administrative functions; and IT infrastructure, which includes data centers, networks, desktop computers, and mobile devices. Of those categories, the first two include software applications, which are software components and supporting software hosted on an operating system that create, use, modify, share, or store data in order to enable a business or mission function to be performed. This includes custom, commercial off-the-shelf, government off-the-shelf, or open-source software. The memorandum establishing the PortfolioStat initiative also required agencies to develop a commodity IT baseline including the number, types, and costs of investments for all commodity IT categories. In a subsequent memorandum, OMB advocated the use of application rationalization to inform data center optimization efforts. Application rationalization is the process of streamlining the portfolio to improve efficiency, reduce complexity and redundancy, and lower the cost of ownership. It can be done by retiring aging and low-value applications, modernizing aging and high-value applications, eliminating redundant applications, standardizing on a common technology platform and version (as is the case for moving to shared services), or consolidating applications. 
OMB stated in its memorandum that application rationalization would be a focus of PortfolioStat sessions and required agencies to describe their approach to maturing the IT portfolio, including rationalizing applications, in the information resource management plans and enterprise roadmaps that are required to be updated annually. In December 2014, the law commonly referred to as the Federal Information Technology Acquisition Reform Act (FITARA) was enacted and required covered executive branch agencies (except for DOD) to ensure that Chief Information Officers (CIO) have a significant role in the decision-making process for IT budgeting, as well as the management, governance, and oversight processes related to IT. The act also required that CIOs (in each covered agency except DOD) review and approve (1) all contracts for IT services prior to their execution and (2) the appointment of any other employee with the title of CIO, or who functions in the capacity of a CIO, for any component organization within the agency. OMB issued guidance in June 2015 that reinforces the importance of agency CIOs and describes how agencies are to implement the law. In that same memorandum, OMB changed PortfolioStat from an annual review session to quarterly reviews that include a discussion of portfolio optimization efforts and a focus on commodity IT. Specifically, the memorandum stated that agencies are to discuss how they use category management to consolidate commodity IT assets; eliminate duplication between assets; and improve procurement and management of hardware, software, network, and telecom services during the sessions. Furthermore, agencies are to share lessons learned related to commodity IT procurement policies and efforts to establish enterprise-wide inventories of related information.
The memorandum also specified key responsibilities for CIOs—including having increased visibility into all IT resources—and required agencies to develop plans to implement these responsibilities by December 2015. Further, during the course of our review, in January 2016, OMB updated guidance to agencies requiring that they provide information regarding their IT asset inventories when making integrated data collection submissions. The guidance required agencies to provide a preliminary inventory to OMB by the end of February 2016 and a complete IT asset inventory, including information on systems, sub-systems, and applications, by the end of May 2016. Finally, federal law and guidance specify requirements for protecting federal information and systems. Specifically, the Federal Information Security Management Act (FISMA) of 2002, among other things, requires agencies to maintain and update an inventory of major information systems at least annually, and the National Institute of Standards and Technology specifies that this should include an accurate inventory of software components, including the software applications that are the subject of our review. OMB plays a key role in monitoring and overseeing agencies' security activities and their FISMA implementation. This includes tracking how well agencies are managing their inventories of hardware and software assets and protecting them. In November 2013, we reported that agency commodity IT baselines were not all complete and recommended that 12 agencies complete their commodity IT baselines. As of March 2016, 6 of the 12 agencies—the Departments of Agriculture, Commerce, Housing and Urban Development, and Labor; the Social Security Administration; and the U.S. Agency for International Development—reported that they had completed their commodity IT baselines. The remaining 6 agencies reported making progress toward completion.
In May 2014, in a review examining federal agencies' management of software licenses (which are types of enterprise IT applications), we determined, among other things, that only 2 of the 24 CFO Act agencies—the Department of Housing and Urban Development and the National Science Foundation—had comprehensive software license inventories. Twenty had partially complete inventories and two did not have any inventory. We recommended that agencies complete their inventories. We also recommended that OMB issue a directive to help guide agencies in managing licenses and that the 24 agencies improve their policies and practices for managing licenses. In June 2016, OMB issued a memorandum that is intended to improve agencies' acquisition and management of enterprise software, consistent with our May 2014 recommendation. The memorandum contains elements related to having a comprehensive policy, such as developing and implementing a plan for centralizing the management of software licenses. We identified four practices to determine whether agencies had a complete software application inventory. To do so, we primarily relied on best practices used in our recent report on federal software licenses, which determined, among other things, whether agencies had a comprehensive software license inventory, and our guide for assessing the reliability of computer-processed data. We determined that, to be considered complete, agencies' inventories should: include business systems and enterprise IT systems, as defined by OMB; include these systems from all organizational components; specify basic attributes, namely application name, description, owner, and function supported; and be regularly updated, with quality controls in place to ensure the reliability of the information collected. Most of the agencies fully met at least three of the four practices.
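The four practices above lend themselves to automated checks. The following is a minimal sketch of such an assessment, assuming a simple record layout (name, description, owner, function, component, last_updated) and an illustrative one-year freshness threshold; the field names, thresholds, and rating rules are our own assumptions, not OMB or GAO definitions.

```python
# Hypothetical assessment of a software application inventory against the
# four completeness practices. All field names and thresholds are assumed
# for illustration; they are not an OMB-defined schema.
from datetime import date, timedelta

REQUIRED_ATTRIBUTES = ("name", "description", "owner", "function")

def assess_inventory(records, all_components, today=None):
    """Rate each of the four practices 'fully met', 'partially met',
    or 'not met' for a list of application records (dicts)."""
    today = today or date.today()
    ratings = {}

    # Practice 1: inventory includes business and enterprise IT systems.
    types_present = {r.get("type") for r in records}
    required_types = {"business", "enterprise IT"}
    covered = required_types & types_present
    ratings["system types"] = (
        "fully met" if covered == required_types
        else "partially met" if covered else "not met")

    # Practice 2: inventory includes systems from all organizational components.
    components_present = {r.get("component") for r in records}
    missing = set(all_components) - components_present
    ratings["all components"] = (
        "fully met" if not missing
        else "partially met" if components_present & set(all_components)
        else "not met")

    # Practice 3: records specify name, description, owner, and function.
    complete = [r for r in records
                if all(r.get(a) for a in REQUIRED_ATTRIBUTES)]
    ratings["basic attributes"] = (
        "fully met" if len(complete) == len(records)
        else "partially met" if complete else "not met")

    # Practice 4: inventory is regularly updated (here: within the last year).
    fresh = [r for r in records
             if today - r.get("last_updated", date.min) <= timedelta(days=365)]
    ratings["regularly updated"] = (
        "fully met" if len(fresh) == len(records)
        else "partially met" if fresh else "not met")
    return ratings
```

An agency could run checks like these as part of the quality controls called for by the fourth practice, flagging stale or attribute-incomplete records for follow-up.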
Specifically, 4 agencies fully met all four practices; 9 agencies fully met three practices, and 8 of these partially met the fourth; 6 agencies fully met two practices, and 5 of these partially met the other two; 2 agencies fully met one practice and partially met the other three; and 3 agencies did not fully meet any practice. The following are examples of how we assessed agencies against our practices. See appendix II for a detailed assessment of all the agencies. The Environmental Protection Agency fully met three practices and partially met one. The agency fully met the first practice because its inventory includes enterprise IT and business systems, with the exception of very small systems. In addition, it included applications from all offices and regions in the organization. The agency partially met the practice for including application attributes in the inventory because, although it identifies the application name, description, component managing the application, and the business function associated with its applications, it does not identify the business function for every application. Officials stated that they are working to populate this information for all applications. Lastly, the agency fully met the fourth practice of regularly updating the inventory because it has processes to update its inventory through the agency's software life cycle management procedure and provided evidence of the annual data call issued by the CIO to ensure that the inventory is current. The U.S. Agency for International Development fully met two practices and partially met two. Specifically, the agency's inventory includes business and enterprise IT systems, and the inventory includes basic application attributes. However, the agency's inventory does not include systems from all organizational components because, officials stated, coordination and communication in the geographically widespread agency is difficult.
In addition, the agency has processes for updating its inventory; however, it relies on manual processes to maintain it. The Department of Transportation partially met all four practices. While the department's inventory for the common operating environment includes all business and enterprise IT systems and its inventory of applications includes business systems, the inventory of applications does not include all enterprise IT systems. Furthermore, neither of its inventories includes applications used by all of its components. Specifically, the inventory of applications does not include applications used by the Federal Highway, Federal Railroad, and Federal Transit Administrations, among others, and the inventory for its common operating environment does not include applications used by the Federal Aviation Administration. The department also partially met the practice of including basic application attributes because, although the department's inventory of applications includes these attributes, its common operating environment inventory does not provide the business function that the applications support. Further, while the Department of Transportation has a process for its partners to provide information on its individual inventories in order to update the inventory of applications, it does not have processes in place to ensure the reliability and accuracy of the reported information, and thus partially met this practice. Regarding the four practices, the majority of the agencies fully met the practices of including business systems and enterprise IT systems; including these systems from all organizational components; and specifying the application name, description, owner, and business function supported. Only five agencies fully met the practice of regularly updating the inventory and implementing quality controls for ensuring the reliability of the inventory data because they provided evidence of performing both of these activities.
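Per-agency practice ratings like those above can be rolled up into the kind of summary that Table 3 presents. The sketch below is illustrative only; the rating labels and sample data are our own, not GAO's actual assessment results.

```python
# Hypothetical rollup of per-agency practice ratings into summary counts.
# Rating labels and the data structure are assumptions for illustration.
from collections import Counter

def summarize_by_practice(ratings_by_agency):
    """Map each practice to a Counter of how many agencies fully met,
    partially met, or did not meet it."""
    summary = {}
    for ratings in ratings_by_agency.values():
        for practice, rating in ratings.items():
            summary.setdefault(practice, Counter())[rating] += 1
    return summary

def fully_met_distribution(ratings_by_agency):
    """Count agencies by how many practices each one fully met."""
    return Counter(
        sum(1 for r in ratings.values() if r == "fully met")
        for ratings in ratings_by_agency.values())
```

Feeding in ratings for all 24 agencies would reproduce both the per-practice counts and the distribution of agencies by number of practices fully met.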
Table 3 shows the number of agencies that fully met, partially met, and did not meet the practices. OMB's requirement for agencies to complete an IT asset inventory by the end of May 2016 greatly contributed to most of the agencies including business systems and enterprise IT systems for all of their organizational components and specifying key attributes for them. Those agencies that did not fully address these practices provided various reasons for not doing so. For example, one agency stated that it has not made its software application inventory a priority because it has been focusing on major and high-risk investments, while delegating applications to the component level. Others noted that the lack of automated processes makes collecting complete inventory information difficult. Further, others noted that it is challenging to capture applications acquired by components in the department-wide inventory. While it is reasonable to expect that priority be given to major and high-risk investments, applications are nevertheless part of the portfolio and should be accounted for as such. Not accounting for them may result in missed opportunities to identify savings and efficiencies. It is also inconsistent with OMB guidance for implementing FITARA, which requires that CIOs have increased visibility into all IT resources. In addition, the lack of a comprehensive inventory presents a security risk. If agencies are not aware of all of their assets, they cannot secure them, resulting in a vulnerable posture. Given the importance of securing federal systems and data to ensuring public confidence and the nation's safety, prosperity, and well-being, we designated federal information security as a government-wide high-risk area in 1997. In 2003, we expanded this area to include computerized systems supporting the nation's critical infrastructure.
In our high-risk update in February 2015, we further expanded this area to include protecting the privacy of personal information that is collected, maintained, and shared by both federal and nonfederal entities. As previously noted, application rationalization is the process of streamlining the portfolio to improve efficiency, reduce complexity and redundancy, and lower the cost of ownership. It can be done in many ways, including retiring aging and low-value applications, modernizing aging and high-value applications, eliminating redundant applications, standardizing on a common technology platform and version (as is the case for moving to shared services), or consolidating applications. Based on common practices identified in technical papers from industry experts, to effectively perform rationalization, an agency should first establish a complete inventory of applications. It should then collect and review cost, technical, and business value information for each application, and use that information to make rationalization decisions. These practices are consistent with those used to manage investment portfolios. Therefore, an agency can achieve application rationalization through established practices related to investment management, including budget formulation, security, or enterprise architecture. Each of the six selected agencies relied on their investment management processes and, in some cases, supplemental processes to rationalize their applications to varying degrees. However, five of the six agencies acknowledged that their processes did not always allow for collecting or reviewing the information needed to effectively rationalize all their applications. The sixth agency, NSF, stated its processes allow it to effectively rationalize its applications, but we found its supporting documentation to be incomplete. Only one agency, NASA, had plans to address identified shortcomings.
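The collect-then-decide step described above is often illustrated in industry practice with a simple grid that plots each application's business value against its technical health. The sketch below is one such heuristic, not any agency's actual method; the 1-10 scores, the threshold of 5, and the action labels are our own assumptions.

```python
# Hypothetical value-versus-health decision rule for application
# rationalization. Scores, thresholds, and labels are illustrative only.

def rationalization_action(app):
    """Suggest an action for one application, given 1-10 scores for
    business value and technical health."""
    high_value = app["business_value"] >= 5
    healthy = app["technical_health"] >= 5
    if high_value and healthy:
        return "maintain"      # keep and continue investing as-is
    if high_value and not healthy:
        return "modernize"     # aging but high-value
    if not high_value and healthy:
        return "consolidate"   # candidate for merging or shared services
    return "retire"            # aging and low-value

def plan(portfolio):
    """Map each application name to a suggested action."""
    return {app["name"]: rationalization_action(app) for app in portfolio}
```

In practice, the scores would come from the cost, technical, and business value information collected for each application, and the suggested action would be a starting point for review rather than an automatic decision.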
The following describes the six selected agencies' processes for rationalizing their applications, provides rationalization examples, identifies weaknesses and challenges, and addresses plans, if any, the agencies have for addressing them. DOD: The department uses its investment management process for defense business systems to annually review its applications. Officials noted that the department's enterprise architecture is also used to identify duplication and overlap among these applications. In addition, the department has identified eight enterprise common services for collaboration, content discovery, and content delivery that it is requiring its components to use to, among other things, improve warfighting efficiency and reduce costs. One example of rationalization that DOD provided resulted from its efforts with the Executive Business Information System, which was replaced by the Navy Enterprise Resource Planning system in a full migration in 2014. Cost savings or avoidances were estimated at $268,000 in fiscal year 2012 and almost $200,000 per year in fiscal years 2013 through 2015. In addition, in an effort to improve its financial management systems, the department has efforts underway to reduce the number of financial management systems from 327 to 120 by fiscal year 2019. However, officials acknowledged that its processes do not address all applications. Specifically, according to information provided by the department, about 1,200 enterprise IT and business systems that are associated with the Enterprise Information Environment Mission Area are not reviewed by the department—though they are reviewed by components—because they do not meet the definition of a defense business system. Officials cited several challenges with implementing systematic rationalization efforts, including the department's organizational structure and contractual agreements.
As an example, they noted that the Navy's Next Generation e-mail system is being procured through a contract with a particular vendor and as such would be difficult to consolidate with other department e-mail systems. They also noted that the cost of collecting additional cost, technical, and business value information, along with maintaining even more data at greater granularity, may outweigh the benefits. The department does not have plans at this time to further enhance its processes to rationalize its applications. While we recognize the challenges and costs that may be associated with systematic rationalization efforts, the Enterprise Information Environment Mission Area could be considered a near-term target for rationalization given the large number of enterprise IT and business systems associated with it. Modifying existing processes to allow for the collection, review, and evaluation of cost, technical, and business information for these systems at the department level could help identify opportunities for savings and efficiencies. DHS: DHS has several processes for rationalizing applications. For example, through its investment management process, portfolios are regularly assessed against criteria that help identify duplication. In addition, the department uses its DHS Collaborative Architecture Methodology in conjunction with its segment architectures to help identify duplication and fragmentation at different levels, including at the application level. The DHS IT Duplication Reduction Act of 2015 mandated that the department report on a strategy for reducing duplicative IT systems, and the department used the DHS Collaborative Architecture Methodology process to address this mandate, including about 700 commodity IT and back-office applications in the scope of the effort. Further, the department recently established an Application Services Council, chaired by its Enterprise Business Management Office.
According to its charter, the council is a cross-component and cross-disciplined leadership team responsible for developing, maintaining, and overseeing the Enterprise Information Technology Services Portfolio, Lifecycle Governance Model, and Roadmap. It is expected to take a strategic approach to evaluating existing and future IT service offerings—including software, platform, and infrastructure services—and provide a forum to identify strategies, best practices, processes, and approaches for enterprise IT services, cloud computing, and shared service challenges. For example, officials reported the council is currently developing a standard service level agreement template and guidance, as well as a cloud adoption strategy. The department also reported other mechanisms related to rationalization include its Joint Requirements Council, strategic sourcing initiatives, IT acquisition reviews, and executive-level portfolio reviews. In addition, it reported that it uses its DHS Enterprise Architecture Information Repository Technical Reference Model to track application products and software versions—mainly consisting of commercial off-the-shelf software. The product information is gathered through the use of continuous network discovery scans. Examples of rationalization include the consolidation of learning management systems and the consolidation of site services, including help desk operations. The consolidation of learning management systems was identified through the segment architecture process and is expected to result in projected savings of 10 to 20 percent in fiscal year 2016 after transition costs are addressed. The modernization of the department's help desk and on-site operations resulted in savings that cumulatively accrued to $202 million by fiscal year 2015 due to similar efforts among all department components.
However, DHS's processes do not address all applications because, while the components may carry out their own rationalization efforts, the department does not always collect the application-level cost, technical, or business information for applications used by its components. Specifically, officials reported challenges tracking product-level information for deployed applications and difficulty gaining visibility into all the supporting application products for large systems. Officials particularly noted they have been challenged to collect such information and cited a general lack of visibility into the components' budgets and spending. They also noted it was not clear whether there was a good return on investment for the resources needed to collect additional technical, cost, and business value data for systematic application rationalization efforts. Officials reported the department had a financial systems modernization effort underway that would provide greater visibility into components' spending, but they did not have a plan to address the collection and review of technical and business value information. While we recognize that collecting additional details on all applications may not be cost-beneficial, the department could consider taking a segmented approach and initially identify one high-cost function for which it is not currently collecting or reviewing detailed cost, technical, and business information across the department. It could then modify existing processes to collect and review this information. These actions would assist the CIO in gaining visibility into all IT resources, as specified in the OMB implementation guidance for FITARA, and also help identify additional opportunities for savings and efficiencies. NASA: NASA uses its current investment management process—the Capital Planning and Investment Control process—and its configuration management tools to review its applications.
NASA officials reported examples of rationalization that resulted in significant savings. These included the NASA.gov Portal Cloud Transition, which resulted in estimated savings of $4 million, and the Enterprise Business Portal Transition/Consolidation, which resulted in estimated savings of about $184,000 per year. However, NASA officials acknowledged that their current processes do not provide the level of detail needed to effectively rationalize the agency's applications. In terms of challenges to rationalizing applications, officials stated that it is difficult to obtain transparency on all applications since each of the agency's centers runs independently. In addition, officials stated that determining application business value is currently subjective because the agency's process for obtaining this information is to ask the application owner what the impact on the agency would be if the application did not exist, whereas application technical health information is more concrete. Furthermore, NASA officials stated that there is no systematic process to review applications facing end-of-life issues due to flat budgets and budget cuts. NASA has developed a plan for a supplemental process (the annual capital investment review process) that is to allow the agency to, among other things, collect detailed data about its applications. The agency has begun to implement the plan and has completed the first milestone of the process, which included conducting a data call to gather and validate application information provided by the various centers and agency stakeholders. At the time of our review, NASA had also performed an initial review and analysis of the information collected and identified optimization opportunities, including developing a plan to consolidate, decommission, or invest to achieve maximum cost efficiencies and process effectiveness across the application program.
Fully implementing the annual capital investment review process could better position the agency to identify additional opportunities for savings and efficiencies. Interior: As part of its budget formulation process, Interior performs rationalization through annual reviews of its portfolio of investments (and supporting applications) against criteria which measure business value and technical fit. Reported examples of application rationalization include Interior's cloud e-mail and collaboration services initiative, which consolidated 14 disparate systems into a single enterprise system and achieved a cost savings/avoidance of $13.56 million, and the consolidation of the Enterprise eArchive System with the eMail Electronic Records and Document Management System, which resulted in cost savings/avoidance of $6.1 million. However, the department reported that its portfolio review process is not standardized because it has not been fully defined or established in policy. In addition, it has only been used at the department level, not at the bureaus or offices, and there is a lack of confidence in the data that is collected to support the analyses. In comments on a draft of this report, the department noted that it has also yet to document a plan to implement policy associated with these efforts, which it believes would establish a standard analytical technique for rationalizing the investment portfolio. Such a plan would also help secure the commitment needed to carry out planned efforts. The department reported several challenges to rationalizing its applications, including (1) ensuring the quality and accuracy of data collected since it relies largely on manual processes for collecting information and (2) the lack of standard portfolio evaluation techniques to support information resource management decision-making across the department. The department has efforts underway which should help address these challenges.
Specifically, it is making changes to its information resource management governance. According to the department, these changes, combined with efforts to implement the CIO responsibilities specified in FITARA, should help to address the challenges to rationalizing its applications and allow for rationalization of all applications. However, while the department has defined and begun to implement criteria to assess whether or not an investment and its underlying applications are wasteful, low-value, or duplicative, it has not documented its plan for improving its governance—which, according to the department, would support application rationalization. Such a plan would help secure the commitment needed to carry out planned efforts. Labor: Similar to the other agencies, the department uses its investment management process to review the majority of its business and enterprise IT applications. In addition, officials stated that the department initiated an enterprise-wide budget formulation and Information Technology Acquisition Review Board approval function beginning in fiscal year 2013, which has helped with rationalization. Officials stated that their efforts have resulted in rationalization of commodity applications and, on a case-by-case basis, the rationalization of other applications, such as a case management platform and an acquisition management system. Additional examples of application rationalization include the deployment of a web-based conferencing and collaboration shared service to employees, which resulted in avoidance of travel costs of about $2.3 million. The department also noted benefits of moving to a cloud e-mail solution, such as saved time and increased user satisfaction. However, officials identified weaknesses and challenges with rationalizing their applications.
Specifically, they reported that, in most cases, IT investments are associated with a group of IT assets, including applications, and individual application information is therefore not reviewed, making it difficult to effectively rationalize. In addition, officials stated that the fact that each bureau-level agency has had authority and responsibility for managing its own applications and that the department has over 600 locations present challenges. Further, though senior officials, including the CIO, agreed with the benefits of rationalization, they did not have any plans to rationalize. They questioned the value of developing such plans, stating that (1) maintaining mission-critical applications and the department's aging infrastructure are current priorities and (2) funding may not be available to implement rationalization plans. While we agree that mission-critical applications should be given priority, rationalizing mission support applications, including enterprise IT and business systems, could result in solutions that allow agencies to focus more on mission capabilities and at the same time generate savings that could be reinvested. As we noted for DHS, the department could consider taking a segmented approach to further rationalization and identify a function for which it could modify existing processes to collect and review detailed application cost, technical, and business value information. NSF: NSF also uses its investment management processes and supporting budget formulation process—with key stakeholders such as the Executive IT Resources Board, Capital Planning and Investment Control Working Group, and Enterprise Architecture Working Group—to collect and review information for its investments. In addition, NSF's Enterprise Modernization Roadmap—which is updated annually—identifies applications along with their associated business segment and modernization status and plans.
NSF identified its e-mail migration to a new platform, which was completed in July 2013, as the application rationalization effort with the highest savings. According to the agency's November 2015 integrated data collection submission to OMB, the migration effort resulted in cost avoidances of $60,000 in 2014. Other examples of application rationalization include the modernization and consolidation of NSF's grant systems, the 2014 retirement of the financial functions of a legacy system, and the implementation of its financial system modernization initiative. However, while officials told us that evaluations for all applications meeting the scope of our review would be included in the roadmap, we identified only half of the applications (9 of 18). In addition, cost information was provided in the roadmap for only three individual applications. NSF officials told us that, because they are a relatively small agency with a single mission in a single location, many of their processes are handled informally and not thoroughly documented, but that they are able to discuss all the applications with each other on a regular basis and, as a result, there is no duplication. Nevertheless, consistently documenting the evaluations and costs for all applications in the roadmap would improve transparency. While it is encouraging that 13 of the 24 CFO Act agencies fully met at least three of the four practices for establishing a complete software application inventory, most could improve their software application inventories—albeit to varying degrees—by taking steps to fully meet the practices we identified as being either partially met or not met. Doing so would better position them to identify opportunities to rationalize their applications, which could lead to savings and efficiencies.
In addition, they would be better positioned to comply with OMB's implementation guidance for the recent IT acquisition reform law, which requires that CIOs have increased visibility into all IT resources, and to ensure they are effectively securing their IT assets. Six selected agencies used their investment management processes and sometimes supplemental processes to rationalize their applications. Of the six agencies, one—NSF—had processes that allowed it to rationalize all applications, though the supporting documentation was not always complete. In addition, while the remaining five agencies' processes did not allow for rationalizing all applications, only one—NASA—had plans to address identified weaknesses. While these agencies all had examples of rationalization resulting in savings and efficiencies, modifying their existing processes to more completely address their applications would help identify additional opportunities to achieve such savings and efficiencies, which, even if small, would add up across agencies and over time. To improve federal agencies' efforts to rationalize their portfolio of applications, we are recommending that: the heads of the Departments of Agriculture, Commerce, Education, Energy, Health and Human Services, Housing and Urban Development, the Interior, Labor, State, Transportation, the Treasury, and Veterans Affairs; and heads of the Environmental Protection Agency; National Aeronautics and Space Administration; National Science Foundation; Nuclear Regulatory Commission; Office of Personnel Management; Small Business Administration; Social Security Administration; and U.S.
Agency for International Development direct their CIOs and other responsible officials to improve their inventories by taking steps to fully address the practices we identified as being partially met or not met; and the Secretaries of Defense, Homeland Security, the Interior, and Labor; and the Director of the National Science Foundation direct the CIOs and other responsible officials to modify existing investment management processes to address applications more completely. Specifically, the Secretary of Defense should direct the responsible official to modify the department's existing processes to collect and review cost, technical, and business information for the enterprise and business IT systems within the Enterprise Information Environment Mission Area, which are currently not reviewed as part of the department's process for business systems; the Secretary of Homeland Security should direct the department's CIO to identify one high-cost function for which it could collect detailed cost, technical, and business information and modify existing processes to collect and review this information; the Secretary of the Interior should direct the department's CIO to document and implement a plan for establishing policy that would define a standard analytical technique for rationalizing the investment portfolio; the Secretary of Labor should direct the department's CIO to consider a segmented approach to further rationalization and identify a function for which it would modify existing processes to collect and review application-specific cost, technical, and business value information; and the Director of the National Science Foundation should direct the CIO to consistently document evaluations for all applications and report cost information for them in the roadmap or other documentation. We provided a draft of this report to the 24 CFO Act agencies in our review for comment and received responses from all 24.
Of the 24, 17 agreed with the recommendations directed to them; one (the Department of Defense) disagreed with the recommendations directed to it; five (the Department of the Treasury, the National Science Foundation, the Nuclear Regulatory Commission, the Small Business Administration, and the U.S. Agency for International Development) stated that they had no comments; and one (the Department of Justice) agreed with the assessment and conclusion for three of the four practices associated with establishing a complete software application inventory and provided clarifying information on the remaining practice. Several agencies also provided technical comments, which we incorporated as appropriate. The agencies’ comments and our responses are summarized below. In e-mail comments, the Department of Agriculture’s Senior Advisor for Oversight and Compliance Enterprise Management stated that the department concurred with our recommendation. The department also provided technical comments, which we incorporated as appropriate. As a result of these comments and additional documentation provided, we changed our evaluation of the practice associated with updating the software application inventory from not met to partially met. In written comments, the Department of Commerce concurred with our recommendation and stated that the department is committed to implementing a more efficient process by regularly updating its application inventory to ensure the reliability of the data collected. The department also specified actions it plans to take to provide regular updates of its inventory. The department’s comments are reprinted in appendix III. In written comments, the Department of Defense disagreed with both of our recommendations to the department. For the first recommendation, the department provided evidence showing that it updated its inventory after we sent the report out for comment. 
As a result, we changed the rating for the related practice from partially met to fully met and removed the associated recommendation. For the second recommendation, the department stated that 53 percent of the inventory records for the Enterprise Information Environment Mission Area we focused on were IT infrastructure assets (specifically network enclaves or circuits) and not applications subject to rationalization. The mission area nevertheless includes enterprise and business IT applications which could benefit from rationalization, as evidenced by the example of e-mail system consolidation provided in the comments. Given the number of systems involved (at least 1,200), collecting and reviewing cost, technical, and business information for them would help identify additional rationalization opportunities which could yield savings and efficiencies. We therefore believe a recommendation to address these systems is still warranted. The department also stated that our draft implied that major IT infrastructure modernization efforts, many of which involve the Enterprise Information Environment Mission Area, were not reviewed or properly managed by the department. However, as noted in our report, we did not include IT infrastructure assets in the scope of our review and therefore made no comment on how these assets are being managed. We have restated our emphasis on enterprise and business IT systems as it relates to the mission area where appropriate. Finally, in its comments the department stated that our report ignored significant Enterprise Information Environment Mission Area application rationalization efforts, such as the Pentagon IT consolidation under the Joint Service Provider, the Business Process and System Review, and ongoing efforts concerning public-facing websites and associated systems. 
While we were not informed of these efforts during our review, our intent was to highlight additional opportunities for rationalization, not discount any that might have already been implemented. The department also provided technical comments, which we incorporated into the report as appropriate. The department’s comments are reprinted in appendix IV. In written comments, the Department of Education concurred with our recommendation and described actions it plans to take to address it. The department’s comments are reprinted in appendix V. In written comments, the Department of Energy concurred with our recommendation. In addition, the department stated that it partially met the four practices associated with establishing a complete software application inventory and provided the IT Asset Inventory it submitted to OMB in May 2016 and other documentation supporting this claim. Our review of the documentation found that the inventory includes business and enterprise IT systems; however, it does not include those systems from all organizational components and it is missing the business function code for a large number of systems. Furthermore, while the department is updating the IT Asset Inventory in response to OMB guidance for the fiscal year 2016 integrated data collection submission process, it has not implemented quality control processes to ensure the reliability of the data within the inventory. As a result, we changed the department’s rating for the practice associated with including business and enterprise IT systems from not met to fully met and from not met to partially met for the remaining three practices. We modified sections of the report specific to the department accordingly. The department’s comments are reprinted in appendix VI. In written comments, the Department of Health and Human Services concurred with our recommendation and stated that it would review the feasibility of fully addressing the practices it partially met. 
The department’s comments are reprinted in appendix VII. In written comments, the Department of Homeland Security concurred with our recommendation and described actions it plans to take to implement it. The department’s comments are reprinted in appendix VIII. In written comments, the Department of Housing and Urban Development concurred with our recommendation and stated that more definitive information with timelines will be provided once the final report has been issued. The department’s comments are reprinted in appendix IX. In written comments, the Department of the Interior stated that it would agree with the recommendations if we made its requested changes. However, we disagreed with the request to change the rating for the practice associated with regularly updating the inventory from not met to partially met because, while the department provided evidence supporting its claim that it recently updated its inventory, the evidence was not sufficient. Specifically, the department provided an e-mail requesting that the bureaus and offices complete an inventory survey. However, the department did not show how the survey resulted in updates to the inventory. We incorporated the remaining requested changes in the report as appropriate. The department’s comments are reprinted in appendix X. In written comments, the Department of Justice stated that it concurred with our assessment and conclusions. The department also provided clarifying information regarding its procedures to ensure application inventory accuracy and provided documentation showing that it updates the inventory and implements quality controls to ensure its reliability. As a result, we changed the rating for the related practice from partially met to fully met and removed the recommendation made to the department. The department’s comments are reprinted in appendix XI. 
In written comments, the Department of Labor concurred with our recommendations to the department and stated that it would take the necessary steps to address the recommendations. The department’s comments are reprinted in appendix XII. In written comments, the Department of State concurred with our recommendation to the department, and described current and planned actions to fully address it. The department’s comments are reprinted in appendix XIII. In e-mail comments, the Department of Transportation’s Audit Liaison stated that the department concurred with our findings and recommendation. In e-mail comments, the Department of the Treasury’s Audit Liaison stated that the department did not have any comments. In written comments, the Department of Veterans Affairs concurred with our conclusions and recommendation. The department also provided information on the actions it plans to take to address the recommendation. The department’s comments are reprinted in appendix XIV. In written comments, the Environmental Protection Agency generally agreed with our recommendation. The agency also asked that we add some of the language from the detailed evaluation in appendix II to the example in the body of the report to provide the full context of its practices. We added the language as requested. The agency’s comments are reprinted in appendix XV. In e-mail comments, the General Services Administration’s Associate CIO of Enterprise Planning and Governance concurred with the report. The agency also provided evidence of its processes to update the inventory and ensure the reliability of the data in the inventory, including the coordination between its Enterprise Architecture Team and subject matter experts. As a result, we changed the agency’s rating for the related practice from partially met to fully met and removed our recommendation to the agency. 
In written comments, the National Aeronautics and Space Administration concurred with our recommendation and stated that it would utilize the capital investment review process it is currently implementing to improve its inventory. The agency’s comments are reprinted in appendix XVI. In e-mail comments, the National Science Foundation Office of Integrated Activities’ Program Analyst stated that it had no comments on the draft report. In written comments, the Nuclear Regulatory Commission stated that it is in general agreement with the report. The agency’s comments are reprinted in appendix XVII. In written comments, the Office of Personnel Management concurred with our recommendation and described plans to fully address it. The agency’s comments are reprinted in appendix XVIII. In e-mail comments, the Small Business Administration Office of Congressional and Legislative Affairs’ Program Manager stated that the Office of the Chief Information Officer believes the report captures its current posture. In written comments, the Social Security Administration agreed with our recommendation to the agency, but disagreed with the partially met rating for regularly updating the inventory, including implementing quality controls, stating that it had provided evidence supporting its implementation of the practice. However, as noted in the report, the Social Security Administration reported that its systems development lifecycle contains steps for maintaining the inventory but did not provide evidence showing that it is using this process to regularly update the inventory. Therefore we did not change our rating. The agency’s comments are reprinted in appendix XIX. In an e-mail, the U.S. Agency for International Development Audit, Performance and Compliance Division’s Management Analyst stated that the agency did not have any comments. 
We are sending copies of this report to interested congressional committees; the heads of the Departments of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, the Interior, Justice, Labor, State, Transportation, the Treasury, and Veterans Affairs; the Environmental Protection Agency; the General Services Administration; the National Aeronautics and Space Administration; the National Science Foundation; the Nuclear Regulatory Commission; the Office of Personnel Management; the Small Business Administration; the Social Security Administration; the U.S. Agency for International Development; the Director of the Office of Management and Budget; and other interested parties. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-9286 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our objectives were to determine (1) whether agencies have established complete application inventories and (2) to what extent selected agencies have developed and implemented processes for rationalizing their portfolio of applications. For the first objective, we reviewed the 24 major agencies covered by the Chief Financial Officers (CFO) Act of 1990. To ensure consistency, we decided to focus on the software applications associated with the business and enterprise information technology (IT) commodity IT categories defined in the Office of Management and Budget (OMB) guidance since they would be familiar to the agencies in our scope. 
OMB defines enterprise IT systems as e-mail, identity and access management, IT security, web infrastructure, and collaboration tools; and business systems as finance, human resources, and other administrative functions. We then identified practices to assess whether agencies had a complete software application inventory. To identify these practices, we primarily relied on our guide for assessing the reliability of computer-processed data, which addresses questions about the currency of the data and how often it is updated, procedures for ensuring the completeness of the data, and quality control processes in place to ensure the accuracy of the data; and on criteria used in our recent report on federal software licenses, which determined whether agencies had a comprehensive software license inventory, among other things. We determined that, to be considered complete, an inventory should: include business systems and enterprise IT systems, as defined by OMB; include these systems from all organizational components; specify basic attributes, namely, application name, description, owner, and function supported; and be regularly updated with quality controls in place to ensure the reliability of the information collected. Following the identification of these four practices, we asked the 24 CFO Act agencies for their software application inventories. We used a set of structured questions to determine whether the agencies implemented the practices and identify lessons learned and challenges faced in establishing a complete software application inventory. We analyzed supporting documentation, such as agency and departmental guidance, policies, and procedures for updating the inventories, and interviewed relevant agency officials, as needed. We compared the information received to the four practices. 
We determined a practice to be fully met if agencies provided evidence that they fully or largely implemented the practice for establishing a complete application inventory; partially met if agencies provided evidence that they addressed some, but not all, of the practice for establishing a complete application inventory; and not met if the agencies did not provide any evidence that they implemented the practice for establishing a complete application inventory. To verify the inclusion of business and enterprise IT systems, we analyzed agencies’ inventories and looked for examples of each type of system identified by OMB in the business and enterprise IT commodity categories. When we were not able to identify a type of system, we followed up with agencies to determine the reason for the omission. We considered the practice to be fully met if agencies’ inventories included all of the business and enterprise IT system types or if agencies provided valid reasons for excluding them. We considered the practice to be partially met if agencies acknowledged they were missing applications or if we determined system types to be missing and agencies did not provide a valid reason for this. Although we followed up with agencies to determine whether they maintained separate inventories of software licenses when they were not included in the inventories provided, we did not consider the inclusion of these applications in determining our rating because software licenses are expected to be tracked separately by OMB. To verify the inclusion of systems from all organizational components, we analyzed agencies’ inventories against the list of organizational components to determine whether they were included. We followed up with agency officials to determine causes, if any, for missing components. We considered the practice to be fully met if inventories included applications from all organizational components or if agencies provided valid reasons for excluding them. 
We considered the practice to be partially met if agencies acknowledged they were missing organizational components or if we determined several components to be missing and agencies did not provide a valid reason for this. Regarding application attributes, we determined that, at a minimum, agencies should have a name, a description, an owner, and the function supported for each application. We considered the practice to be fully met if inventories included these attributes for all or most applications or the agencies provided evidence that attributes not included in the inventory provided were being tracked separately. We determined the practice to be partially met if agencies acknowledged that they were missing any of the attributes or if we determined them to be missing from the inventory and agencies did not provide alternate sources for them. For the last practice, we determined whether agencies (1) used relevant methods to update and maintain the application inventory and (2) implemented controls to ensure the reliability of the information collected. Regarding these controls, we looked for the use of automated tools to collect and track information, as their use increases reliability. We determined the practice to be fully met if agencies provided evidence that they regularly updated the inventory and had controls for ensuring the reliability of information collected, including the use of automated tools, or if agencies had mitigating factors when these processes were not in place. We determined the practice to be partially met when agencies provided policies and procedures but no evidence of actual inventory updates or quality controls. We also determined the practice to be partially met if agencies provided evidence of either regular updates or controls for ensuring reliability but not both or did not make use of automated tools for collecting or maintaining information and had no mitigating factors. 
Finally, we also determined the practice to be partially met if agencies provided only draft policy and guidance for their processes. For our second objective, we selected 6 of the 24 CFO Act agencies—the Departments of Defense, Homeland Security, the Interior, and Labor; and the National Aeronautics and Space Administration and National Science Foundation—to assess their application rationalization plans and efforts to implement them. We selected the agencies based on three factors: whether they had an application rationalization process (in our initial set of structured questions to agencies, we asked whether they had a plan or process for rationalizing applications and selected those that reported having one); the size of the agency based on fiscal year 2015 IT spending (we selected two large agencies, with spending equal to or greater than $3 billion; two medium agencies, with spending between $1 billion and $3 billion; and two small agencies, with spending of less than $1 billion, to cover a full range of IT spending); and whether they were known for effectively rationalizing their applications, based on OMB observations and our research on IT acquisition reform recognizing agencies for their application rationalization efforts. We identified key practices for effectively rationalizing applications. To do so, we reviewed OMB guidance on federal IT management. We also reviewed technical reports on application rationalization from industry experts. 
We synthesized the information collected, looked for themes, and determined that, to effectively rationalize applications, agencies should have a process addressing the following four key practices: establish an application inventory; collect information on each application, such as total cost, technical details, and business value; evaluate the portfolio and make application rationalization decisions based on a review of collected information, determining what applications to retain, retire, replace, eliminate, modernize, or consolidate/move to shared services; and execute and manage the process by implementing decisions from the evaluation, evaluating process outcomes against defined metrics, and adjusting as needed. While our research identified specific processes for rationalizing applications, the principles of collecting application information and reviewing it to inform decision making are consistent with those used to manage investment portfolios. Therefore, we considered established practices related to investment management, budget formulation, security, or enterprise architecture. Since the first key practice was addressed in our first objective, we focused on the last three practices. To do so, we interviewed relevant officials using a structured set of questions that were developed in conjunction with internal experts. We also reviewed documentation to determine the extent to which agencies had processes addressing these practices. We also asked agencies to provide their two best examples of application rationalization in terms of savings or cost avoidance—to illustrate the results of rationalization. When agencies did not provide two examples meeting these conditions—the case for DOD, DHS, and NSF—we drew examples from other documentation they had provided. 
Finally, we interviewed staff from OMB’s Office of the Federal Chief Information Officer to determine whether and how the office monitors agencies’ efforts to rationalize their portfolio of applications as recommended in OMB guidance. We also interviewed the staff to determine the impetus for the IT asset data inventory guidance and the planned use of the information collected. We conducted this performance audit from May 2015 to September 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following tables provide our evaluation of the 24 agencies’ application inventories. In addition to the individual named above, the following staff made key contributions to this report: Sabine Paul (Assistant Director), Chris Businsky, Rebecca Eyler, Dan Gordon, James MacAulay, Lori Martinez, Paul Middleton, and Di’Mond Spencer.
The federal government is expected to spend more than $90 billion on IT in fiscal year 2017. This includes a variety of software applications supporting agencies' enterprise needs. Since 2013, OMB has advocated the use of application rationalization. This is a process by which an agency streamlines its portfolio of software applications with the goal of improving efficiency, reducing complexity and redundancy, and lowering the cost of ownership. GAO's objectives were to determine (1) whether agencies have established complete application inventories and (2) to what extent selected agencies have developed and implemented processes for rationalizing their portfolio of applications. To do this, GAO assessed the inventories of the 24 CFO Act agencies against four key practices and selected six agencies—the Departments of Defense, Homeland Security, the Interior, and Labor; and NASA and NSF—due to their IT spending, among other factors, to determine whether they had processes for rationalizing their applications. Most of the 24 agencies covered by the Chief Financial Officers (CFO) Act of 1990 in the review fully met at least three of the four practices GAO identified to determine if agencies had complete software application inventories. To be considered complete, an inventory should (1) include business and enterprise information technology (IT) systems as defined by the Office of Management and Budget (OMB); (2) include these systems from all organizational components; (3) specify application name, description, owner, and function supported; and (4) be regularly updated. Of the 24 agencies, 4 (the Departments of Defense, Homeland Security, and Justice, and the General Services Administration) fully met all four practices, 9 fully met three practices, 6 fully met two practices, 2 fully met one practice, and 3 did not fully meet any practice (see figure). 
A January 2016 OMB requirement to complete an IT asset inventory by the end of May 2016 contributed to most of the agencies fully meeting the first three practices. Agencies that did not fully address these practices cited, among other things, their focus on major and high-risk investments as a reason for not having complete inventories. However, not accounting for all applications may result in missed opportunities to identify savings and efficiencies. It is also inconsistent with OMB guidance regarding implementation of the IT acquisition reform law, referred to as the Federal Information Technology Acquisition Reform Act, which requires that Chief Information Officers at covered agencies have increased visibility into all IT resources. Not accounting for all applications also presents a security risk since agencies can only secure assets if they are aware of them. Each of the six selected agencies relied on their investment management processes and, in some cases, supplemental processes to rationalize their applications to varying degrees. However, five of the six agencies acknowledged that their processes did not always allow for collecting or reviewing the information needed to effectively rationalize all their applications. The sixth agency, the National Science Foundation (NSF), stated its processes allow it to effectively rationalize its applications, but agency documentation supporting this assertion was incomplete. Only one agency—the National Aeronautics and Space Administration (NASA)—had plans to address shortcomings. Taking action to address identified weaknesses with agencies' existing processes for rationalizing applications would assist with identifying additional opportunities to reduce duplication and achieve savings. GAO is recommending that 20 agencies improve their inventories and that five of the selected agencies take actions to improve their processes to rationalize their applications more completely. 
The Department of Defense disagreed with both recommendations made to it. After reviewing additional evidence, GAO removed the recommendation associated with improving the inventory but maintained the other. The other agencies agreed with or had no comments on the draft report.
Each of the four major federal land management agencies manages its lands and the resources they contain on the basis of its legislatively mandated responsibilities. In general, the Fish and Wildlife Service and the National Park Service manage their lands primarily for noncommodity uses. The Fish and Wildlife Service manages its lands primarily to conserve and protect fish and wildlife and their habitat, although other uses—such as recreation (including hunting and fishing), mining and mineral leasing, livestock grazing, and timber harvesting—are allowed when they are compatible with the primary purposes for which the lands are managed. The National Park Service manages its lands to conserve, preserve, protect, and interpret the nation’s natural, cultural, and historic resources for the enjoyment and recreation of current and future generations. Conversely, the Forest Service and the Bureau of Land Management are legislatively mandated to manage their lands for both commodity and noncommodity uses. For example, the Forest Service’s organic legislation—the Organic Administration Act of 1897—refers to water flows and timber supply. The Multiple-Use Sustained-Yield Act of 1960 added responsibilities for recreation, range, and fish and wildlife and required the agency to manage its lands so as to sustain all of these uses. The National Forest Management Act of 1976 (1) recognized wilderness as a use of the forests and (2) modified the Forest Service’s mandate for fish and wildlife to require the maintenance of diverse plant and animal communities (biological diversity). Similarly, the Federal Land Policy and Management Act of 1976 requires the Bureau of Land Management to manage its lands for multiple uses and sustained yield. The act defines multiple uses to include recreation; range; timber; minerals; watershed; fish and wildlife and their habitat; and natural, scenic, scientific, and historic values. 
Both the Forest Service and the Bureau of Land Management have legislatively based incentives for producing resource commodities. For example, the Forest Service receives some of its operating funds from the receipts of timber sales under the Knutson-Vandenberg Act of 1930, which authorizes the national forests to retain a portion of their timber sale receipts to help fund reforestation and other activities as well as regional office and headquarters expenses. Under the Taylor Grazing Act of 1934, the Bureau of Land Management may issue permits for the use of rangelands only to persons engaged in the business of livestock grazing. The permits may not be issued for other uses, such as to provide habitat for fish and wildlife. As a result, the Forest Service and the Bureau of Land Management have managed their lands to a great extent for commodity uses, such as timber harvesting, livestock grazing, and mineral production. In addition, all four agencies must comply with the requirements of the National Environmental Policy Act (NEPA). NEPA and its implementing regulations specify the procedures for integrating environmental considerations into the agencies’ management of lands and resources. In managing their lands and resources, the agencies must also comply with the requirements of other environmental statutes, including the Endangered Species Act, the Clean Water Act, and the Clean Air Act, as well as numerous other laws and regulations. The Forest Service alone is subject to 212 laws affecting its activities and programs. Authority for implementing and enforcing these laws is dispersed among several federal agencies, including the Fish and Wildlife Service, the Department of Commerce’s National Marine Fisheries Service, the Environmental Protection Agency (EPA), and the U.S. Army Corps of Engineers, as well as state and local agencies. 
Several changes and developments suggest a basis for reviewing the current approach to federal land management with an eye to improving its efficiency and effectiveness. These changes and developments include the increased similarity in the responsibilities and the increased complexity in the management of federal lands, together with budgetary and ecological considerations. Over time, the responsibilities of the four major federal land management agencies have grown more similar. Specifically, the Forest Service and the Bureau of Land Management now provide more noncommodity uses on their lands. For instance, in 1964, less than 3 percent (16 million acres) of their lands were managed for conservation—as wilderness, wild and scenic rivers, and recreation. By 1994, this figure had increased to about 24 percent (over 108 million acres). According to Forest Service officials, several factors have required the agency to assume increased responsibilities for noncommodity uses, especially for biological diversity and recreation. These factors include (1) the interaction of legislation, regulation, case law, and administrative direction, (2) growing demands for noncommodity uses on Forest Service lands, and (3) activities occurring outside the national forests, such as timber harvesting on state, industrial, and private lands. With this shift in its responsibilities, the Forest Service is less able to meet the demands for commodity uses on its lands, especially for timber harvesting. For example, 77 percent of the 24.5 million acres of Forest Service and Bureau of Land Management lands in western Washington State, Oregon, and California that were available for commercial timber harvesting have been set aside or withdrawn primarily for noncommodity uses. 
In addition, although the remaining 5.5 million acres, or 22 percent, are available for regulated harvesting, the minimum requirements for maintaining biological diversity and water quality may limit the timing, location, and amount of harvesting that can occur. Moreover, harvests from these lands could be further reduced by plans to protect threatened and endangered salmon. The volume of timber sold from Forest Service lands in the three states declined from 4.3 billion board feet in 1989 to 0.9 billion board feet in 1994, a decrease of about 80 percent. While our work at the Bureau of Land Management has been more limited, this agency is also assuming increased responsibilities for noncommodity uses. This shift in responsibilities of the Forest Service and the Bureau of Land Management to more noncommodity uses has contributed to what is sometimes referred to as a “blurring of the lines” among the four major federal land management agencies. Some Forest Service officials are concerned about the workability of the agency’s current statutory framework, which they believe is making the management of the national forests increasingly complex. They believe that it is sometimes difficult to reconcile differences among laws and regulations. For example, the National Forest Management Act requires the Forest Service to maintain diverse plant and animal communities. One process that nature uses to produce such biological diversity is periodic small wildfires that create a variety of habitats. However, until recently, a federal policy required the suppression of all fires on federal lands. As a result, there has been an accumulation of fuels on the forests’ floors. The Forest Service now plans to undertake prescribed burning to restore the forests’ health and avoid unnaturally catastrophic fires. 
However, the minimum standards for air quality required under the Clean Air Act may at times prohibit the Forest Service from achieving this goal by limiting the timing, location, and amount of prescribed burning that can occur. In addition, the minimum standards for water quality required under the Clean Water Act and the conservation of species listed as endangered or threatened under the Endangered Species Act also can limit the timing, location, and amount of prescribed burning that can occur, since soils from burned areas wash into streams, modifying species’ habitats. Reconciling differences among laws and regulations is further complicated by the dispersal of authority for these laws among several federal agencies and state and local agencies. Disagreements among the agencies on whether or how these requirements can best be met sometimes delay projects and activities. According to officials in the federal land management and regulatory agencies with whom we spoke, these disagreements often stem from differing evaluations of environmental impacts and risks. For example, in 1995, the Fish and Wildlife Service, the National Marine Fisheries Service, and EPA could not agree with the Forest Service on the extent of risk that the Thunderbolt salvage timber sale—on the Boise and Payette National Forests in central Idaho—might pose to salmon spawning habitat. The federal government’s increased emphasis on downsizing and budgetary constraint demands that federal agencies look beyond existing jurisdictional boundaries to find ways to reduce costs, increase efficiency, and improve service to the public. Such gains could be achieved by refocusing, combining, or eliminating certain functions, systems, programs, activities, or field locations. Joint efforts in planning and budgeting; joint use of administrative, technical, and management systems; and joint stewardship of natural and cultural resources could lead to greater efficiency. 
For instance, in 1985 the Forest Service and the Bureau of Land Management proposed to the Congress to merge all field offices located in the same communities in western Oregon, restructure boundaries to achieve the optimum size and balance among land units, and eliminate some managerial and overhead positions. The agencies projected that this proposal would have reduced the number of permanent employees by 280 and would have achieved annual savings of $10.3 million (in 1985 dollars) once it was fully implemented. The Congress did not act on this “interchange” proposal. Ecological considerations also suggest that the federal land management agencies rethink their organizational structures and relationships with one another. Scientific research has increased the agencies’ understanding of the importance and functioning of natural systems, such as watersheds, airsheds, soils, and vegetative and animal communities, specific components of which (e.g., threatened and endangered species and wetlands) are protected under various environmental statutes. The boundaries of these natural systems are often not consistent with existing jurisdictional and administrative boundaries. Hence, activities and uses affecting these systems may need to be coordinated and managed across federal land units and agencies. For example, federal efforts to restore the environment of South Florida—including the Everglades and Florida Bay—transcend existing jurisdictional and administrative boundaries and involve numerous federal agencies, including the National Park Service, the Fish and Wildlife Service, the National Marine Fisheries Service, EPA, and the Corps of Engineers. Two basic strategies have been proposed to improve federal land management: (1) streamlining the existing structure by coordinating and integrating functions, systems, activities, programs, and field locations and (2) reorganizing the structure by combining agencies. 
The two strategies are not mutually exclusive, and some prior proposals have encompassed both. In 1983, President Reagan’s Private Sector Survey on Cost Control, also known as the Grace Commission, recommended that the Forest Service and the Bureau of Land Management combine administrative functions, eliminate duplicative efforts, and plan a program of jurisdictional land transfers to accomplish these objectives. Similarly, in 1993, the Clinton administration established the Interagency Ecosystem Management Task Force to develop an approach to ensuring a sustainable economy and a sustainable environment. In a November 1995 report, the task force stated that such an approach would entail a shift from the federal government’s traditional focus on an individual agency’s jurisdiction to a broader focus on the actions of multiple agencies across larger ecological areas. The task force recommended that federal agencies strive for greater flexibility in pursuing their missions within existing legal authorities and develop better information, communication, coordination, and partnerships. In December 1995, 13 federal departments and agencies, together with the Council on Environmental Quality, signed a memorandum of understanding establishing a network of agency coordinators and pledging to work together in support of such an approach. On February 1, 1994, and February 9, 1995, we testified that the four major federal land management agencies need to reduce costs, increase efficiency, and improve service to the public, as well as manage activities and uses across existing federal land units and jurisdictions so as to preserve the nation’s natural resources and sustain their long-term economic productivity. This approach would require them to look beyond their jurisdictions and work with the Congress and each other to develop a strategy to coordinate and integrate their functions, systems, activities, and programs so that they can operate as a unit at the local level. 
Over the last several years, the Forest Service and the Bureau of Land Management have collocated some offices or shared space with other federal agencies. They have also pursued other means of streamlining, sharing resources, and saving rental costs. However, the four major federal land management agencies have not, to date, developed a strategy to coordinate and integrate their functions, systems, activities, and programs. Several proposals for improving federal land management would reorganize the existing structure by combining various agencies. For example, in its 1970 report to the President and the Congress, the Public Land Law Review Commission (a bipartisan group established by the Congress in 1964 with members appointed by both the President and the Congress) recommended that the Forest Service be transferred from the Department of Agriculture to the Department of the Interior, which would then be renamed the Department of Natural Resources. Subsequent proposals included additional agencies. For example, in 1971-72, the Nixon administration proposed adding the Corps of Engineers, Agriculture’s Soil Conservation Service (now the Natural Resources Conservation Service), and the National Oceanic and Atmospheric Administration in the Department of Commerce (which includes the National Marine Fisheries Service). Eight years later, the Carter administration made a similar proposal. Some Forest Service officials, including the Chief, believe that a commission similar to the Public Land Law Review Commission may need to be established if federal land management is to be improved. Such a commission would need to conduct a thorough review of federal land management and report its findings to the President and the Congress. Despite the commissions, reports, and recommendations over the past 26 years for streamlining or reorganizing federal land management, no significant legislation has been enacted. 
These efforts have not succeeded, in part, because they have not been supported by a solid consensus for change. For example, the Carter administration estimated that its proposal to create a Department of Natural Resources would result in annual savings of up to $100 million. However, it did not specify how these savings would be accomplished, and a consensus for change was never achieved. On May 17, 1995, in testimony before this Committee, the Comptroller General identified five principles to consider during any effort to streamline or reorganize government. These principles, based on past governmental restructuring efforts both inside and outside the United States, are as follows: (1) reorganization demands a coordinated approach, within and across agency lines, supported by a solid consensus for change in both the Congress and the administration; (2) reorganization should seek to achieve specific, identifiable goals; (3) once goals are defined, attention must be paid to how the federal government exercises its role, both in terms of organization and tools; (4) effective implementation is critical to success; and (5) sustained oversight by the Congress is needed to ensure effective implementation. Because the federal land management agencies have similar responsibilities yet different legislative requirements, any effort to streamline or reorganize them will require a coordinated approach within and across the agencies to avoid creating new, unintended consequences for the future. In particular, potential gains in efficiency need to be balanced against the policy reasons that led to the existing structure. For example, transferring responsibility for environmental compliance from regulatory agencies, such as the Fish and Wildlife Service, EPA, and the Corps of Engineers, to the Forest Service and the Bureau of Land Management may help expedite the implementation of projects and activities. 
However, any potential gains in efficiency from such a transfer would need to be balanced against the policy reasons that led originally to separating the responsibility for federal land management from the responsibility for regulatory compliance. Moreover, while there may be a growing consensus for streamlining or reorganizing the existing structure of federal land management, as the Comptroller General noted in his May 17, 1995, testimony, the key to any streamlining or reorganization plan—and the key to building a consensus behind it—is the creation of specific, identifiable goals. Applying this principle to federal land management will require decisionmakers to agree on, among other issues, how to balance differing objectives for commodity and noncommodity uses over the short and long term. For example, the Forest Service is experiencing increasing difficulty in reconciling conflicts among competing uses on its lands, and demands for forest uses will likely increase substantially in the future. Some Forest Service officials believe that the laws governing the agency’s mission provide little guidance for resolving these conflicts. As a result, they have suggested that the Congress needs to provide greater guidance on how the agency is to balance competing uses and ensure their sustainability. In particular, the Chief of the Forest Service has stated that (1) the maintenance and restoration of noncommodity uses, especially biological diversity, needs to be explicitly accepted or rejected and (2) if accepted, its effects on the availability of commodity uses need to be acknowledged. Once decisionmakers reach a consensus on specific, identifiable goals, the desired results these goals are to accomplish should be made explicit through performance measures. The Congress, in enacting the Government Performance and Results Act of 1993, recognized that to be effective, goals need measures to assess results. 
Without such measures, the agencies’ ability to improve performance and the Congress’s ability to conduct effective oversight will be hampered. Moreover, goals cannot be set and performance measures cannot be defined in a vacuum. Decisionmakers need to consider how the desired goals will be achieved. Our past work on reorganizations has shown that, all too often, the issue of how desired goals are to be achieved is not considered as part of the goal-setting process. Considering such issues as how agencies’ structures and processes will need to function to accomplish the goals can benefit the goal-setting process itself. By thinking through the implementation process, decisionmakers are better able to clarify the goals and the results to be achieved and to identify potential pitfalls. In summary, Mr. Chairman, the responsibilities of the four major federal land management agencies have become more similar and the management of federal lands more complex over time. These changes, as well as budgetary and ecological considerations, suggest a basis for reexamining the current approach to federal land management with an eye to improving its efficiency and effectiveness. Two basic strategies have been proposed to improve federal land management—one would focus primarily on streamlining the existing structure by coordinating and integrating functions, systems, activities, programs, and field locations, while the other would reorganize the structure primarily by combining agencies. Although it is not clear which strategy would be more effective, or whether a combination of the two would be more appropriate, it is clear that the effective implementation of either strategy will require, among other things, a solid consensus for change and the creation of specific, identifiable goals for managing commodity and noncommodity uses. Mr. Chairman, this concludes my prepared statement. We will be pleased to answer any questions that you or Members of the Committee may have. 
GAO discussed ways to improve the management of federal lands. GAO noted that: (1) the responsibilities of the National Park Service, Bureau of Land Management, Fish and Wildlife Service, and Forest Service have become increasingly similar; (2) federal land management has become increasingly complex due to the differences among laws and regulations; (3) federal land management agencies could be more cost efficient and ecologically effective if they improve their organizational structures and interagency relationships; (4) the two basic strategies to improve federal land management include streamlining the management structure by coordinating and integrating functions, systems, activities, programs, and field locations, and combining land management agencies; and (5) due to the lack of consensus for change, no legislation to streamline or reorganize the federal land management agencies has been enacted.
American Samoa lies 2,600 miles southwest of Hawaii and consists of seven islands, covering a land area of 76 square miles (see fig. 1). In 2003, it had a population of 57,844. The main island of Tutuila has very little level land and is mostly rugged, with four high peaks, the tallest rising over 2,000 feet. Agricultural production is limited by the scarcity of arable land, and tourism is impaired by the island’s remote location and lack of tourist-rated facilities. Two tuna canneries constitute the main sources of private sector employment. Most of the economic activity and government operations on Tutuila take place in the Pago Pago Bay area. As an unorganized, unincorporated U.S. territory, American Samoa is not subject to the U.S. Constitution in the same manner as the 50 states. For example, some constitutional rights, such as the rights to vote in national elections and to full voting representation in the U.S. Congress, do not apply to American Samoa. Although no congressional act formally establishes a government structure in American Samoa, the territory has its own local government and constitution. Those born in American Samoa are U.S. nationals. Since 1977, a popularly elected governor has headed the American Samoan executive branch for a 4-year term, and the legislature, or Fono, has comprised 18 elected senators and 20 elected representatives. Nearly 40 American Samoan departments, offices, and other entities provide public safety, public works, education, health, commerce, and other services to American Samoans. Providing these services has proved financially challenging for the American Samoan government. After a period of relative budget growth in the early 1980s, the territory’s finances rapidly deteriorated in the second half of the decade when expenditures exceeded income in American Samoa’s budget. In fiscal year 1991, the government borrowed $5 million from its employee pension fund to temporarily relieve its cash flow problems. 
Following a GAO report in 1992, Congress directed DOI and the American Samoa government to form a joint working group to address the government’s financial management problems. The working group made recommendations to the American Samoa government, which pledged to implement a financial recovery plan based on these recommendations. Beginning in fiscal year 1997, the Senate Appropriations Committee directed DOI to withhold $2 million of capital improvement funding from the territory until DOI could certify that the American Samoan government had adequately implemented the recovery plan. However, the territory’s financial situation subsequently worsened and, in 1999, Congress authorized a direct federal loan to American Samoa for $18.6 million to pay debts and implement reforms. In 2001, the American Samoa government submitted an initial fiscal reform plan to DOI. DOI and the American Samoa government signed an MOA in 2002, implementing fiscal and operational reforms. The MOA was designed to bring the American Samoa government operating expenses into balance with projected revenues for fiscal years 2003 and beyond. It also outlined a schedule for American Samoa to complete all outstanding single audit reports. Five federal departments have historically provided significant grants to the American Samoa government, including one large grant from DOI to support government operations. During fiscal years 1999-2003, DOI, USDA, ED, DOT, and HHS provided about $450 million in grant funds to American Samoa through 12 key grants. Of these 12 grants, 4 were structured specifically for American Samoa, 2 were structured for all U.S. insular areas, and 6 were structured in the same manner as in the 50 U.S. states. Table 1 shows the federal awarding departments and agencies, the grants, the grant structures, and the grant award amounts for fiscal years 1999-2003. 
In fiscal years 1999-2003, 12 federal grants, funded by five departments, provided and supported several essential services in American Samoa. DOI awarded grants that subsidized government operations, supported infrastructure improvements, and provided technical assistance. USDA awarded grants that provided nutrition assistance for which about half of the territory’s population was eligible. ED awarded grant funds that supported American Samoa’s education programs, including the special education program. DOT awarded grants for critical infrastructure improvements to the territory’s airports and roadways. Finally, HHS awarded grants to support health care and early childhood education in American Samoa. In fiscal years 1999-2003, DOI provided grants that supported government operations and infrastructure improvements in American Samoa. DOI provided, on average, about 16 percent of the American Samoa government’s total budget during the period of our review, through an annual direct subsidy as well as through grants for capital improvements and technical assistance. (See app. II for more details and an assessment of the DOI grants.) DOI provides the government operations grant as an annual direct subsidy to the American Samoa government to help fund the difference between the territory’s revenues and the cost of maintaining its current government programs and services. To promote the American Samoa government’s self-sufficiency, DOI has held the amount of the grant constant, without adjusting it for inflation or population growth. The grant supports general government operations, including public works, economic development, and salaries. Specific operations that the grant supports include American Samoa’s Department of Education; LBJ Hospital, the territory’s primary clinic and only hospital; and the High Court of American Samoa. In fiscal years 1999-2003, the American Samoa government received an average annual operations grant award of about $23 million. 
According to DOI officials and our analysis, the portion of the American Samoa government’s budget supported by the government operations grant decreased from about 18 percent in fiscal year 1999 to about 15 percent in fiscal year 2003. DOI’s capital improvement grants provide funds to improve the physical infrastructure of American Samoa and other U.S. insular areas. Capital improvement projects in American Samoa are prioritized and carried out according to the American Samoa government’s Capital Improvements Plan. In fiscal years 1999-2003, DOI provided an average annual award for capital improvement grants of $10.2 million to the American Samoa government. During this period, about 28 percent of the funds awarded to American Samoa were allotted for water and sewer improvements; 25 percent for school improvements, including new and renovated classrooms; 16 percent for improvements to the LBJ Hospital; and 4 percent for roads. LBJ Hospital was allotted about $1.5 million for each year during that period. DOI provided general technical assistance grants to all U.S. insular areas for short-term noncapital projects, such as obtaining computer hardware and software and providing training to improve the insular area’s capacity to conduct government operations. In fiscal years 1999-2003, DOI’s general technical assistance grants provided American Samoa an average of about $350,000 annually. Examples of DOI’s technical assistance included, in April 2001, a $200,000 grant to the American Samoa Port Authority to purchase and install a container tracking system for cargo entering and leaving American Samoa’s harbor of Pago Pago and, in April 2002, a $185,000 grant to the American Samoa government to purchase and install an upgraded immigrant tracking system. LBJ Hospital also received technical assistance grants. Three USDA programs made nutrition assistance available to about half of the American Samoan population during most of the period of our review. 
The School Lunch Program made free breakfast and lunch available to all school-age children. WIC provided nutrition assistance to pregnant, breast-feeding, and postpartum women and to infants and children up to 5 years of age. The Food Stamp Program in American Samoa provided nutrition assistance to the low-income elderly, the blind, and the disabled. (See app. III for a more detailed description and an assessment of the USDA grants.) USDA’s School Lunch Program is funded as a special block grant and operates under a memorandum of understanding (MOU) established specifically for American Samoa in 1991 and administered by the American Samoa Department of Education. Before 1991, the program in American Samoa followed the same requirements as in the rest of the United States, providing subsidized breakfast and lunch to children in public and nonprofit schools, based on the income level of the children’s households. Since 1991, the American Samoa School Lunch Program has provided free breakfast and lunch to all school-age children. Officials explained that the change in grant and program structure gave American Samoa greater flexibility to serve the needs of its children. In fiscal years 1999-2003, USDA provided an average annual grant of $9.8 million. In school year 2002-2003, the American Samoa Department of Education reported public and private school enrollment of about 19,000 students, all of whom are eligible for the program. In the same year, the School Lunch Program served about 3.2 million breakfasts and 3.6 million lunches. The program currently serves meals at 23 elementary schools, 6 high schools, 10 private schools, 55 early childhood education (Head Start) centers, and 37 day care centers. The program has no citizenship, residency, or income requirements. 
USDA’s WIC Program in American Samoa follows the same requirements as the program in the 50 states, providing supplemental food and nutrition education at no cost to eligible pregnant, breast-feeding, and postpartum women and to infants and children up to 5 years of age. The American Samoa WIC Program was established in 1996 and is administered by the American Samoa Department of Human and Social Services. In fiscal years 1999-2003, USDA provided an average annual grant of $5.3 million. During fiscal years 2000-2003, an average of about 6,000 recipients were receiving monthly WIC “food instruments,” or checks. Eligibility for benefits is determined on the basis of nutritional risk, income, and residency. USDA’s Food Stamp Program in American Samoa is designed specifically for the territory and operates under an MOU that allows American Samoa to provide food vouchers for the low-income elderly and for blind and disabled persons. Under the MOU, American Samoa is able to set its own eligibility standards as long as it stays within the capped block grant—in fiscal year 2003, about $5.4 million. In the 50 states, the Food Stamp Program is an entitlement program; all qualified applicants receive benefits, and funding is not capped. In American Samoa, Food Stamp recipients must meet financial and nonfinancial eligibility criteria, as specified in the MOU; however, benefits are calculated so as not to cumulatively exceed the capped grant. The maximum benefit in American Samoa for fiscal year 2004 was $132 per person per month. In fiscal years 1999-2003, USDA provided an average annual grant of $5.3 million. During fiscal years 2000-2003, the program served an average of about 2,800 recipients monthly. The program is one of the few remaining U.S. Food Stamp Programs that still uses paper food coupons; most of the other programs have implemented an electronic benefits transfer system to provide food assistance to eligible recipients. 
ED’s Innovative Programs grant provides a large share of funds to the American Samoa Department of Education to support its education programs, and ED’s Special Education grant funds the territory’s special education program. In fiscal year 2003, the two grants provided, respectively, about $16.8 million and $5.8 million. (See app. IV for a more detailed description and an assessment of the ED grants.) State and local education agencies are eligible for federal grants and funds to implement numerous federal education programs. In fiscal years 1999-2003, using a consolidated grant application, American Samoa applied for and received an Innovative Programs grant to fund many of the territory’s education programs. The Innovative Programs grant is designed to assist state and local education agencies in implementing education reform programs and improving student achievement. Funding under the grant can be used to implement local Innovative Programs, which may include at least 27 activities identified in the No Child Left Behind Act of 2001. For fiscal years 1999-2003, the American Samoa Department of Education reported that it implemented programs for training instructional staff, acquiring student materials, implementing technology, meeting the needs of students with limited English proficiency, and enhancing the learning ability of students who are low achievers. During the 5-year period, the annual Innovative Programs grant increased from about $6.8 million in fiscal year 1999 to about $16.8 million in fiscal year 2003. Beginning in 2002, the grant award to American Samoa more than doubled as a result of the No Child Left Behind Act of 2001, which increased appropriations for the Innovative Programs and other education programs. The grant award that the American Samoa government received in fiscal year 2003 provided about 40 percent of the American Samoa Department of Education’s budget for that year. 
Other federal funds provided another 30 percent of American Samoa’s education budget (including funds from the DOI Government Operations grant), with local funds contributing the remaining portion. In fiscal years 1999-2003, ED provided an average of $5.3 million, under its Individuals with Disabilities Education Act (IDEA) grants, for American Samoa’s Special Education Program. The program is required to provide a free, appropriate public education to eligible children with disabilities, regardless of nationality or citizenship. The Special Education Program in American Samoa operates under the same requirements and guidelines as special education programs in the 50 states and is almost entirely funded by its annual IDEA grant. The American Samoa Department of Education reported that, as of January 2004, its Special Education Program was providing services to slightly more than 1,100 eligible 3- to 21-year-old students with disabilities. DOT provided funds that allowed for important airport and roadway infrastructure improvements through the Airport Improvement Program and the Federal-aid Highway Program grants. (See app. V for more details and an assessment of the DOT grants.) In fiscal years 1999-2003, DOT, through the Federal Aviation Administration’s (FAA) Airport Improvement Program, provided American Samoa an average annual grant of $7.9 million. The program operates under the same regulations in American Samoa as in the rest of the United States. American Samoa has three airports, all of which receive Airport Improvement Program grants. The main airport, Pago Pago International, has two runways, one of which can accommodate large commercial jets, and has eight commercial airline flights departing per week. Since 1998, the Airport Improvement Program grants have been used for extending runways and constructing taxiways and for rehabilitation and new overlays of existing runways, taxiways, and shoulders. 
Projects funded with Airport Improvement Program grants also included the construction of a rescue and firefighting training facility, new aircraft rescue and firefighting vehicles, and perimeter fencing to improve airport security. Runway safety areas at Pago Pago International Airport, the territory’s main airport, were upgraded to meet FAA standards, providing additional margins of safety. These projects have benefited from the presence of an airport engineer, hired with funds from the Operations and Maintenance Improvement Program, a separate DOI grant. DOT’s Federal Highway Administration provided American Samoa an average annual grant of $6.2 million under the Federal-aid Highway Program during fiscal years 1999-2003. Although the territory’s highway subprograms are funded under a separate statute, the Federal Highway Administration administers them in the same manner as programs in the other states under the Federal-aid Highway Program, with the territorial transportation agency functioning as the state highway agency. American Samoa’s Five-Year Highway Division Master Plan sets forth sequenced budgets and time frames to improve and maintain Route 1, the island’s main traffic corridor. The American Samoa Department of Public Works typically handles the planning and construction supervision of the highway program. Figure 2 shows a map of American Samoa and selected highway projects that we reviewed along Route 1 and other village roads. HHS grants supported (1) health care at LBJ Hospital under the Medicaid program and (2) early childhood education for American Samoan children under the Head Start Program. (See app. VI for more details and assessments of each grant.) HHS’s Medicaid Program in American Samoa operates under a U.S. statutory waiver, which exempts it from most Medicaid laws and regulations; instead, it uses a plan of operations approved by HHS. A territorial statute requires American Samoa to provide free health care to its population. 
Virtually all care, both inpatient and outpatient, is provided by LBJ Hospital, which is managed by the LBJ Medical Center Authority. In fiscal years 1999-2003, HHS provided the hospital an average annual reimbursement of $3.4 million; in fiscal year 2003, federal Medicaid funds represented about 13 percent of the hospital’s revenues. American Samoa receives a capped amount for its Medicaid Program, like the other U.S. territories but unlike the states, where Medicaid is treated as an entitlement program with no cap on total federal funds. In American Samoa, the federal Medicaid grant is used as one of the hospital’s sources of revenue to support the territory’s universal health care system, rather than as support for a separate Medicaid Program with enrolled Medicaid beneficiaries as in the 50 states. Although there is no separate Medicaid enrollment in American Samoa, HHS requires the LBJ Medical Center Authority to submit an annual estimate of the population presumed to be eligible for Medicaid. This estimate of “presumed eligibility” is based on the size of the population in American Samoa and the percentage of families living below the U.S. poverty level, according to the U.S. Census. As the territory’s Medicaid provider, LBJ Hospital must provide all Medicaid-required services. If these services are not available on-island, American Samoa must arrange for them to be provided off-island. Although the Medicaid grant’s broadly stated goal is the provision of basic medical services, HHS officials do not require the hospital to supply data on its provision of such services. As a result, no data were available for us to determine the quality of the care or whether all required Medicaid services were provided to the eligible population. HHS officials stated that they have some assurance that a minimum standard of care is provided, because LBJ Hospital must meet Medicare certification standards to participate in Medicare and Medicaid. 
However, the hospital faces long-standing challenges in maintaining its Medicare certification (see app. VI). The Head Start Program in American Samoa, referred to locally as the Early Childhood Education Program, is part of the American Samoa Department of Education. The program in American Samoa is subject to the same performance requirements as Head Start Programs in the rest of the United States and delivers most required services, according to HHS officials. In fiscal years 1999-2003, HHS provided the Early Childhood Education Program an average annual grant of $2.7 million. The grant set the enrollment level at 1,532 slots for 3- to 5-year-old children. As of March 2004, the program had 54 classrooms and 111 classroom instructors, according to American Samoa officials. Early Childhood Education officials stated that although there are more eligible children than available slots, the program serves virtually all of the children who apply for it. Program highlights include dental screening and follow-up treatment for almost all enrolled children and a literacy program emphasizing both Samoan and English. The curriculum and materials are locally designed and incorporate native culture, community, and environment, as well as family traditions. Another key program activity is the construction of several new facilities dedicated exclusively to early childhood education classrooms. In fiscal years 1999-2003, HHS provided the program about $3.8 million in additional “program improvement” grant awards for the construction of seven new facilities containing 38 classrooms. Conditions in American Samoa limited the delivery of services or project completion for many of the grants we reviewed. A lack of adequately trained professionals limited financial oversight for all programs and service delivery in several programs. 
In addition, inadequate facilities affected the delivery of services under Head Start at Early Childhood Education Program centers and under Medicaid at LBJ Hospital. In particular, the LBJ Hospital building had persistent fire-safety deficiencies that jeopardized the hospital’s ability to maintain the certification required for continued Medicaid funding. Finally, limited local resources to complement federal grants slowed the completion of critical projects at LBJ Hospital and Pago Pago International Airport. Some of the programs that we reviewed experienced a shortage of staff with adequate professional training, which limited the financial oversight of federal funds and delivery of certain services. The relatively low salaries in American Samoa and the remote location of the territory made it difficult to attract and retain individuals with specialized training. Staff shortages included the following:
- In the American Samoa government, the position of Territorial Auditor remained unfilled in fiscal years 1998-2003. An official in the American Samoa Department of Treasury, the department that processes nearly all federal grants, reported that the department experiences difficulty in retaining certified public accountants, because the American Samoa government is unable to afford competitive salaries for these professionals.
- In the American Samoa Department of Education, most teachers had obtained only an associate in arts degree from the American Samoa Community College. Further, according to the Special Education Division Office, the program had only one physical therapist during the period of our review and needed speech pathologists, occupational therapists, audiologists, and psychologists. In addition, the local Head Start Program was unable to comply with the federal standard to deliver mental health services to enrolled children and families, because no mental health professionals were available in the territory to work with the program.
- In the American Samoa Department of Human and Social Services, the WIC and Food Stamp Programs lacked sufficient staff with technical skills to adequately maintain the databases on which the programs rely to record and process recipient transactions, reconcile transactions, and perform required monitoring and evaluation of issued benefits.
- LBJ Hospital officials reported that they did not have an adequate number of U.S.-certified medical doctors or registered nurses, despite incentive programs to attract them. The hospital also had unmet needs for medical technicians, such as radiology and operating room technicians. The hospital lacks the capacity to provide the full range of Medicaid-covered services, and consequently those services that are not available must be provided off-island. For fiscal years 2001-2003, the hospital reported an average off-island medical care expenditure of about $2 million annually.
Limited facilities hampered the ability of the Head Start and Medicaid Programs to deliver services to their targeted populations. Examples are as follows:
- While the Head Start Program in American Samoa made progress in constructing several new facilities to provide modern classrooms, the program continued to depend on villagers who made their homes available for Early Childhood Education classes. As of March 2004, 19 of the program’s 54 classes were held in village homes, according to the local program officials. The officials stated that their first priority for the use of supplemental federal Head Start grant funds was to continue to build additional classrooms but that, as a result, no funds were available to provide adequate playgrounds or perimeter security fencing.
- LBJ Hospital’s poor physical infrastructure made it difficult to deliver a minimum standard of care to the population of American Samoa, including the Medicaid-eligible population.
For more than a decade, the hospital suffered from persistent, serious fire-safety building code deficiencies that threatened its ability to maintain the Medicare certification required for participation in Medicare and Medicaid. In a Medicare-certification survey of the hospital conducted in November 2003, the survey team cited the hospital for a lack of “basic features of fire protection, which are fundamental to all health care facilities,” such as smoke and fire detection and alarm systems, automatic sprinklers, adequate water pressure, and fire-rated smoke and fire compartmentation. Earlier Medicare certification surveys cited many of the same problems, but the hospital has failed to correct them despite HHS’s threats, since at least 1993, to terminate the hospital’s certification. In 2004, in response to the fire-safety deficiencies identified in the 2003 Medicare-certification survey, the hospital reprogrammed $650,000 of its fiscal year 2003 DOI capital improvement funds to install a facilitywide sprinkler system. However, hospital officials said that the project would not be completed until December 2005 and that the renovation efforts would be constrained by “a fixed barrier of time, money and space.” Although the hospital depends primarily on DOI funds to bring its facility up to HHS standards, DOI and HHS did not collaborate during fiscal years 1999-2003 to identify construction needs and funding resources to ensure that common goals are met. Specifically, when awarding capital improvement grants to the American Samoa government and LBJ Hospital, DOI did not obtain information from HHS regarding deficiencies that threatened the hospital’s Medicare certification. Limited local resources also affected some of the programs in our review. LBJ Hospital’s ability to upgrade its facility and hire needed staff was severely hampered by chronic budget deficits and outstanding debt.
Likewise, the lack of local funds to complement Airport Improvement Program grants slowed the pace of completing critical projects, according to American Samoa officials. Examples of the effect of limited local resources on these programs include the following:
- LBJ Hospital officials reported that because of persistent operating budget deficits, they were unable to hire needed staff and respond to the many infrastructure needs of the hospital’s aging facility. DOI capital improvement grants, which average about $1.5 million annually for the hospital, support only one or two new construction projects per year. According to hospital officials, the hospital depends entirely on federal grant funds to support its infrastructure upgrades, including those needed to correct the fire-safety deficiencies cited by HHS hospital certification surveys. Two key sources of revenue for LBJ Hospital, from DOI and the American Samoa government, did not increase during the period of our review (see fig. 3). The hospital’s annual subsidy from the government of American Samoa dropped from about $8.1 million in fiscal year 1998 to about $5.3 million in fiscal year 2003. During the same period, DOI directly provided LBJ Hospital about $7.8 million of the government operations grant annually without adjusting this amount for inflation. Although the Medicaid grant increased over time to cover the cost of inflation, HHS officials reported that the cap on the Medicaid grant resulted in a smaller federal contribution than American Samoa would have received if funded like the 50 states. A hospital official reported that patient revenues increased during fiscal years 1998-2003 but that much greater increases would be needed if the hospital could not identify other sources of revenue. The LBJ Medical Center Authority has proposed to charge service fees to patients to cover about 20 percent of the cost of their medical care.
However, hospital officials believed that the local legislation needed to charge such fees would be difficult to obtain, because the public views free medical care as an entitlement. Currently, the hospital charges residents a facility fee of $5 per outpatient visit and $20 per day for inpatient stays. The hospital charges nonresidents $10 for outpatient visits and $100 per day for inpatient stays.
- American Samoa airport officials reported that they lacked the local resources to complement FAA’s Airport Improvement Program funds, which slowed the pace of critical airport infrastructure projects. For example, the airports had not acquired all of the rescue vehicles they needed, and upgrades of the main runway at Pago Pago International had to be phased in over several years. In August 2003, following damage to a commercial airplane from loose asphalt on the runway, the airport’s main runway shut down for 2 weeks. The closure left American Samoa cut off from commercial flights to Honolulu until the pavement could be repaired. According to FAA and American Samoa airport officials, a great deal of progress was made in improving Pago Pago International Airport’s infrastructure and rescue response capability during the past several years; however, it will probably not reach an acceptable standard until 2007. For most U.S. airports, including those in American Samoa, a passenger facility charge of up to $4.50 per passenger provides a key source of revenue. However, because only eight flights per week depart from Pago Pago International, the airport generates relatively little revenue and operates at a loss annually. Congress raised the cap on passenger facility charges from $3.00 to $4.50 in fiscal year 2000 in FAA’s reauthorization legislation but elected not to raise it again in legislation reauthorizing FAA for fiscal years 2004-2007.
A lack of required single audits, U.S.
agencies’ slow reactions to the lack of single audits, and incidents of theft and fraud compromised the accountability of federal grants to American Samoa. The American Samoa government did not comply with the Single Audit Act during fiscal years 1998-2003. The delinquent single audit reports issued for fiscal years 1998-2001 cited governmentwide and program-specific accountability problems. However, most federal agencies responsible for programs in American Samoa did not formally express concern about the delinquent single audit reports and were slow, or failed, to set forth a plan of action to complete single audits. In addition, two grants had instances of theft and fraud, and the accountability of almost all of the grants was potentially compromised by fraud in the American Samoa Government’s Office of Procurement. The American Samoa government did not complete single audits for fiscal years 1998-2003 in accordance with the time frame specified in the Single Audit Act. As a result, U.S. agencies had limited knowledge of American Samoa’s accountability for federal funds received during the period of our review. Specifically, they were unaware of whether grantees complied with the Davis-Bacon Act and with requirements for financial reporting and retention of and access to financial records, among other requirements. Federal agencies are responsible for ensuring that grant recipients subject to the Single Audit Act complete single audits no later than 9 months after the end of each fiscal year. An August 2002 MOA between DOI and the American Samoa government established a schedule for completing overdue single audits; however, American Samoa failed to comply with the schedule. The single audit reports for fiscal years 1998, 1999, and 2000 were completed by the auditors in August 2003. Relative to the deadlines in the MOA, the 1998 and 1999 reports were 8 months late, and the 2000 report was 3 months late.
The auditors completed the 2001 single audit report in June 2004, 12 months late. The single audit reports for fiscal years 1998-2001 cited pervasive governmentwide and program-specific accountability problems. For the 1998, 1999, and 2000 single audits, the auditors did not express an opinion on the financial statements of the American Samoa government because the scope of their work did not enable them to do so. However, in the single audit report for fiscal year 2001, the auditor expressed a qualified opinion regarding American Samoa’s financial statements. According to the report, the qualified opinion was issued because the limitations on the scope of the audit resulted in the auditor’s inability to locate or verify physical inventory records, verify the accuracy of the beginning balance of the government’s general funds, and verify the physical existence and cost of recorded fixed assets, among other items. These opinions are similar to those in American Samoa’s single audits for fiscal years 1996 and 1997, indicating that federal and American Samoa officials did not resolve issues identified in prior single audit reports, as required. The reports for fiscal years 1998-2001 cited an average of 31 governmentwide and program-specific findings for each fiscal year. For example, each audit found that the American Samoa government and its entities did not maintain adequate systems of internal controls to ensure compliance with laws, regulations, contracts, and grants applicable to federal programs. The auditors reported that the American Samoa government did not comply with major federal program requirements for, among other items, financial reporting, grant payment, and retention of and access to records. The audits stated that these problems could adversely affect the American Samoan government’s ability to administer federal grant programs in accordance with applicable requirements. 
The single audits for fiscal years 1998-2001 also reported program-specific findings each year for at least 6 of the 12 programs we reviewed. For example, the auditors reported that in fiscal year 2000, DOI’s capital improvement funds for constructing toilet facilities were used to purchase computers. The 2000 report also stated that ED contract documents for $39,960 were missing. According to auditors, a number of program files were incomplete and many programs’ transactions were difficult to assess because the American Samoa government maintained its records in a haphazard and open manner. In spite of document retention issues, the auditors reported about $1.3 million in questioned costs and a total of about $18 million in budget overruns from their sampling of approximately $295 million in transactions funded by federal grants in fiscal years 1998-2001. In our sample review of 12 selected grant transactions, we found that 7 of these had inadequate supporting documentation and insufficiently detailed data to show whether program expenditures were allowable. Of 12 transaction files that we requested from the American Samoa Department of the Treasury, 3 could not be located; 4 lacked purchase orders, invoices, receiving reports, or pricing estimates; and 2—from the Food Stamp and Head Start Programs—were complete. According to an American Samoa government official, grant transaction files should contain a purchase order or request; an invoice; a pricing estimate (if applicable); a copy of a receiving report, indicating that a purchased item was received, or a copy of the check issued for payment; and an accounts payable voucher. (See app. VII for a detailed description of federal grant processing in American Samoa.) In spite of the lack of single audits in fiscal years 1998-2003, most federal agencies were slow to act. For example, DOI did not set forth a plan of action to complete single audits until 2002 and ED did not take remedial action until 2003.
To administer and control grant programs, officials of entities such as federal and American Samoa agencies must have relevant, reliable, and timely communications relating to internal and external events. DOI, the cognizant agency for American Samoa, established a schedule for completing the delinquent single audit reports in an MOA with the American Samoa government in August 2002, following several months of discussion. The MOA established a new completion schedule for the delinquent single audits, among other fiscal and operational reforms for the territory. Figure 4 provides a time line showing the single audits and federal actions, including OMB’s regulation deadlines for the reports, the MOA’s extended deadlines, the dates when American Samoa’s reports were completed, and the number of months that the reports were late. ED reported that it sent a letter in March 2002 to the then Governor of the territory expressing concern about the late single audits and advising that the department is authorized to take various administrative actions, including interrupting grant funding. ED’s Inspector General subsequently visited American Samoa and alerted its Deputy Secretary in December 2002 that inspectors had found instances of fraud, waste, and abuse that might have been detected and prevented if single audit reports had been completed and submitted on time. The memo from the Inspector General also indicated a need for ED to develop a coordinated strategy for obtaining the required single audits. USDA officials cited the lack of single audits in their 2003 on-site review. HHS noted the delinquency of single audit reports in on-site program reviews in 2000 and 2003; DOT reported that the last American Samoa single audit it had received was for fiscal year 1996.
According to OMB Circular A-133, which implements the Single Audit Act, if a grantee has specifically failed to conduct its single audit reports, federal agencies should impose sanctions such as, but not limited to, (1) withholding a percentage of federal awards until single audits are completed satisfactorily, (2) withholding or disallowing overhead costs, (3) suspending federal awards until the single audit is conducted, or (4) terminating the federal award. None of the agencies in our review imposed any of these sanctions on American Samoa. According to the Grants Management Common Rule, federal awarding agencies may designate a grantee “high risk” if the grantee has a history of unsatisfactory performance, is not financially stable, has an inadequate management system, has not conformed to terms and conditions of previous awards, or is otherwise irresponsible. Single audits provide key information about the adequacy of a grantee’s management system. Federal agencies that designate a grantee high-risk may impose special conditions including (1) issuing funds on a reimbursement basis; (2) withholding authority to proceed to the next phase until receipt of evidence of acceptable performance within a given funding period; (3) requiring additional, more detailed financial reports; (4) requiring the grantee to obtain technical or management assistance; or (5) establishing additional prior approvals. According to DOI and DOT, they have required some similar conditions for American Samoa for years. For example, both agencies issue funds to American Samoa on a reimbursement basis. However, only ED exercised its authority under the common rule, when, in September 2003, it placed American Samoa on high-risk status as a result of American Samoa’s noncompliance with the Single Audit Act. ED now allows American Samoa to draw down only 50 percent of its grant funds until certain conditions defined by the department are fulfilled. 
Other agencies included in our review took none of the corrective actions available, under the common rule or under the OMB circular, as a result of the delinquent single audits. Specifically, although American Samoa did not comply with the agreed-on schedule for completing the outstanding single audits, the departments included in our review neither placed American Samoa on high-risk status nor withheld, disallowed, suspended, or terminated funds under any of their grants. Recent instances of theft and fraud by American Samoa government officials call into question accountability for most of the grants that we reviewed. Examples of theft or fraud are as follows:
- In May 2004, the Chief Procurement Officer of the American Samoa Government was found guilty of illegal procurement practices. Since this office handles the procurement activity for most of the grants that we reviewed, the accountability of the grant funds may be compromised.
- In the American Samoa Department of Education, the Director of the School Lunch Program pled guilty in July 2004 to charges of stealing approximately $68,000 worth of food and goods from the School Lunch Program warehouse between October 2001 and September 2003. The former School Lunch Program Director was also charged with conspiring with others to commit offenses against the United States. The current School Lunch Director said that, while most of the employees involved in the theft had been removed, one warehouse employee remains.
- In August 2004, the U.S. Department of Justice filed charges against the former deputy director of the American Samoa Department of Human and Social Services (the department that operates the WIC and Food Stamp Programs) for conspiring to rig bids for contracts totaling more than $120,000 in exchange for cash kickbacks.
- During the September 2003 USDA review of WIC in American Samoa, USDA officials were alerted to vendor fraud.
The review found widespread evidence of WIC food checks being exchanged for cash, cigarettes, other nonfood items, and unauthorized foods at WIC-authorized grocery stores instead of for the supplemental foods prescribed by WIC and paid for with federal funds. USDA officials informed the American Samoa WIC Program that it must comply with corrective action or face fiscal sanctions. As USDA became aware of problems with theft and fraud, it took action to increase oversight of those programs. Additional accountability problems have been alleged. For example, the local press has published numerous accounts of ongoing federal investigations. The American Samoa Fono has conducted hearings and investigations of accountability problems in the territory’s government. Finally, the recently hired American Samoa Comptroller, at work since March 2004, resigned as of August 2004, citing concerns over fraudulent and unethical American Samoa government practices. In fiscal years 1999-2003, federal grants from multiple agencies provided critical funds for essential human services and critical infrastructure improvements in American Samoa. However, the American Samoa government faced a range of local challenges to delivering services and completing infrastructure projects funded with federal grants. These challenges included a shortage of adequately trained professionals, such as accountants and teachers, as well as inadequate facilities and limited local funds. In particular, LBJ Hospital, which provides medical care for most of American Samoa’s population, received multiple federal grants but struggled to overcome challenges posed by an inadequate facility and limited resources. Specifically, although it receives DOI construction grants for facility upgrades, the hospital struggled to meet HHS fire-safety standards for continued Medicare certification required for Medicaid funding.
Nevertheless, in recent years federal departments, principally DOI and HHS, have not formally collaborated on the use of DOI construction grants at the hospital. In overseeing the hospital’s use of capital improvement grants, DOI could benefit from information that HHS could provide regarding the hospital’s ongoing efforts to maintain Medicare certification. In addition, in fiscal years 1998-2003, the American Samoa government failed to comply with the Single Audit Act, demonstrating a lack of overall accountability for federal grants. Federal agencies are responsible for ensuring that grant recipients subject to the Single Audit Act complete single audits no later than 9 months after the end of each fiscal year, yet when American Samoa failed to complete the audits, the agencies either failed to act or acted slowly to designate the American Samoa government a high-risk grantee. The agencies had no consistent response. Further, incidents of theft and fraud should have heightened federal agencies’ concerns about enforcing the requirements of the Single Audit Act and the Grants Management Common Rule. The lack of federal action indicates a need for greater monitoring and reporting and a need for improved coordination among agencies to ensure the accountability of federal grants awarded to American Samoa. We recommend that the Secretary of the Interior take the following four actions: To ensure resolution of fire-safety deficiencies threatening the continued certification of the Lyndon Baines Johnson Tropical Medical Center in American Samoa and, as warranted, to address the hospital’s staffing and resource constraints, we recommend that the Secretary coordinate with federal agencies that grant funds to the hospital and the American Samoa government to address these issues. 
To improve fiscal accountability of federal grants to American Samoa, we recommend that the Secretary coordinate with other federal awarding agencies to designate the American Samoa government as a high-risk grantee, according to the Grants Management Common Rule, at least until it has completed all overdue single audits; take steps designed to ensure that the American Samoa government completes its overdue single audits in compliance with the Single Audit Act; and take steps designed to ensure that current and future single audits are completed in compliance with Single Audit Act requirements. We provided a draft of this report to the Departments of the Interior, Agriculture, Education, Transportation, and Health and Human Services as well as to the government of American Samoa. We received oral comments from the Departments of Agriculture and Transportation on October 22 and Education on October 25, 2004. The Departments of Agriculture and Transportation limited their oral comments to technical corrections. The Department of Education agreed with our initial recommendations and provided technical corrections. We received written comments from the Departments of the Interior and Health and Human Services as well as the American Samoa government, which are reprinted in appendixes VIII through X. The Departments of the Interior, Health and Human Services, and Education, as well as American Samoa, agreed with our first recommendation. DOI stated that it would take appropriate action with other federal agencies to address issues that affect LBJ Hospital’s certification. HHS agreed to collaborate with DOI and American Samoa on hospital infrastructure issues. The American Samoa government pointed out that it is making progress in bringing LBJ Hospital into compliance with Medicare standards. The Departments of the Interior and Health and Human Services and American Samoa disagreed with our second recommendation, and the Department of Education agreed with us. 
DOI raised serious concerns about declaring American Samoa a high-risk grantee but agreed to consult with the other federal agencies to evaluate whether, or under what conditions, a joint declaration of high-risk status would be prudent. DOI’s concerns about imposing high-risk status for American Samoa included the possible loss of access to federal programs for American Samoa and the possible impact of such an action on the American Samoan population and eventually on other insular areas. Losing access to such programs would further limit the funds available to American Samoa to address its staffing and resource problems. Furthermore, DOI argued that many of the measures available with a high-risk declaration are already being taken by DOI in American Samoa. HHS stated that American Samoa should not be designated a high-risk grantee with respect to the Medicaid Program. In our view, the findings of the audits of LBJ Hospital raise concerns about accountability at the hospital. The American Samoa government strongly recommended against its being declared a high-risk grantee unless it fails to meet the terms of its agreement with DOI, because it believed high-risk status would imperil future funding. As we report on pages 28-29, the American Samoa government has already failed to comply fully with the terms of the agreement with DOI. We recognize DOI’s concerns about the population of American Samoa and its dependence upon federal grants for key services. We also recognize the challenges that DOI faces in balancing its activities in any individual insular area with sensitivity to the effect of those activities on other insular areas and on insular area populations. However, a declaration of high-risk status would more accurately reflect the findings of the completed single audits, specifically, the auditors’ declining to express an opinion on the financial statements and citing numerous internal control problems. 
In addition, according to the relevant regulations, high-risk status does not require a suspension of funds. For example, ED declared American Samoa a high-risk grantee while continuing its funding to the territory and significantly improving its oversight of the funded programs. Under a coordinated high-risk designation, the federal agencies could impose a common set of improvement milestones for American Samoa to have the high-risk status removed. Under the current system, several agencies exercise different levels of heightened oversight, and only ED has declared American Samoa a high-risk grantee. We continue to believe that a coordinated, consistent approach to high-risk designation across the agencies would be more productive than the agencies’ current inconsistent approaches. The Departments of the Interior, Education, and Health and Human Services agreed to collaborate to ensure completion of outstanding and future single audits, as per the initial wording of our third and fourth recommendations. DOI agreed to consult with other agencies to determine other steps that might be taken to help American Samoa come into compliance more quickly. However, responding to the initial wording of our third and fourth recommendations that the agencies coordinate efforts to ensure compliance with the act, DOI stated that it is unable to ensure that a grantee will comply with the Single Audit Act. In light of DOI’s response to our initial recommendations, we are recommending that DOI coordinate with the other awarding agencies to take steps designed to ensure American Samoa’s compliance with the act. The American Samoa government cited its progress in completing the delinquent single audits. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. 
At that time, we will send copies of this report to interested Congressional Committees and to the Secretaries of the Departments of the Interior, Agriculture, Education, Transportation, and Health and Human Services as well as to the Governor of American Samoa. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at 202-512-4128 or [email protected] or Emil Friberg, Assistant Director, at 202-512-8990 or [email protected]. Staff acknowledgments are listed in appendix XI. To provide information for the Ranking Minority Member of the House Resources Committee and the U.S. Delegate from American Samoa, we (1) examined the uses of key federal grants to American Samoa, (2) identified local conditions that affected the grants, and (3) assessed accountability for the grants. To address these objectives, we first analyzed available information on total federal expenditures in American Samoa. We reviewed data from the U.S. Census Consolidated Federal Funds report and the American Samoa delegate’s Web site, which listed total expenditures to American Samoa in fiscal years 1995-2001 by federal department. We used these data to identify the federal departments that provided the largest grants over the 7-year period. We narrowed our scope to five federal departments—the U.S. Department of the Interior (DOI), the U.S. Department of Agriculture (USDA), the U.S. Department of Education (ED), the U.S. Department of Health and Human Services (HHS), and the U.S. Department of Transportation (DOT)—whose aggregate grant expenditures totaled more than 80 percent of the total grants to American Samoa in fiscal years 1995-2001. To determine that the data were sufficiently reliable for the purpose of sample selection, we corroborated the ranking from the U.S. 
Census Consolidated Funds Report data with data from the American Samoa delegate’s Web site. We found that despite discrepancies in the dollar amounts of the five departments’ grants shown by the two sources, the amounts are the same when aggregated for fiscal years 1995-2001. To obtain current and original data, we met with and requested grant award data from the five federal departments for fiscal years 1999 and 2003. Each department referred us to its agencies with grants or programs to American Samoa, and these agencies provided data for a total of 61 grants. From those data, we identified the largest granting agencies across the five federal departments and selected 12 key federal grants to review that were among the largest total grant awards when aggregated for fiscal years 1999-2003. These grants primarily covered areas of government operation, infrastructure, social programs (such as health and nutrition), and education. DOI’s grants for capital improvement projects and technical assistance were selected although they were smaller than some of the other large federal grants, because DOI was the largest federal grantor to American Samoa during the period of our review and because these two grants provided infrastructure assistance that helped meet funding requirements or served as support to help meet the requirements of other grants that we selected. We excluded loan grants that are not provided through local agency or government offices in American Samoa. We also excluded grants from the Departments of Justice, Commerce, and Labor and the Environmental Protection Agency because of the grants’ small size. Finally, we excluded grants from the Federal Emergency Management Agency because they do not provide ongoing support for government and related operations. 
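The selection procedure described above (aggregating award data by department and keeping the largest grantors until they cover at least 80 percent of total awards) can be sketched in a few lines of Python. The department totals below are invented placeholders for illustration, not the report's actual data.

```python
# Illustrative sketch of the sample-selection step: aggregate grant awards by
# department across fiscal years, then keep the largest grantors until their
# combined awards cover at least 80 percent of the total.
# All names and amounts are placeholder values, not the report's data.

awards = [  # (department, fiscal_year, amount in $ millions) -- placeholders
    ("DOI", 1999, 30.0), ("DOI", 2003, 35.0),
    ("HHS", 1999, 20.0), ("HHS", 2003, 25.0),
    ("ED", 1999, 15.0), ("USDA", 2003, 10.0),
    ("DOT", 1999, 8.0), ("Other", 2003, 5.0),
]

# Aggregate awards by department over the period.
totals = {}
for dept, _, amount in awards:
    totals[dept] = totals.get(dept, 0.0) + amount

# Select the largest grantors until they cover at least 80% of total awards.
grand_total = sum(totals.values())
selected, covered = [], 0.0
for dept, amount in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    if covered / grand_total >= 0.80:
        break
    selected.append(dept)
    covered += amount

print(selected)  # the largest grantors covering at least 80% of total awards
```

With the placeholder figures, the sketch keeps the top three departments; with the actual award data, the same cutoff yielded the five departments named in the text.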
The scope of our report was limited to the information that we collected from the five departments and specific agencies that administer the grant funds; we cannot make statements about grants that we did not review. However, based on our analysis of data for fiscal years 1999-2003, the aggregated grant totals from the departments that we did not review were smaller, in most cases, than the largest single grants we selected. To corroborate the data for federal funds to American Samoa, we compared agency data with data in the single audit reports for fiscal years 1998-2001 and found that of the grants that we had selected, only the general technical assistance grant was not included in the single audit reports. However, we used the single audit data only to compare grant data from the federal agencies with total federal grant expenditures in American Samoa. We estimated that the selected grants represented about 70 percent of all federal expenditures in American Samoa in fiscal year 2000. To examine the uses of key federal grants to American Samoa, we collected and reviewed grant data from the federal and local agencies responsible for overseeing the selected programs in fiscal years 1999-2003; interviewed federal and American Samoa program officials to obtain knowledge of program activity and operations; conducted site visits to observe programs and projects funded by federal grants; and compared data in single audit reports for fiscal years 1998-2001 with agency data for selected grants and background on total federal grants reported by the American Samoa government. Single audit reports for years after fiscal year 2001 were not available during the time of our review. To report grant awards to American Samoa in fiscal years 1999-2003, we relied on grant data provided by federal agencies. 
Although we did not audit the grant data from the federal officials and are not expressing an opinion on them, we discussed the sources and limitations of the data with the appropriate officials and addressed discrepancies before reporting grant totals. We determined that the federal agency data were sufficiently reliable for the purposes of reporting grant award totals and the general use of grant funds and, to the extent possible, we corroborated these data with other information sources, including federal department (headquarters) data, single audit reports, and U.S. Census data. To describe the activities that grant funds supported, we relied on information from federal and American Samoa officials overseeing or administering the grants. We corroborated information from American Samoa officials with the information we received from federal officials. For example, we used participation rates in fiscal years 2000-2003 for the American Samoa Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) and the Food Stamp Program and the total number of children enrolled during the 2000-2003 school years to estimate the percentage of the population for which nutrition assistance was made available during those years. These estimates are approximations. Although the participant populations may occasionally overlap (e.g., a WIC recipient might also have received free school lunches), the distinct target populations in American Samoa would not allow enough overlap to greatly affect our estimates. To identify local conditions that affected the uses of the selected grants, we interviewed federal and American Samoa officials, reviewed program documents, and made observations in American Samoa in March 2004. 
Specifically, we looked at the availability of professional staff to administer grant services or projects, the adequacy of facilities to deliver services, and the availability of funds to deliver services or complete projects as specified by program officials or supporting documents for the 12 key grants that we reviewed. To assess accountability for the grants, we identified requirements in the legislation, regulations, or other relevant documents; reviewed monitoring reports and financial audits conducted by federal agencies; reviewed the single audit reports for fiscal years 1998-2001; conducted federal agency interviews and on-site observations; discussed accountability issues with federal and local officials; and reviewed GAO reports on selected grants and programs for findings relating to accountability issues. To further assess accountability, we randomly selected transaction data from the American Samoa Department of Treasury, the Lyndon Baines Johnson Tropical Medical Center (LBJ Hospital), and the Territorial Office of Fiscal Reform—the three American Samoa departments responsible for accounting for the 12 grants we selected. We based our selection of transactions on seven “object codes” (e.g., expenditure categories for personnel, supplies, contractual services, travel, other expenses, office equipment, and indirect costs) assigned by the Department of Treasury. To determine the reliability of the single audit data, we interviewed the external auditors who completed the single audit reports for American Samoa and confirmed that the auditors had received a peer review. We consulted with financial accountants in GAO regarding the single audit reports. We determined that the single audit data were sufficiently reliable for reporting on American Samoa governmentwide accountability and citing specific audit findings for the selected grants. We relied on federal monitoring reports to assess other accountability issues for our selected programs. 
We confirmed the opinions or report findings with federal officials. We determined that these data were sufficiently reliable for the purpose of assessing the overall and specific accountability of federal funds. To evaluate the performance of the selected grants, we determined whether the grants had specific program goals or performance standards that federal and American Samoa officials used for evaluation; collected and reviewed agency performance and monitoring reports; reviewed GAO reports; and consulted with GAO experts and methodologists on the selected grants. On the basis of the evaluative criteria provided by federal officials overseeing the selected programs, we concluded that most agencies evaluated the grants based on program or service delivery or whether projects funded by grants were completed. We relied, for the most part, on federal agency reviews and found them to be sufficiently reliable for our purposes of describing whether and how federal and American Samoa officials evaluated performance of the 12 key grants. Our findings are detailed in appendixes II through VI. We performed our work from September 2003 through October 2004 in accordance with generally accepted government auditing standards. Since fiscal year 1952, the U.S. Department of the Interior (DOI) has provided the government operations grant to American Samoa as directed assistance, earmarked through the federal budget process and appearing in federal appropriations tables as a line item. The grant is divided among the American Samoa government, the Lyndon Baines Johnson Tropical Medical Center (LBJ Hospital), and the High Court of American Samoa. According to DOI, the annual grant to the American Samoa government is the only regular general operating subsidy that DOI provides to an insular area government in the form of a grant and is intended to supplement, but not substitute for, local revenues and is also intended to promote self-sufficiency. 
The portion of the grant allocated to LBJ Hospital is stated in the grant award documents. The portion of the grant allocated to the High Court of American Samoa is included in the budget justifications. The government operations grant totals almost $23 million each year (see table 2 for details). Since 1998, DOI has specified that nearly $7.8 million of the grant be allotted to the budget of LBJ Hospital. Since 1952, a portion of the grant has been allotted directly to the budget of the High Court. The use of these funds is not restricted to U.S. nationals or citizens by law or regulations. The government operations grant supports the operations of the American Samoa government, LBJ Hospital, and the High Court. In each instance, the money is deposited directly to the recipient’s accounts and becomes part of the recipient’s funding stream, losing its separate identity. The grant funds are drawn down from U.S. Treasury accounts in monthly allotments. During fiscal years 1999-2003, once the funds were drawn down, they were deposited in the American Samoa government accounts. The grant is allocated as follows. Basic government operations. According to the American Samoa government annual budget for 2003, the funds allocated for basic government operations were to be spent as follows: $7.4 million to the American Samoa Department of Education, $2.7 million to the Department of Public Works, $1.4 million each to the Department of Public Safety and the American Samoa Community College, $866,500 to the Department of Legal Affairs, and $750,000 to the Port Administration. In fiscal year 2003, the grant’s $14.5 million provided 6.5 percent of the American Samoa government’s total budget. LBJ Hospital. The portion of the grant designated for LBJ Hospital enters the hospital’s budget as a revenue source, whereupon its specific uses cannot be traced. In fiscal year 2003, the $7.7 million represented about 26 percent of LBJ Hospital’s $29.3 million revenue. High Court. 
According to DOI and American Samoa budget documents, the grant provides all of the High Court’s budget. The primary goal of the government operations grant is to provide financial assistance to help ensure that the American Samoa government is providing adequate government systems and services. DOI’s secondary goal for this grant is to promote self-sufficiency for American Samoa. According to DOI, over the years American Samoa has assumed an increasing percentage of the total costs of government operations. According to DOI, since the mid-1990s, the agency’s policy has been to maintain the grant at a constant level, requiring American Samoa to absorb costs associated with inflation and population growth and thereby encouraging the territory’s self-sufficiency. According to DOI officials, the single audit is a major source of accountability for the portion of the grant provided to the American Samoa government. LBJ Hospital is to conduct its own audit annually. Both the American Samoa government and LBJ Hospital are also supposed to provide financial and cash transaction reports as they use the DOI grant. According to DOI, providing the government operations grant to American Samoa is consistent with the agency’s goals of serving communities by providing financial assistance to help ensure that governments provide adequate systems and services and encouraging self-sufficiency. Budget data show, and DOI confirms, that over the years American Samoa has generally assumed an increasing portion of the total costs of government operations. However, assessing the American Samoa government’s progress toward self-sufficiency is difficult because of the lack of verifiable expenditure data. Because the grant is a direct subsidy to the American Samoa government, the grant’s performance in encouraging self-sufficiency must be evaluated in light of accurate revenue and expenditure information, which single audits should provide. 
However, because of American Samoa’s failure to comply with the Single Audit Act, audited financial statements do not exist for years after fiscal year 2001, and DOI has no verifiable information on American Samoa’s actual revenues and expenditures other than the financial and cash transaction reports sent to DOI by the American Samoa government. Therefore, it is difficult to determine the extent to which the American Samoa government is moving toward self-sufficiency. American Samoa government budget data show that DOI’s contribution to the government’s budget decreased from about 18 percent in fiscal year 1999 to about 15 percent in fiscal year 2003. According to DOI officials and American Samoa’s Department of Treasury, local revenues accounted for about 60 percent of all government revenue for fiscal year 2003, an increase of about 5 percentage points since fiscal year 1999. Because the American Samoa government did not complete single audits for fiscal years 1998-2003 within the time frame specified in the Single Audit Act, overall accountability for the government operations grant was limited. DOI officials asserted that the unique nature of the grant—that is, as a subsidy to the American Samoa government—implies limited accountability and that Congress designed the grant as such. Except for standard grant reporting requirements, the government operations grant is entirely dependent on the single audits for assurance of accountability. In the single audits of the American Samoa government for fiscal years 1998-2001, the auditors stated no opinion about the reliability of the financial statements or the allowability of claimed costs. They found significant failures in the internal control structure. The single audits for fiscal years 2002-2003 remain uncompleted. Accountability for LBJ Hospital is likewise limited. Independent audits of the LBJ Medical Center Authority for fiscal years 1998-2001 found significant problems with the LBJ Hospital accounts. 
For the relevant years, LBJ Hospital declined to present the auditor a statement of cash flows, summarizing its operating, investing, and financing activities as required by generally accepted accounting principles. Because of this and other matters, the auditor was unable to express an opinion on the financial statements printed in the audit. In reviewing compliance with internal controls, the auditors found instances of noncompliance as well as several reportable conditions and material weaknesses. Audits of later years were not available as of November 2004. Capital improvement grants to American Samoa are among the covenant grants authorized by the 1976 Covenant to Establish a Commonwealth of the Northern Mariana Islands. As such, they are mandatory, subject to annual appropriations. Although a specific amount of covenant grants is reserved for the Northern Mariana Islands, capital improvement grants are provided for all other territories, including American Samoa. DOI’s budget justifications list the intended recipient territory and the projects to be funded each year. Before 1996, American Samoa received an annual discretionary grant for capital improvement needs. These grants averaged approximately $5 million annually and came from the Assistance to the Territories appropriation. According to DOI officials, during that time period, American Samoa fell further behind the infrastructure needs of its rapidly growing population. As a consequence, according to DOI, the people of the territory faced increasing hardship and risk with regard to basic needs such as drinking water, medical services, and education. In fiscal year 1996, Congress enacted legislation directing that some of the mandatory covenant funds be used to pay for critical infrastructure in American Samoa. The legislation also required the Secretary of the Interior to develop a multiyear capital plan with American Samoa and to update it annually. 
DOI and the American Samoa government together developed the Capital Improvements Plan, which established the following priorities for capital improvement projects: First-order priorities include health, safety, education, and utilities. Second-order priorities include ports and roads. Third-order priorities include industry, shoreline protection, parks and recreation facilities, and other government facilities. DOI awards capital improvement grants on the basis of a ranked list of proposed projects submitted by the American Samoa government based on the plan. Independent American Samoa authorities also received capital improvement grants. In fiscal years 1999-2003, American Samoa was awarded $50.8 million for capital improvements, an average amount of $10.2 million annually. According to DOI, the use of these funds is not restricted to U.S. nationals or citizens, and construction projects are not limited to U.S. companies by law or regulation. Table 3 shows the annual grant award. In fiscal year 2005, DOI will implement a new competitive allocation system for the $27.72 million in mandatory covenant grants. Of the $50.8 million in capital improvement projects awarded to American Samoa in fiscal years 1999-2003, the American Samoa Power Authority received about $14 million; the American Samoa Department of Education received about $12.6 million; health care services, including LBJ Hospital, received about $8.3 million; the Department of Port Administration received about $4.6 million; and the Department of Public Works received about $1.8 million for village road construction. An operations and maintenance fund receives 5 percent of each capital improvement grant, accruing about $2.7 million in fiscal years 1999-2003. (See fig. 5 for percentages.) Other recipients of capital improvement grants include the American Samoa Community College, the Department of Public Safety, and a fuel storage facility for rehabilitation, among others. 
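As a rough cross-check of the allocation figures above, the following sketch computes each listed recipient's share of the $50.8 million total. The dollar amounts are taken from the text; the percentage shares are derived here and are not stated in the report.

```python
# Rough cross-check of the capital improvement allocations reported above.
# Dollar figures come from the text; the percentage shares are derived here.

allocations = {  # $ millions, fiscal years 1999-2003
    "American Samoa Power Authority": 14.0,
    "Department of Education": 12.6,
    "Health care services (incl. LBJ Hospital)": 8.3,
    "Department of Port Administration": 4.6,
    "Department of Public Works (village roads)": 1.8,
    "Operations and maintenance fund": 2.7,
}
total_awarded = 50.8  # $ millions awarded in fiscal years 1999-2003

for recipient, amount in allocations.items():
    print(f"{recipient}: {amount / total_awarded:.1%}")

# Remaining awards went to other recipients such as the American Samoa
# Community College and the Department of Public Safety.
remainder = total_awarded - sum(allocations.values())
print(f"Other recipients: about ${remainder:.1f} million")
```

The Department of Education's computed share, about 25 percent of the total, is consistent with the figure given later in the text.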
Although the American Samoa government compiles the list and awards grants with DOI approval, many American Samoa agencies either manage their own projects or arrange for another agency to manage them. Both the American Samoa Power Authority and LBJ Hospital use their own contract management to control grant funds and obtain desired services. Also, the American Samoa Departments of Education and Port Administration use the Territorial Office of Fiscal Reform to oversee and manage their capital improvement grants. According to agency officials, the American Samoa agencies have established separate contract management systems because the regular American Samoa Treasury administrative process for project design, contracting, construction, and vendor payment is extremely slow. As a result, several American Samoa agencies have developed parallel payment systems. (See app. VII for a diagram showing this payment process.) The American Samoa Department of Education received about $2.5 million per year on average—approximately 25 percent of all capital improvement grants in fiscal years 1999-2003. According to American Samoa officials, the American Samoa Department of Education used its grants to construct almost 120 new rooms, including classrooms (see fig. 6), school offices, and science labs; purchase 16 new buses for $1 million; construct new toilet facilities at several schools and hire bathroom monitors at 21 schools to clean and guard the new toilets; renovate classrooms and office buildings by improving electrical systems with lights and fans, as well as installing new window screens, new doors and locks, and roofs; and provide new classroom furniture in many of the new and renovated buildings. LBJ Hospital, built in 1968, has used its $1.5 million average annual capital improvement grants to renovate its aging facility and obtain specific medical devices. Until 1999, few improvements had been made since the building’s construction. 
In fiscal years 1999-2003, the total of $7.4 million in capital improvement grants allowed the hospital to expand the existing hospital laboratory and renovate the old laboratory space (see fig. 7); construct an ear, nose, and throat clinic and public restrooms; purchase and install five dialysis machines; purchase and install a new medical records filing system; and replace hospital core area air-conditioning chillers. The Department of Public Works receives $361,000 annually to build village roads, which are not eligible for funds from the Federal Highway Administration’s programs. Village roads run from the main connector road into a population center or to a school. DOI reported that capital improvement projects in American Samoa are consistent with its goal of improving infrastructure in American Samoa. These grants are the only direct financial assistance for infrastructure in DOI’s budget. According to DOI officials, project completion is the main criterion for assessing performance of capital improvement grants. The agency does not have a staff engineer to conduct technical reviews of construction projects; instead, it has a standing agreement with the U.S. Army Corps of Engineers in Hawaii to conduct reviews on an “as needed basis.” Accountability arises from the inclusion of large projects in the single audits; on-site monitoring by federal officials, including the resident DOI representative; and financial reports. We selected and reviewed several completed projects constructed with capital improvement grant funds. According to DOI, the resident DOI representative visits projects as she determines necessary or when requested by DOI. About once each year, DOI officials from headquarters visit American Samoa, review project files, and inspect the projects. American Samoa Department of Education. 
We toured several recently constructed classroom buildings, which featured handicapped-accessible classrooms for about 30 students, furnished with new desk chairs, electric lights and ceiling fans, and sinks. We also visited renovated classroom buildings. Generally, these buildings had no peeling paint, and no plaster or drywall was falling from the walls. According to a principal at a newly built facility, a number of postconstruction problems remained unaddressed by the contractor or the Departments of Education and Public Works. These problems included failure to clean and restore playground areas to a safe standard for the returning children, office spaces built without provision for telephone lines, and improperly welded stair railings. We also toured several new and renovated toilet facilities on the school campuses. Generally, these toilets were clean and functional, although we found instances of blocked drains, tiles missing from walls, and disconnected power lines into a new building. LBJ Hospital. We visited the new lab facility, air-conditioned with new equipment and updated workstations, and the new ear, nose, and throat clinic, which also had air-conditioned facilities. We were also shown new wards with private rooms and oxygen piped to bedsides rather than provided in tanks as in the older wards. We saw many pieces of new equipment, including equipment for mammography, magnetic resonance imaging, sonograms, X-ray, and X-ray developing. We visited the new file room for maintaining medical records. In contrast, the older parts of the hospital had no air-conditioning and poor ceiling ventilation. The hospital has had persistent fire-safety problems, including flammable building materials and lack of sprinkler systems in older wards. During the period of our review, the flammable materials were being replaced as wards were renovated; however, sprinklers remained inadequate. 
Because the American Samoa government did not complete single audits for fiscal years 1998-2003 within the time frame specified in the Single Audit Act, overall accountability for the capital improvement grants was limited. According to DOI officials, the accountability of these grants is no less than for other federally funded construction grants to the states and local governments. However, in American Samoa’s single audits for fiscal years 1998-2001, which include the grants, the auditors disclaimed any opinion about the reliability of the territory’s financial statements, the allowability of claimed costs, and the effectiveness of internal controls. The single audits for fiscal years 2002 and 2003 remained uncompleted as of November 2004. The audits for LBJ Hospital for fiscal years 1998-2001 found significant problems with the hospital accounts. For fiscal years 1998-2000, LBJ Hospital declined to present a statement of cash flows summarizing the operating, investing, and financing activities as required by generally accepted accounting principles. As a result, the auditor was unable to express an opinion regarding the financial statements printed in the audit. For fiscal year 2001, the auditors found the hospital unable to locate supporting documents for its accounting records. The auditors expressed no opinion on the hospital’s financial statements for 2001. The auditors found several instances of noncompliance as well as several reportable conditions and material weaknesses in internal controls. Audits for fiscal years 2002-2003 were not available as of November 2004. Each year, Congress appropriates money for technical assistance grants in the territories. Significant portions of this appropriation have been used for specific projects, such as the Coral Reef Initiative; Brown Tree Snake Control, focused on Guam; Maintenance Assistance, also known as the Operations and Maintenance Improvement Program; and the Insular Management Control Initiative. 
The annual appropriation also provides for general technical assistance to support short-term, noncapital projects. General technical assistance is not designated for any specific purpose, unlike the other forms of technical assistance, and is not intended to supplant local funding of regular operating expenses. DOI allocates these funds as it deems appropriate through an application process. The number of grants funded annually varies. For example, in fiscal year 2001, general technical assistance funding of $665,600 (see table 4) comprised 10 separate grants, the largest of which was $200,000 for a container tracking system for the Port Administration. General technical assistance grants must be spent in the year that they are obligated; however, DOI sometimes provides another year of funding to a project with the understanding that funding for the following year will depend on the availability of funds. All territories and freely associated states may compete for general technical assistance grants. DOI staff assess whether each application adequately addresses the problems it cites. According to DOI officials, DOI helps the insular governments structure their grant applications to address applicants’ needs and capacity—for example, whether a requested computer system is sufficient and appropriate for the designated purpose. The 23 general technical assistance grants to American Samoa in fiscal years 1999-2003 totaled $1.75 million and included $7,790 for Medicare Coverage Training and $350,000 for computers for the American Samoa government. Several technical assistance grants, totaling about $390,000, were to be used to improve operations at LBJ Hospital. In April 2001, DOI granted the American Samoa Department of Port Administration $200,000 to purchase and install a container tracking system for cargo entering and leaving American Samoa’s harbor of Pago Pago.
The system was designed to maintain complete information about the status of all containers arriving in American Samoa and to improve the accuracy of the billing procedures for the containers. According to the pier superintendent, the system allows ships at sea to radio their container tracking numbers and contents to the port authority, allowing for better revenue collection and more timely handling of the containers. In May 2002, DOI granted the American Samoa government $185,000 to purchase and install an immigrant tracking system upgrade (see fig. 8). According to DOI documents, the new system maintains a database of visitors entering the territory and presents a daily list of those whose authorized stay has expired or is about to expire. The system also keeps a digital photograph of visitors’ passports. In 1999, DOI provided $285,000 and, later in 2001, $300,000 more to the Pacific Basin Development Council in Honolulu for organizing the American Samoa Economic Advisory Commission. The commission was chartered to make recommendations to the President through the Secretary of the Interior regarding the economic future of American Samoa and to analyze the history of, and prospects for, economic development in American Samoa. The commission was also to recommend policies, actions, and time frames to achieve a secure and self-sustaining economy for American Samoa. Finally, the commission was to comment on the related appropriate role of the federal government. In 2002, the commission issued a four-volume report that targeted four potential growth industries: fisheries, agriculture, and aquaculture; telecommunications and information technology; manufacturing; and tourism.
The report recommended creating (1) a public-private working group in American Samoa to define and set up a process, structure, and timetable and to manage and oversee the implementation of the plan explained in the report and (2) a federal-territorial task force to coordinate activities and resolve pressing and potential problems and conflicts by seeking workable solutions. An interim report from 2001 by the commission summarized its findings and cited skepticism within the American Samoan population about the federal government’s long history of commissioning studies that yielded no tangible or sustainable results. DOI officials told us that no one in the American Samoa government had taken responsibility for pursuing the commission’s recommendations. The commission included the then Lieutenant Governor, who became Governor of the territory in March 2003. According to DOI officials, the American Samoa government responded to these recommendations by promoting an e-commerce development corporation for which it had already requested DOI funds. No performance goals have been established for this program. According to the DOI official responsible for administering the program, DOI works to structure the general technical assistance grants according to the American Samoa government’s needs. However, according to DOI, once the grant is structured, the funds provided, and the training or project completed, DOI does not follow up to evaluate performance unless prompted by a complaint from the government or recipient.

The U.S. Department of Agriculture (USDA) provides grant funds for the American Samoa School Lunch Program. The purpose of the program in American Samoa is to provide nutrition assistance to residents of American Samoa, with priority given to school-age children. The current program is funded by a special block grant that operates according to a memorandum of understanding (MOU) and provides free breakfast and lunch to all school-age children.
From 1962 to 1991, the School Lunch Program in American Samoa followed the same regulations, policy, and procedures as the National School Lunch Program in the 50 states. In 1991, USDA converted the amount paid under the original program to the Child Nutrition block grant, which has been adjusted for inflation annually since the transition. According to the MOU, the governor of American Samoa is charged with administering the program in American Samoa. The American Samoa Department of Education has been designated as the grant coordinator. According to federal officials, this transition caused no break in program services to the children in American Samoa. Officials explained that the change in grant and program structure was intended to provide American Samoa with greater flexibility to serve the needs of its children. In addition, given American Samoa’s remoteness and unique needs, funding the program with the block grant allowed American Samoa to better meet those needs than would the national USDA child nutrition programs (National School Lunch Program, School Breakfast Program, State Administrative Expense Funds, and Nutrition Education and Training Program). Another reason cited for the change, according to federal officials, was that the management and oversight responsibilities for the traditional child nutrition programs in American Samoa were costly and severely disproportionate to the overall level of federal assistance provided to American Samoa; in contrast, the block grant reduced USDA’s oversight responsibilities and administrative investment. School Lunch Program grants to the American Samoa government are made on a federal fiscal year basis. 
Since fiscal year 1991, USDA’s Food and Nutrition Service (FNS) has provided grant funds on a quarterly basis, with each year’s grant contingent on the availability of funds and FNS’s approval of American Samoa’s fiscal year Plan of Operations and completion of the Drugfree Workplace Certification and Lobbying Certification. On August 15 of each year, American Samoa is required to submit a Plan of Operations to FNS that describes how funds will be used, the targeted population to be served, and how often food or other services will be made available to program recipients. The plan also must include a budget for program expenditures. Grants are calculated with a fiscal year 1989 grant calculation methodology that was amended in 1992 and includes a yearly inflation adjustment. After adjusting for base year funds, FNS adds funding for the Nutrition Education Training Program, as authorized by Section 19 of the Child Nutrition Act of 1966 (42 U.S.C. §1788). Funds that are obligated by FNS to American Samoa in a given fiscal year are available for obligation and expenditure by the School Lunch Program in the following fiscal year, or 2 years from the date of disbursement. Table 5 shows the grant award amount for fiscal years 1999-2003. The American Samoa School Lunch Program uses grant funds to provide free breakfast and lunch to children attending public or private schools and early education centers (see fig. 9). As of July 2004, the program was serving meals at 23 public elementary schools, 6 public high schools, 10 private schools, 55 early childhood education centers, and 37 day care centers. Although the School Lunch Program in American Samoa is not held to the same nutritional requirements as in the 50 states, the MOU requires that meals be nutritious and include a variety of foods. FNS encourages the use of foods native to the Samoan Islands, as well as other nutritious foods acceptable to the groups being served. 
FNS also encourages menu planning to keep fat, sugar, and salt at moderate levels and to keep the menu consistent with dietary guidelines published by USDA and the U.S. Department of Health and Human Services. According to FNS officials, the American Samoa School Lunch Program develops its own menu, and the nutritionist works with the schools’ cooks to ensure that the menu is being followed. FNS provides as much advice as possible on the development and nutrition quality of the meals. In addition to funding the delivery of meal services and program administration, the block grant includes funds earmarked specifically for nutrition education. The National School Lunch Program is not legislatively required to provide, and does not receive funding specifically for, nutrition education. However, training funds are included in the grant portion for nutrition education. The American Samoa School Lunch Program Director told us that he is committed to seeking training for his employees and that, following our fieldwork, several of his staff attended training in the continental United States. He reported that, in April 2004, he sent four employees to attend the USDA School Meals Initiative conference held in Phoenix, Arizona. This conference addressed areas of concern for school meals initiatives, with particular focus on the advancement of research and technology to improve services. The Director explained that his staff acquired updated knowledge of school food services techniques and methods for improving the American Samoa program. Three other employees received training in Sacramento, California, and visited the FNS offices in San Francisco. The Director reported that the staff returned with fresh enthusiasm about improving menu planning for nutritious student meals and assisting the field school food coordinators in improving their job performance. 
The American Samoa School Lunch Program does not have specific program goals, but language in the MOU states that in developing its Plan of Operations, the program should give priority consideration to the needs of its preschool and school-age children; meals should be appealing and nutritious; and the program should work toward serving meals that meet the current dietary guidelines for Americans, contain nutrients at Recommended Dietary Allowances, and conform to the Food Guide Pyramid. To assess accountability, the annual Plan of Operations requires the American Samoa government to identify program activities and administrative areas that it funds with the grant. The plan should identify the number of schools where services will be provided and estimate the number of students who will be served both breakfast and lunch. It should also provide details of administration expenses and nutrition education expenses. According to federal officials, there is no requirement that the American Samoa School Lunch Program “buy America” or that the American Samoa government hire U.S. citizens. Program and financial information is provided to federal officials annually and quarterly in a series of reports. FNS also relies on annual single audit reports to assess accountability for American Samoa School Lunch Program funds. In addition, according to USDA headquarters officials, FNS program and financial management staff are required to conduct program and financial reviews every 3 years to ensure that American Samoa is complying with the terms and conditions in the MOU. However, FNS program staff reported that although they would like to conduct reviews more frequently, cuts in the travel budget make this difficult. Because the American Samoa School Lunch Program is funded by a special block grant, FNS program officials have discretion in the criteria they use to evaluate and monitor the program. 
FNS further explained that funds allocated to American Samoa are much smaller than those allocated to mainland programs and that the agency focuses its limited resources where attention is needed most. FNS said that the programs in American Samoa and the Commonwealth of the Northern Mariana Islands were converted to block grants to enable FNS to save on administrative and oversight costs, among other reasons. FNS conducted program reviews in American Samoa in September 1998, September 2001, and January 2004, and it conducted financial management reviews in September 1997 and January 2004. The American Samoa School Lunch Program is meeting its purpose of delivering breakfast and lunch to schoolchildren. Federal program officials reported that they review meal service based on information that the American Samoa government submits in the FNS-required quarterly performance reports, which contain the number of meals served in that period of the grant. Federal officials evaluate the program on the basis of its effectiveness in delivering services, and they identify areas where American Samoa can improve management effectiveness and efficiency to achieve quality management practices. Following are some of the findings that the officials reported, based on FNS program reviews in September 2001 and January 2004:

FNS reported that the American Samoa School Lunch Program was doing a good job of using grant funds to feed children in schools and day care centers; however, FNS expressed concern about the maintenance of refrigeration equipment, health and sanitation, and the availability of fresh fruit and vegetables in the menus.

FNS reported that the American Samoa School Lunch Program staff had made significant improvements in program operations and administration under the new School Lunch Program Director.
These improvements followed charges against, and a guilty plea by, the former School Lunch Program Director for the mishandling and theft of department food supplies and materials. Regarding program delivery, FNS reported that the warehouse is the only area where staffing is short and that food collection for distribution to day care centers consumes considerable staff time. The American Samoa School Lunch Program includes meal service to day care centers. FNS reported its concern that supporting the day care centers may limit the administrative ability of program staff to provide food to all other schools. Since day care centers already receive $180 per month per child from the American Samoa Department of Human and Social Services under a grant from HHS, FNS is recommending that the American Samoa School Lunch Program (1) consider charging a small per-pound fee to help cover the administrative costs of delivering food to the centers and (2) develop a contract with each center to explain that the program contribution is only a subsidy for the center’s food needs. FNS reviewers reported that American Samoa School Lunch Program nutritionists have conducted workshops with the school cooks to help develop their skills and to improve the nutritional quality of the meals being served. Nutritionists have been working with the department to expand the use of fresh fruits and vegetables, particularly those that can be purchased locally, and have attended training for the School Meals Initiative for Healthy Children to improve the menus and track nutritional content. During our visit to American Samoa, we observed meals being served at one school and inspected the kitchens and cafeterias in seven schools. At one school we visited, pests were evident. When we addressed this with American Samoan officials, the American Samoa School Lunch Program Director said that they have had some problems with rodents and termites and have submitted a request for pest control.
In addition, four kitchens had equipment or maintenance problems, such as broken thermometers on refrigerators and freezers, holes in window or door screens, and leaking faucets. The American Samoa School Lunch Program has faced barriers to program delivery owing to recent natural disasters and program dependence on imported food supplies. In January 2004, Cyclone Heta struck the island, and for 1 week the program’s food service department had to provide food to a number of emergency shelters throughout the island. Although the Federal Emergency Management Agency reimbursed the program for the food costs, both staff and food resources were diverted from the program’s routine services, and the cyclone damaged at least one school cafeteria. The American Samoa School Lunch Program Director said that he does not want the program to be the sole source of disaster relief in any future emergencies. The nutrition education specialist said that the program’s reliance on food imports by boat and the lack of local food production also present barriers to the program’s delivery of services. Problems with the boat sometimes cause food shortages. Food shortages also occurred in 2004 because of the cyclone. Because the nutrition specialist prepares the menu based on what is available in the warehouse, shortages limit the menu options and the program’s ability to meet federal nutrition guidelines. Accountability in the American Samoa School Lunch Program was limited at both the federal and the program levels, but changes have recently taken place to improve accountability. The main mechanisms for accountability in the School Lunch Program are the single audit reports and the financial management reviews that FNS conducts, in addition to their monitoring through quarterly and annual reports. 
Because the American Samoa government did not complete single audits for fiscal years 1998-2003 within the time frame specified in the Single Audit Act, overall accountability for the School Lunch Program was limited. Further, the single audits for fiscal years 1998-2000, which were completed in August 2003, identified questioned costs resulting from missing documentation and unaccounted-for expenses, for which the audit findings cited lack of internal controls and lack of adherence to the accounting documentation procedures required by the Office of Management and Budget. According to the single audits, the questioned costs during fiscal years 1998-2000 totaled $168,252. As of July 2004, FNS reported that it had received the single audit report for American Samoa for fiscal year 1999 but not for fiscal years 1998 and 2000. In addition to being aware of the internal control problems cited in the single audits, federal officials were alerted to procurement fraud and theft that occurred in the program throughout fiscal year 2003. The American Samoa School Lunch Program Director and the Chief Procurement Officer were charged with committing offenses against the United States between October 2001 and September 2003. These officials pleaded guilty to the federal charges in July 2004. The School Lunch Program Director pleaded guilty to charges of taking food and goods valued at $68,000 or more from the American Samoa School Lunch Program Warehouse and converting such goods for his and others’ personal use. Although the chief officials involved in theft and fraud have been replaced, the new Director told us that not all staff involved in the theft were terminated from program employment. He said that one person is still working in the warehouse because of his government status and the department’s inability to place him elsewhere. The Director said that he is trying to put more controls in all areas to prevent repetition of past problems.
FNS program officials said that there are still problems with procurement. For example, the American Samoa School Lunch Program staff recently asked for an orange juice contract, but the Governor and Attorney General rewrote the specifications of the contract to allow a contractor to provide a different and less expensive juice. This change was never communicated to the School Lunch Program staff. To improve oversight and monitoring, FNS officials are now requiring that all milk and juice contracts be sent to the Western Region office for review, with follow-up documentation and justification, to be approved by FNS in accordance with USDA’s regulation governing procurement (7 C.F.R. § 3016.36). FNS officials stated that they would not normally be involved with this level of oversight. FNS officials also reported that program funds were used to purchase vehicles for the Director of the Department of Education and the Director of the American Samoa School Lunch Program. FNS officials asked the American Samoa officials to return the vehicles to the warehouse and explained that no government-funded vehicles should be used during nonwork hours. FNS financial management officials recently issued their Financial Management Review of fiscal year 2000. This is the first financial management review of the American Samoa School Lunch Program that FNS has conducted since September 1997. FNS officials explained that they focused on fiscal year 2000 because they had not conducted a financial management review for a long time and they needed to select a year for which there would be complete transaction records. An official explained that they have experienced budget constraints and staff shortages and that they currently schedule on-site reviews every 5 years.
Their review findings included the following:

According to the Code of Federal Regulations, “effective control and accountability must be maintained for all grant and subgrant cash, real and personal property, and other assets. Grantees and subgrantees must adequately safeguard all such property and assure that it is used solely for authorized purposes.” However, FNS reviewers could not determine whether the American Samoa government Property Management Department consistently performed a physical inventory of the American Samoa School Lunch Program assets.

Four vehicles were not being used exclusively for program purposes. FNS officials explained that government-funded vehicles should not be used during nonwork hours and that the American Samoa officials probably were not aware of this. FNS has requested that American Samoa officials provide documentation ensuring that the vehicles are used solely for program purposes.

The financial management review cited internal control problems regarding inventory of food and fixed assets, misuse of food service equipment, and draws from the grant’s letter of credit that were not made on an as-needed basis.

In addition to reviewing reports by the FNS officials, we met with American Samoa School Lunch Program staff to better understand the program’s operations and controls. The Program Director provided documentation and responded to our questions regarding corrective measures to address the previous problems in the program. These actions included suspension and removal of staff involved in incidents of theft, identification of personnel resources to carry on continued operations, and tighter controls and monitoring of purchases.
The Director has also identified long-range corrective measures, such as the development and implementation of a modern computer system to improve food inventory; development of a network system to improve shipping, receiving, and issuing of inventory; and a more transparent distribution of resources to ensure that services and tasks are not duplicated among employees. While discussing program budgets with American Samoa School Lunch Program staff, we found that American Samoa had not established a food cost per child and had estimated food program costs based on an arbitrary annual increase from the previous year. Until July 2003, the budget report for the Plan of Operations was completed by staff in the main American Samoa Department of Education and not by the program staff. The Director also reported that the program staff did not receive the grant award letter and that, as a result, the Plan of Operations was not submitted on time, resulting in a delay of the grant obligation. Although FNS does not require a food cost per child for budgets in the Plan of Operations, we found it problematic that program year budget estimates were not based on analyses of student enrollment and number of meals served for the prior school year and were not compared with food costs, food used, and other inventory expenses and allocations. When we communicated our concern to the American Samoa School Lunch Program and Department of Education staff, they agreed that estimating food costs per child would be an important step in improving the budget process, particularly given the program’s purpose to provide meals to all school-age children.

The Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) follows the same regulations and requirements in American Samoa as in the 50 states.
The purpose of the WIC Program is to provide supplemental food and nutrition education at no cost to eligible low-income pregnant, breast-feeding, and postpartum women and to infants and children up to 5 years of age. According to federal regulations, the program is intended to serve as an adjunct to good health care during critical times of growth and development, in order to prevent the occurrence of health problems, including drug and other harmful substance abuse, and to improve the health status of these persons. The WIC Program in American Samoa was established in 1996. The grant to American Samoa is awarded by USDA’s FNS and is overseen from FNS’s Western Region. In American Samoa, the state agency is also the local WIC services provider. Eligibility determinations, nutrition assessments, and distribution of benefits are all provided in one building, administered by the American Samoa Department of Human and Social Services, with a satellite clinic operating on the sparsely populated Manu’a Islands. Funding for the WIC Program in American Samoa increased steadily in fiscal years 1999-2003 (see table 6). For fiscal year 2004, American Samoa received a grant award of $6,145,322, with $4,736,905 dedicated to food benefits and $1,408,417 for nutrition services and administration. The American Samoa WIC Program also receives a rebate every month from Mead Johnson for cans of infant formula purchased from WIC vendors. FNS officials explained that the fiscal year 2003 rebate was between $62,000 and $62,668 monthly; in fiscal year 2004, the average monthly rebate increased to $70,034. Rebates are deposited into the WIC food account and offset charges to the WIC food grant for food costs.
Pregnant, breast-feeding, and postpartum women; infants; and children up to 5 years of age become eligible if they (1) are individually determined by a competent professional authority to be in need of the special supplemental foods supplied by the program because of nutritional risk; (2) meet the WIC income criterion or receive, or have certain family members that receive, benefits under the Food Stamp, Medicaid, or Temporary Assistance for Needy Families Program; and (3) reside in the state in which the benefits are received. FNS program officials explained that nutrition risk is based on blood work, height, weight, health history, and dietary assessment and that participants must qualify on at least one of these factors. The current income requirement is 185 percent of the poverty level. FNS officials told us that because incomes in American Samoa are so low, nearly everyone in American Samoa is eligible for WIC benefits if they also meet the gender, age, and residency requirements. Additionally, FNS officials explained that, similar to WIC recipients in the 50 states, most American Samoans who meet the income requirement also meet the nutritional risk criteria. The WIC Program in American Samoa has 30 full-time staff, including five eligibility workers, an eligibility manager, a registered nurse, three licensed nurses, three community health assistants, and one clerk. As of March 2004, the program had 6,300 WIC recipients, and the WIC offices were serving about 350 clients per day, with services ranging from nutrition risk assessments to issuance of WIC “food instruments,” or checks. Eligible WIC recipients receive (1) a food package, which is a prescription for food specific to each client; (2) nutrition education; and (3) referrals for health care. American Samoan officials explained that all WIC applicants are given a health assessment when they first visit the clinic. 
Applicants are asked to present immunization cards for the children; if the immunizations are not current, children are referred to the Lyndon Baines Johnson Tropical Medical Center, where shots can be obtained. After applicants are certified to receive WIC benefits, the public health staff conduct follow-up assessments for infants every 6 months, from birth to 1 year. WIC recipients are offered at least two nutrition classes within a 6-month period. Classes are generally 10 to 15 minutes long and focus on issues such as breast-feeding tips and other nutrition topics that emphasize the use of the WIC foods. American Samoa WIC Program staff reported that the nutrition unit of the WIC Program holds classes regularly. In addition to nutrition classes, the WIC Program implemented a Reading Readiness Class in 2002 for children. The class is intended to support education delivered through the Early Childhood Education Program and is targeted to children aged 1 to 5 years. WIC recipients are issued WIC checks that they can use to obtain food at authorized vendor locations. Currently, there are about 80 authorized WIC vendors in American Samoa among the three islands. According to FNS officials, most of the goods on American Samoa are imported and, consequently, the WIC vendors have high food costs. As a result, the average cost of WIC food packages is higher in American Samoa than in the 50 states. WIC recipients give vendors WIC checks for specific foods, and the vendors fill in the dollar amount on the checks and submit them to their bank. FNS officials reported that the high costs for WIC food packages in American Samoa also result, in part, from vendor fraud (See Grant Accountability). To gauge the performance of the nutritional services that WIC offers, FNS has established multiple program output measures. 
Generally, these measures are used to assess the types and quantities of services the state agencies provide and the agencies’ compliance with grant expenditure and other program requirements. The state agencies develop guidelines intended (1) to ensure that local agencies effectively deliver WIC benefits to eligible participants and (2) to monitor local agencies’ compliance with these guidelines. In addition to using output measures to assess the performance of WIC state agencies, FNS has established the breast-feeding initiation rate as an outcome-based measure for the WIC Program’s breast-feeding promotion and support activities. However, FNS has no outcome measures for its nutrition education or health referral services. To monitor the delivery of WIC services in American Samoa, FNS program officials conduct an on-site management evaluation known as a State Technical Assistance Review, usually on a 3-year cycle as funds allow. According to FNS officials, these reviews were conducted in 2000 and 2003. FNS financial management officials conduct on-site financial management reviews, and FNS officials told us that they follow a schedule similar to that of the program staff’s on-site reviews. However, FNS officials later reported that only one financial management review of American Samoa WIC had been conducted, in June 2004. FNS Regional financial management reviews are now performed on a 5-year cycle. To ensure the accountability of WIC funds in American Samoa, FNS relies on state technical assistance reviews, financial management reviews, and A-133 audits (single audit reports). FNS requires American Samoa grantees to submit monthly financial and participation reports (FNS-798), which provide information on projected and actual food expenditures, infant formula rebates, cumulative nutrition services and administration expenditures and obligations, and revenues from food vendor and participant collections and from program income.
If the WIC Program receives separate infrastructure grant funds, American Samoa reports these expenditures annually to FNS on the SF-269A report. WIC services and nutrition education were being delivered in American Samoa, but data to evaluate the performance of the WIC Program, beyond general program delivery, were limited. Furthermore, incidents of fraud and theft have jeopardized the integrity and, possibly, the quality of services to recipients. Under the FNS criteria for the state technical assistance review, program reviewers assess 11 functional areas of the WIC Program; however, FNS officials told us that it is difficult to cover all 11 areas during on-site reviews because they spend only about 4.5 days on island. Consequently, they identify and focus on the functional areas they see as critical. During an FNS program review in June 2000, FNS reviewers found that program services were hampered by inefficient clinic operations and recipient certifications. FNS officials reported a number of errors in the determination of nutritional risk and the capture of related participant data in the automated system. For example, one file recorded a child’s height as 32 inches and, 6 months later, as 31 inches. FNS recommended that the staff be unified under a single supervisor to improve communication and clinic operations. FNS also recommended that staff conducting eligibility assessments be retrained in the certification requirements. In the 2003 FNS state technical assistance review, officials reported that, although not all clinic operations recommendations from the 2000 report had been implemented, the designation of a single supervisor for the certification process had improved communication and the certification procedures had dramatically improved, including the documentation, assessment, and processing of WIC recipients. 
However, the review cited serious concerns and program violations, including food vendor overcharging and fraud, and the program is now being monitored by FNS until American Samoa officials respond to and implement the corrective actions necessary to avoid fiscal sanction. The American Samoa WIC officials reported technology barriers to program delivery. Until June 2004, the WIC nutrition education official lacked an Internet connection that would allow her access to important nutrition information available on the USDA Web site, despite a 2003 request by the WIC staff made in response to a recommendation in the September 2003 FNS on-site review. The WIC Program Director said that the computer programs were out-of-date and needed to be redesigned. The Director also reported that the information specialist needed technical assistance and that the program needed a computer system that connects all WIC units, including finance, nutrition education, and public health. FNS officials cited the distance to American Samoa and limited travel budgets as barriers to effective oversight. Because the American Samoa government did not complete single audits for fiscal years 1998-2003 within the time frame specified in the Single Audit Act, overall accountability for the WIC Program was limited. All USDA grantees are required to comply with the Single Audit Act Amendments of 1996 and OMB Circular A-133. In the single audit reports for fiscal years 1998-2001, auditors found questionable costs in the WIC Program for all 4 years, totaling $46,799. The reports also identified various internal control weaknesses, including missing files and support documentation for purchases, payments, contracts, and payroll, as well as missing financial records. They further stated that the auditors could not test for eligibility of participants in all 4 years because sufficient data systems and documentation were not available. 
FNS officials told us that as of February 2004, they had not received copies of the 1998-2001 single audit reports that list the questioned costs. They also said that they had not been made aware of any findings that required them to follow up on corrective actions for at least 5 years. In July 2004, Western Region FNS officials reported that they had received only the fiscal year 1999 single audit report, on May 5, 2004. In addition, FNS identified various accountability problems in the American Samoa WIC Program, including incidents of vendor fraud and abuse and misuse of grant funds. The single audit reports for fiscal years 1998-2001 also identified a lack of internal controls, including missing documentation for expenditures and case files to test for recipient eligibility. During the September 2003 FNS review of the American Samoa WIC Program, FNS officials were alerted to vendor fraud and abuse occurring in the program. The review found widespread evidence of WIC checks being exchanged for cash, cigarettes, other nonfood items, and unauthorized foods at WIC-authorized grocery stores and redeemed by the stores at local banks. In addition, the reviewer found frequent instances of vendors overcharging for WIC foods. The 2003 review required corrective action to disqualify eight vendors. FNS reported that American Samoa’s food package cost was the highest among 88 WIC state agencies and almost double the national average for food package costs (American Samoa’s average food package cost per person was $62.15, compared with the national average of $35.22 and Guam’s average of $52.05). The June 2000 FNS program review stated that the American Samoa vendor manager had done a “very good job” in establishing a strong WIC presence at all 36 authorized stores through frequent visits and policy clarifications. 
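The cost comparison above can be verified with simple arithmetic. The figures are those cited in the text; the script below is only an illustrative check of the "almost double" characterization.

```python
# Illustrative check of the food package cost figures cited above.
american_samoa = 62.15  # average food package cost per person (dollars)
national_avg = 35.22    # national average
guam = 52.05            # Guam average

ratio = american_samoa / national_avg
print(f"American Samoa / national average = {ratio:.2f}")  # about 1.76, i.e., almost double
```

At roughly 1.76 times the national average, the reported figure is consistent with the statement that American Samoa's food package cost was almost double the national average.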
However, by September 2003, when FNS conducted another on-site review, the FNS reviewer found that the number of authorized vendors had increased from 36 to 83 and that the “authorization process appeared to be little more than an annual ‘rubber stamp,’ with no evidence of applications being denied or assessed for competitive pricing against other stores as required by WIC regulations.” FNS responded by recommending that all new vendor applications be frozen until further notice, preferably for 2 fiscal years. American Samoa officials told us that no new vendor applications had been approved since 2003. Although no new vendors have been authorized, FNS reported that the previous number, 36, was more than adequate for an island of slightly more than 100 square miles, with an excellent transportation system connecting all villages. FNS reported that the Guam WIC Program has only 16 stores for about the same number of WIC participants and on an island twice the size of Tutuila, the main island of American Samoa. WIC regulations require the state agency to “authorize an appropriate number and distribution of vendors in order to ensure adequate participant access to supplemental foods and to ensure effective State agency management, oversight, and review of its authorized vendors.” FNS officials reported that they do not see the justification for having more than 40 well-distributed WIC-authorized vendors in American Samoa. FNS officials said that American Samoa WIC staff are responsible for authorizing vendors and training them on WIC check transaction and redemption procedures. FNS officials also reported that the state agencies administering WIC are required to perform compliance investigations and/or inventory audits for the WIC Program. The American Samoa Department of Human and Social Services established a Grants Management and Evaluation Division to conduct programmatic reviews of grant-funded programs and monitor programs’ compliance with regulations. 
The division found various noncompliance issues, which it reported along with recommendations for corrective action to the department director and WIC staff. However, according to division staff, program officials did not report back to them whether actions were taken based on their findings and recommendations. While visiting several authorized WIC stores with American Samoa WIC officials, we found two violations, based on the guidelines in the American Samoa WIC Vendor Handbook. In one instance, WIC-authorized food had no price displayed, and in another instance, a WIC-authorized food item had expired. Owing to the seriousness of the problems in the WIC Program, FNS officials have involved the Governor of American Samoa. The Governor responded to FNS with a corrective action plan in January 2004; however, the State agency had delayed implementation of critical actions in the plan, including mandatory disqualification of the eight stores found to have committed the most serious WIC violations, cited in the September 2003 FNS review. The FNS Regional Administrator conveyed his concern to the American Samoa Governor during their July 2004 meeting. During his visit to American Samoa, the Regional Administrator also found that the Governor’s concerns about participant access and cheaper prices at the affected vendors were not warranted. The Governor reported that actions had been taken against 12 other vendors who were found to have overcharged for food packages and that 9 of these vendors had reimbursed the program as of June 3, 2004. We requested documentation from American Samoa’s Treasury department and found that 8 out of 12 vendors had paid the program for the overcharges in April 2004. FNS has yet to determine whether the American Samoa government’s actions met WIC regulatory requirements and is following up with the state agency regarding the individual cases. 
As of August 2004, FNS officials were deciding what actions to take against the American Samoa WIC Program. In October 2004, the Governor wrote to FNS stating that the eight disqualifications required in the September 2003 FNS review had been carried out; FNS is requesting additional documentation to confirm that this and the other corrective actions have taken place. FNS officials have threatened fiscal sanctions if the program does not come into compliance. In addition to failing to take corrective action on the cases of vendor fraud and abuse, the WIC Program staff did not meet deadlines for submitting monthly status reports to FNS. Grant data that we requested from FNS officials revealed that in fiscal years 1998-2003, FNS staff had to communicate with American Samoa officials because reports were submitted late or information was missing. Furthermore, in August 2004, charges were filed against the Deputy Director of the Department of Human and Social Services, the grantee of the WIC and Food Stamp Programs in American Samoa. The Deputy Director was charged with defrauding the government by conspiring to rig contracts totaling more than $120,000 in exchange for cash kickbacks. With regard to internal controls, FNS officials said that the American Samoa WIC staff were encouraged to adopt an automated system for financial management and that FNS provided some technical assistance but that WIC staff turnover had hampered the system’s implementation. In addition, in a May 2004 review, FNS Financial Management staff found that the American Samoa Department of Human and Social Services had overcharged the WIC Program $128,400 for WIC building renovations; FNS has since demanded repayment. The American Samoa Food Stamp Program is a nutrition assistance program that provides food coupons to American Samoa’s eligible low-income elderly residents and blind or disabled residents. The program is administered by the American Samoa Department of Human and Social Services. 
The Food Stamp Program in American Samoa was authorized by the act of December 24, 1980, which allowed USDA to extend programs administered by the department to American Samoa and other territories. The program became effective in April 1994, and the first month’s benefits of the Food Stamp Program were issued in July 1994. The current program is funded through a capped block grant and operates under an MOU between the American Samoa government and FNS. The MOU is effective for a 1-year period and is negotiated annually prior to the beginning of each fiscal year. Unlike the Food Stamp Program in the 50 states, the American Samoa Food Stamp Program is not an entitlement program; further, the MOU under which it operates allows American Samoa to set its own eligibility standards as long as they are within the capped block grant. FNS officials explained that American Samoa decided to target the program to the elderly and disabled in part because they do not receive Supplemental Security Income and because offering benefits on the basis of income would have caused the program to be too broad given the limited resources of the capped grant. The American Samoa program requirements are outlined in the MOU and not in the laws and regulations that apply to the Food Stamp Program in the 50 states. Prior to the negotiation of the MOU, no Food Stamp Program existed in American Samoa. Over the years, certain aspects of the MOU have changed, such as the American Samoa Food Stamp Program’s definition of “disabled”; however, the basic concept and design of the program have remained the same. The block grant for the American Samoa Food Stamp Program covers the administration costs of operating the program (e.g., staff salaries, facility charges) and delivering nutrition assistance benefits to the recipients. The initial grant amount was $2.7 million; however, in fiscal year 1996, the annual grant was capped by statute at $5.3 million with adjustments for annual inflation. 
When the Food Stamp Program was reauthorized in fiscal year 2002, the Farm Bill increased the cap for fiscal year 2004 to $5.6 million and tied American Samoa’s funding to Puerto Rico’s grant amount. American Samoa may not carry over more than 2 percent of its funding from one fiscal year to the next. Table 7 shows the grant awards for fiscal years 1999-2003. The American Samoa Food Stamp Program provides nutrition assistance to low-income elderly, blind, or disabled American Samoa residents. American Samoa is allowed to set its own eligibility standards to stay within the capped block grant. Food Stamp recipients in American Samoa must meet the following financial and nonfinancial eligibility criteria, as specified in the MOU:

Nonfinancial eligibility criteria (residency, citizenship, and age or mental or physical disability). To be eligible, a recipient must be either a U.S. national; a citizen; an alien lawfully admitted to the United States as an immigrant as defined in section 101(a)(15) of the Immigration and Nationality Act; an alien admitted to the Territory of American Samoa as a permanent resident pursuant to sections 41.0202(c)(ii), 41.0402, and 41.0403 of the American Samoa Code; an alien legally married to a U.S. citizen or U.S. national; or an alien who has legally resided in American Samoa for at least 5 consecutive years.

Resource eligibility standards. A recipient aged 60 or older, disabled, or blind is subject to a maximum resource standard of $3,000.

Gross income eligibility standards. Income is based on the applicant’s (not the household’s) monthly gross income. The current standard is a gross monthly income of $712 or less.

The fiscal year 2004 MOU defines maximum monthly benefits as $132 per person. By comparison, the Food Stamp Program in the 50 states provides maximum monthly benefits of $141 per person. 
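The MOU criteria above amount to a simple screening rule. The sketch below is hypothetical: the $3,000 resource limit and $712 gross income standard come from the text, but the age-60 cutoff is inferred from the resource standard, and the function and its inputs are illustrative rather than part of any actual FNS or American Samoa system.

```python
# Hypothetical sketch of the MOU eligibility screen described above.
# Dollar limits are from the text; the function and its inputs are illustrative.
RESOURCE_LIMIT = 3000  # maximum resource standard (dollars)
INCOME_LIMIT = 712     # maximum gross monthly income (applicant, not household)

def appears_eligible(meets_residency, age, disabled_or_blind,
                     resources, gross_monthly_income):
    """Apply the financial and nonfinancial criteria summarized in the MOU."""
    if not meets_residency:                 # residency/citizenship criteria
        return False
    if age < 60 and not disabled_or_blind:  # program targets elderly, blind, or disabled
        return False
    if resources > RESOURCE_LIMIT:          # resource eligibility standard
        return False
    return gross_monthly_income <= INCOME_LIMIT  # gross income standard

print(appears_eligible(True, 65, False, 2500, 700))  # True
print(appears_eligible(True, 45, False, 1000, 500))  # False: neither elderly nor disabled
```

In practice, as the text notes, certification also involves an orientation, an application, and documentation that the single audits found was sometimes missing.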
The American Samoa Food Stamp Program Director told us that potential recipients must attend a weekly orientation explaining the program benefits and qualification requirements. After attending an orientation, a potential recipient must apply for certification. The Director of the American Samoa Food Stamp Program described the program’s efforts to encourage healthier eating through nutrition classes that recipients can take while waiting to receive their monthly benefits. Although classes are not mandatory, the Food Stamp Program staff budget approximately $6,000 for nutrition education classes per year. At the time of our visit, the American Samoa Community College was holding nutrition classes at the Food Stamp clinic to show recipients how to prepare healthier meals at home. The Director explained that classes are conducted once per month to coincide with the issuance of the food stamps. Food Stamp recipients must come to the Food Stamp offices monthly to receive their food coupons. In fiscal year 2003, the program served an average of 2,830 persons and issued $292,061 in benefits per month. The average monthly benefit per person during this period was about $103. The program is one of the few remaining U.S. Food Stamp Programs that still use paper food coupons, since the program in the 50 states has implemented an electronic benefits transfer system to provide food assistance to eligible recipients. The American Samoa Food Stamp Program does not have federally prescribed program goals or performance standards. FNS officials told us that they evaluate the American Samoa program according to (1) the number of people served, (2) whether recipients received the right number of coupons, (3) whether benefits were awarded correctly, and (4) whether the coupons were used appropriately. 
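The per-person benefit figure above follows directly from the monthly totals; the quick check below uses only the figures from the text.

```python
# Check of the fiscal year 2003 average monthly benefit per person cited above.
benefits_per_month = 292_061  # dollars issued per month
persons_served = 2_830        # average persons served per month

avg_benefit = benefits_per_month / persons_served
print(f"Average monthly benefit per person: ${avg_benefit:.2f}")  # about $103
```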
According to FNS officials, the Department of Human and Social Services, the American Samoa grantee, is required to monitor and coordinate all program activities and ensure that the activities conform to the guidelines established in the MOU. The MOU outlines procedures for operating the program, such as determining eligibility and processing applications. Program monitoring includes reviews to evaluate program operations, eligibility certification, and retail compliance. To monitor the program for accountability, the Department of Human and Social Services is required to keep the records necessary to indicate whether the program is being conducted in compliance with the MOU. To monitor this, FNS requires Human and Social Services to submit monthly reports on participation and issuance data and financial information. FNS has also established procedures for recipients and retailers, including penalties and disqualifications for fraud and for failure to adhere to the procedures. To ensure that the program is in compliance and the services are being delivered appropriately, FNS conducts on-site reviews. These reviews are scheduled annually but may occur less often. Program reviews were conducted in 1995, 2001, and 2004. The last financial management review was conducted in 2003. FNS officials reported that they generally find that the American Samoa Food Stamp Program delivers assistance to the appropriate recipients. FNS officials reported that the program operations have improved since the implementation of an automated system in August 2001. However, during an on-site review of the Food Stamp Program in April 2004, the FNS Program Manager found some instances in which file certification procedures for granting eligibility were not adequately documented. The FNS reviewer reported that FNS made recommendations to the American Samoa Food Stamp Program staff to improve documentation and adhere to the guidelines set forth in the MOU. 
The Food Stamp Program Director in American Samoa believes that the grant is adequate to serve the number of eligible recipients and that recipients are happy with the program. During our interview with the American Samoa Department of Human and Social Services Director and Deputy Director, officials reported that no long-term assessment of the program’s effectiveness had been conducted but that they are seeing a change of diet as a result of the nutritional training that the Food Stamp Program provides. However, the officials could not document dietary changes. Moreover, they reported that when they recently surveyed recipients regarding where they would like to use their coupons, recipients responded that they would like a chain fast-food restaurant to be added as one of the program’s authorized vendors. FNS officials responded that under the current MOU, the American Samoa program staff cannot authorize the fast-food chain to be a program vendor. Several local conditions affected the delivery of the American Samoa Food Stamp Program services. Our interviews with federal and program officials indicated that the program had an inadequate number of professional staff to maintain and operate the program’s technology infrastructure, including databases to manage program services and account for the use of funds. We also found that other technology barriers affected the delivery of program services. In addition, because the American Samoa Food Stamp Program staff consider the local postal system unreliable, they require applicants and recipients to come to the program offices for all correspondence regarding their benefits. According to the FNS review in April 2004, the automated system that processes eligibility and administers benefits automatically closes cases that are not certified within 30 days of the initial application but does not generate a letter to inform the applicant. 
The FNS review also found that although the System Administrator in American Samoa is very knowledgeable about the automated system, the staff have limited programming knowledge, which is essential for designing and programming detailed reports or enhancing the system to meet all of the Food Stamp Program’s automation needs. FNS officials reported that American Samoa program officials are in the process of recruiting a computer programmer. Because the American Samoa government did not complete single audits for fiscal years 1998-2003 within the time frame specified in the Single Audit Act, overall accountability for the Food Stamp Program was limited. The program is subject to OMB Circulars A-87, A-102, and A-133, the last of which contains standards required by the Single Audit Act. The recently released American Samoa single audit reports for fiscal years 1998-2000 showed questionable costs of $26,033 for the American Samoa Food Stamp Program. For example, in fiscal years 1998, 1999, and 2000, the questioned costs resulted from missing reports and missing support documentation that auditors cited as a lack of adherence to accountability documentation procedures. Also, in all 3 fiscal years, because of incomplete or missing participant files, auditors were unable to verify that participants were eligible to receive benefits or that they did not receive benefits prior to their approval through the certification process. Although the March 2003 financial management review by FNS officials did not find significant problems with internal controls in the American Samoa Food Stamp Program, the findings in the single audit reports point to accountability weaknesses in financial management. The financial management review noted that the program was not in compliance with 7 C.F.R. § 3052, which requires submission of an agency’s single audits; however, FNS reported only that it would follow up on the completion status of the missing audits. 
Additionally, FNS officials reported that as of July 2004, they had received only one of the three single audit reports for fiscal years 1998-2000, despite the fact that all three were completed in August 2003. Federal officials explained that since 1997, when the Federal Audit Clearinghouse was established, a management decision has documented the agreement between FNS and American Samoa on the proposed corrective action for single audit findings and the date by which the actions will be completed. The American Samoa Food Stamp MOU states that any firm or local food producer that has been disqualified by the American Samoa WIC Program will automatically be disqualified from the American Samoa Food Stamp Program for the same period of time. FNS officials said it is difficult to uncover fraud in retail purchases through the type of management evaluation reviews conducted by federal Food Stamp Program staff. Federal program reviewers examine American Samoa Food Stamp retailer authorization and redemption processes, adherence to retailer requirements, and retailer training and monitoring. Although these reviews would not reveal food stamp retailer fraud, FNS is monitoring food stamp retailers closely because the WIC vendors are also food stamp vendors and there have been problems with WIC transactions. FNS officials told us in February 2004 that the Food Stamp Program in American Samoa has a compliance program but that compliance reports are not required by FNS. They are considering amending the fiscal year 2005 MOU to include a requirement for compliance reports to be submitted to FNS in addition to the already required reports. In its April 2004 review, FNS found that Food Stamp Program staff were diligent in ensuring timely authorizations for vendors participating in the program. 
FNS also found that program staff in American Samoa conducted periodic site visits to vendors and ensured that vendors that redeemed large numbers of food stamps were monitored and that reported violators were investigated. FNS Food Stamp officials discussed the problems in the American Samoa WIC Program with Food Stamp Program staff and found that vendor case files contained copies of disqualification letters; however, these disqualification letters had not been enforced by the WIC Program officials as of August 2004. Staff acknowledged that they were aware of the MOU requirement to disqualify the vendors from the Food Stamp Program once decisions have been made in the WIC Program. We visited three stores that were authorized vendors for the Food Stamp Program, WIC, or both. In our Food Stamp review, we found that one vendor had not posted the Official Food List (see fig. 10 for an example of the posted list). We did not conduct a full-scale review of all compliance requirements, but when we asked store staff about the Food Stamp procedures, one staff member had difficulty understanding Samoan and English. Other staff members could name only a few of the procedures on the checklist in the Food Stamp Program retailer guide, which program staff had provided us before our visit. The Food Stamp Programs in the 50 states have implemented an electronic benefit transfer (EBT) system, a point-of-sale system that helps ensure program compliance. FNS had discussions with American Samoa officials about the territory’s implementing the system. However, FNS cautioned that many factors should be considered in determining the feasibility of implementing an EBT system in American Samoa, including the costs of the system relative to American Samoa’s resources under the capped grant award; the state of American Samoa’s automation technology and resources; the financial and technology limitations of vendors; and the potential impact of such a system on elderly and disabled recipients. 
State and local education agencies are eligible for federal grants and funds to implement numerous federal education programs. In fiscal years 1999-2003, under a consolidated grant application, American Samoa applied for, and received, Innovative Programs grants to support its education programs. The Innovative Programs grant is designed to assist state and local education agencies in implementing education reform programs and improving student achievement. Innovative Programs grant funding provided by a state education agency to local education agencies can be used to carry out local innovative assistance programs that may include any of at least 27 “activities,” which are identified in the No Child Left Behind Act (NCLBA). The American Samoa Department of Education reported that it is both a state education agency and a local education agency because it acts as a state education agency when performing its federal grant administration functions but as a local education agency when implementing and assessing local assistance programs. Table 8 identifies the Innovative Programs grant award amounts to American Samoa for fiscal years 1999-2003. In fiscal year 2003, the Innovative Programs grant accounted for about 40 percent of the American Samoa Department of Education’s total budget (about $40 million). Annual awards to American Samoa and the other insular areas are based on a statutory formula for set-asides that allocates up to 1 percent of the total federal education funds available each year to the 50 states for distribution to the insular areas, according to their respective need. The American Samoa Department of Education can draw down awarded grant funds throughout the year and spend any remaining grant funds during the following fiscal year. The increase in the Innovative Programs grant award to American Samoa for fiscal years 2002 and 2003 resulted from the enactment of the NCLBA. 
The act authorized a $65 million increase in total federal appropriations for Innovative Programs grants and parental choice provisions from fiscal year 2001 to 2002 and a $25 million increase from fiscal year 2002 to 2003. The NCLBA also permitted consolidated grant applicants such as American Samoa to transfer up to 50 percent of certain nonadministrative federal funds to the Innovative Programs grant. For fiscal years 1999-2001, the American Samoa Department of Education reported that it implemented programs for training instructional staff, acquiring student materials, implementing technology, meeting the needs of students with limited English proficiency, and enhancing the learning ability of students who are low achievers. Similar initiatives were proposed in American Samoa’s fiscal year 2002 and 2003 consolidated grant applications, in addition to others (see table 9 for a full list). Under the Innovative Programs grant rules, American Samoa must spend at least 85 percent of the funds on local innovative assistance programs, whereas up to 15 percent of the funds may be spent on state education agency programs and the administration of the Innovative Programs grant. Table 9 shows how the American Samoa Department of Education allocated Innovative Programs grant funds among its various programs in fiscal year 2003. In fiscal year 2003, the largest dollar share of local education agency funds supported local activities such as teacher quality improvement programs, class size reduction efforts, and the purchase of supplemental instructional materials. According to the American Samoa Department of Education, every classroom for kindergarten through eighth grade currently has an average of about 27 students per teacher. However, the department would like to reduce the average class size to 15 students per teacher for kindergarten through third grade and to 20 students per teacher for grades four through eight, by hiring more fully certified teachers. 
Teachers may obtain teaching degrees locally from the American Samoa Community College or from a University of Hawaii cohort program. The American Samoa Department of Education reported that the community college enrolled 600 to 900 students per year from 1999 through 2002 in its teacher certification program and that 142 teachers graduated from the Hawaii cohort program in 1999-2002. However, according to department officials, teachers are difficult to retain owing to the island’s inability to pay salaries that are commensurate with the cost of living. For all fiscal years included in our review, American Samoa used the Innovative Programs grant to budget for local education agency innovative assistance programs and costs associated with those programs, such as payroll, supplies, contractual services, travel, equipment, and indirect costs. According to the American Samoa Department of Education, various programs receive local education agency program funds on a per child basis, with equal allocations for each 5- to 12-year-old child. American Samoa’s fiscal year 2004 consolidated grant application reported that about 17,000 children aged 5 to 17 years were attending 23 elementary, 6 secondary, and 13 private schools. The consolidated grant application form developed by the U.S. Department of Education (ED) identifies five performance goals, with corresponding indicators, that apply to all proposed education programs. The form requires applicants to provide certain minimum information, including performance “targets” to confirm the state or local education agency’s program compliance with these five goals. The American Samoa Department of Education is not specifically required to comply with NCLBA, but it reported in its fiscal year 2003 grant application that “it has made the commitment to utilize” some of the performance goals as a framework for improving education in the territory. 
In addition, local assistance programs funded under the Innovative Programs grant must be (1) tied to promoting challenging academic achievement standards, (2) used to improve academic achievement, and (3) part of an overall education reform strategy. According to an American Samoa Department of Education official, implementing certain aspects of the NCLBA could begin to tie federal dollars to progress and measurable results for students in American Samoa. In 2002, ED began requiring the American Samoa Department of Education (and state education agencies in the 50 states) to submit reports that describe how programs implemented under the Innovative Programs grant have affected student achievement and education quality. State and local education agencies have the authority to develop the content and format of their own summaries and evaluations, but each agency must meet certain reporting requirements. According to ED guidance, local education agencies must submit annual “evaluations” that include, at a minimum, information and data on the funds used, the types of services furnished, and the students served by the programs. State education agencies must submit an annual statewide “summary” based on the evaluation information received from the local education agencies. ED reported that it relies primarily on single audit reports, in addition to its own financial monitoring, to assess the fiscal accountability of American Samoa’s Innovative Programs grant. ED’s annual performance report requires grantees to include information about how grant funds were spent. Since September 2003, ED has designated American Samoa as a high-risk grantee and has begun requiring the American Samoa Department of Education to submit quarterly financial reports. 
We found that local program performance was difficult to evaluate because the types of programs implemented varied from year to year, funding levels fluctuated even for programs that continued, and the types of data provided in annual performance reports were inconsistent. The Western Association of Schools and Colleges (an accrediting commission for schools in the United States) reported that one of American Samoa’s six high schools continued to be denied accreditation because of long-standing issues, including poor teacher qualifications, failure to make certain improvements in student education programs, and failure to procure education materials and equipment in a timely manner. Despite our inability to determine local program performance, ED's Office of Elementary and Secondary Education indicated that the American Samoa Department of Education generally submitted the annual reports on a timely basis for fiscal years 1999-2002 and that the reports provided some information about American Samoa's education programs. In addition, ED’s program managers reported that they communicate frequently with American Samoa Department of Education officials throughout the application and reporting process but that on-site reviews of the program are infrequent: the last ED program review in American Samoa was conducted in 1991. Officials from ED’s Office of Inspector General (OIG) visited American Samoa in August 2002 to determine whether allegations of fraud in its programs warranted additional investigation and audit. The OIG’s report did not include specific findings on the Innovative Programs grants. ED officials told us that they visited American Samoa in September 2004. According to the American Samoa Department of Education, the territory’s remoteness presents challenges in all aspects of implementing the Innovative Programs grant. 
For example, transporting personnel, materials, and supplies to and from the territory is costly and logistically difficult. Attracting and retaining qualified teachers is also a problem, given that the average teacher salary in American Samoa is about $13,000 per year while the cost of living is comparable to that in Hawaii. Although the American Samoa Community College offers an associate degree in education, the territory has no four-year institutions of higher education. Most teachers hired in American Samoa have an associate degree from the community college. Another factor affecting education in American Samoa is limited English proficiency. Most students are raised speaking Samoan, which has fewer letters in its alphabet and many fewer words than English, and are not formally introduced to English until kindergarten. According to the American Samoa Department of Education’s annual grant application for 2003, at least 70 percent of all students in kindergarten through twelfth grade have limited English proficiency. Because the American Samoa government did not complete single audits for fiscal years 1998-2003 within the time frame specified in the Single Audit Act, overall accountability for the Innovative Programs grant was limited and ED was unable to ensure fiscal accountability for the grant funds. ED designated the American Samoa government a high-risk grantee in September 2003, primarily because of its failure to provide timely single audit reports. Although the American Samoa Department of Education submitted annual Innovative Programs grant applications and reports in fiscal years 1999-2003, we determined that the annual reports did not contain sufficient detail on program expenditures to demonstrate accountability for the use of all the grant funds. ED officials report that the agency is now working closely with the American Samoa Department of Education to submit quarterly financial reports that describe in more detail how funds are being used. 
The Individuals with Disabilities Education Act (IDEA) is the primary federal law that addresses the unique needs of children with disabilities, including, among others, children with specific learning disabilities, speech and language impairments, mental retardation, and serious emotional disturbance. Under IDEA, Part B, ED provides grants to states and outlying areas, including American Samoa, to provide eligible children with disabilities aged 3 through 21 years with a free appropriate public education in the least restrictive environment to the maximum extent appropriate. American Samoa relies almost entirely on its IDEA grants to fund its Special Education Program. Special Education grants to American Samoa and the other outlying areas are allotted proportionately among them on the basis of their respective need, not to exceed 1 percent of the aggregate amounts available to the states in a fiscal year, as determined by the Secretary of Education. IDEA funds have historically been appropriated every July 1 and remain available for obligation for 15 months. Under the law, if a state education agency does not obligate all of its grant funds by the end of the fiscal year for which the funds were appropriated, it may obligate the remaining funds during a carryover period of one additional fiscal year. The per student federal amount covers special education services such as regular and special education classes, resource specialists, and other related services. Table 10 shows Special Education Program funds awarded to American Samoa for fiscal years 1999-2003. Amounts do not reflect carryovers from prior years. As of January 2004, the American Samoa Department of Education reported that its Special Education Program was using the IDEA grant to provide services to slightly more than 1,100 eligible 3- to 21-year-old students with disabilities and that it was providing the requisite services to eligible children in the territory. 
Under IDEA, federal funds may be used for salaries of teachers and other personnel, education materials, and related services such as special transportation or occupational therapy. According to the American Samoa Department of Education, IDEA funds support all but 1 of the Special Education Program’s approximately 200 positions. According to ED officials, IDEA does not prohibit the provision of services to non-U.S. nationals. The Special Education Program in American Samoa is required to demonstrate that it meets all of the conditions that apply to the 50 states under IDEA. The main objective of IDEA is to identify each child with a disability, determine his or her eligibility for special education services, and provide each eligible student an individualized education program designed to meet his or her needs. To monitor performance of special education programs nationwide, ED required two biennial reports for the program covering school years 1998-1999 and 2000-2001. For 2002 and 2003, American Samoa was required to submit an annual report that included (1) a comparison of actual accomplishments to the objectives established for the reporting period, (2) reasons for any failure to meet the established objectives, and (3) additional pertinent information including a description of planned future educational activities. In response to ED’s request, American Samoa submitted a self-assessment in May 2003 based on ED's special education monitoring process (Continuous Improvement Monitoring Process), which was being implemented in 2003. Federal officials said that they rely primarily on the single audit reports to determine accountability for IDEA program funds. ED also reported that although the agency is not currently required to perform on-site reviews of the Special Education Program in American Samoa or any other insular area or state, members of ED’s Office of Special Education Programs conducted an on-site review in September 2004. 
The American Samoa Special Education Division Office submitted the required biennial and annual performance reports between 1999 and 2003. The Division Office also submitted the required self-assessment report for 2003. In these reports, American Samoa reported to ED that data limitations made it difficult to measure the progress of its Special Education Program and that its limited review indicated both progress and “slippage” in several core IDEA areas, such as general supervision of the program, provision of transition services, parent involvement, and provision of a free appropriate public education in the least restrictive environment. In 1999, the American Samoa Department of Education, as part of its general supervisory authority, contracted with a consultant from the Western Regional Resource Center (a grantee of ED’s Office of Special Education Programs) to conduct a compliance review of IDEA at eight elementary and secondary schools. The consultant reported that all of the schools had various problems in preparing, updating, and retaining students’ individualized education programs. Seven of the eight schools did not provide a free appropriate public education to all eligible disabled students in accordance with requirements under IDEA. Four schools failed to place their special education students in the least restrictive environment; four schools were out of compliance with procedural safeguards of the act; and four schools had no mechanisms in place for identifying children and referring them for an evaluation, conducting an evaluation for those referred to the program, and determining whether those evaluated were eligible for services. In May 2003, an American Samoa special education program steering committee submitted a self-assessment report of the Special Education Program to ED. 
The report indicated that certain aspects of the program needed improvement in areas such as general supervision, public awareness and child find, early childhood and secondary transition, and providing a free appropriate public education in the least restrictive environment. The steering committee also reported that some aspects of American Samoa’s Special Education Program complied with IDEA requirements. After reviewing American Samoa’s self-assessment, ED’s Office of Special Education Programs identified program areas that were noncompliant or in danger of failing to comply with IDEA. For example, American Samoa’s self-assessment indicated that its Special Education Program had a limited pool of trained personnel and no physical therapists, occupational therapists, or social psychologists, chiefly because of a reported freeze on new hires and new positions in the program. ED also identified inconsistencies in the program’s stated ability to meet the requirement for special education students to participate in territory-wide assessments. In addition, ED found that the program failed to comply with IDEA requirements for parent participation and interagency coordination in transition planning and provision of services. During our visit to American Samoa, we selected 17 individualized education program files from six elementary and secondary schools that provide special education services in American Samoa, and we reviewed them for the requisite content. All requested files were provided, and they generally included the requisite content. We did not evaluate the quality of the written content in each individualized education program, although some student files appeared more comprehensive than others. IDEA also requires each public education agency to identify all children with possible disabilities residing in its jurisdiction. 
For each child identified, the agency must provide a full and individual evaluation to determine whether the child has a disability and the nature of the child’s educational needs, so that an individualized education program can be developed. IDEA requires the public education agency to initiate a collaborative planning effort between parents and school officials to develop this education program and calls for implementing the program as soon as possible. However, parents, teachers, and education officials in American Samoa reported that the Special Education Division Office was often slow in responding to requests for services and other resources. For example, we met one student who was completely deaf in both ears but had been passed from kindergarten to third grade without being identified and referred to the Special Education Program for an assessment or evaluation. In third grade, the student was tested by an audiologist and confirmed to be deaf, and her principal requested the purchase of hearing aids to enhance the child’s ability to hear. According to the American Samoa Special Education Division Office, it did not submit a purchase order for hearing aids until April 2003, 4 months after the request was made; as of our visit in March 2004, the hearing aids had not arrived. Division Office officials explained that the off-island company from which the devices were ordered required advance payment but never notified the office of this requirement; as a result, payment was not sent to the vendor and the hearing aids were never ordered. 
One barrier to effective implementation of the Special Education Program in American Samoa is the limited number of licensed or certified professionals. At the time of our review, American Samoa’s Special Education Program had about 200 staff, including program administrators, teachers, social workers, bus drivers, and other personnel. However, American Samoa Department of Education officials noted that the program needs more certified professionals. For example, according to the American Samoa Special Education Division Office, the program has only one physical therapist (hired in October 2003) and needs speech pathologists, occupational therapists, audiologists, psychologists, and other professionals certified or trained in teaching special education. In addition, the program had no certified psychologist at the time of our review. American Samoa reported that because its education program is supported almost entirely by federal funds, its average dollar allocation per child is more limited than allocations in states that supplement their IDEA grants with state or local contributions. Because the American Samoa government did not complete single audits for fiscal years 1998-2003 within the time frame specified in the Single Audit Act, overall accountability for the American Samoa Special Education Program was limited. The single audit reports for fiscal years 1999 and 2000, which were completed in August 2003, stated that the program did not adequately maintain supporting documents for certain financial transactions and had questioned costs of more than $18,000 in 1999 and more than $170,000 in 2000. In addition, we found that the Special Education Program Director and staff had limited awareness of the program’s fiscal position for at least 2 years. Program funds are controlled almost entirely by the American Samoa Department of Education. 
The Department of Transportation’s (DOT) Airport Improvement Program provides federal grants for airport planning and infrastructure development involving safety, security, environmental mitigation, airfield infrastructure, airport capacity projects, landside access, and terminal buildings. The Federal Aviation Administration (FAA), which administers the program, has identified more than 3,000 airports that are significant to the national air transportation system and thus eligible to receive Airport Improvement Program grants. Total funding authorization for the Airport Improvement Program was $3.4 billion in fiscal year 2003. American Samoa has participated in the Airport Improvement Program since it began in 1982. Table 11 provides a summary of total FAA Airport Improvement grant awards to American Samoa during the period of our review. Distribution of Airport Improvement Program grants is based on a combination of formula grants and discretionary funds. The amounts of formula grants for primary airports, which include the main airport in American Samoa, are based on the number of passenger boardings or a minimum of $1,000,000 per year in grant funds. For nonhub primary airports like Pago Pago International, these funds are available in the year they are apportioned and remain available for 3 fiscal years. Larger airports have only 2 additional fiscal years to use these funds. Airports compete with other airports in their region for available discretionary funds. The American Samoa Department of Port Administration, which operates the airports, is the grant recipient. Before fiscal year 2004, the department was not required to provide any matching funds for the first $2 million of the grant award; above $2 million, the local contribution was 10 percent. For fiscal years 2004-2007, the department is not required to provide matching funds for the first $4 million; above $4 million, the required local contribution will be 5 percent. 
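The two matching rules described above amount to a simple threshold formula. The sketch below is illustrative only: the function name is ours, and we read "above $2 million, the local contribution was 10 percent" as applying the rate only to the portion of the award above the threshold, which the report does not state explicitly.

```python
def local_match(grant_award: float, threshold: float, rate: float) -> float:
    """Required local contribution: zero up to the threshold, then `rate`
    applied to the portion of the award above it (our reading of the rule)."""
    return max(0.0, grant_award - threshold) * rate

# Pre-fiscal-year-2004 rule: no match on the first $2 million, 10 percent above.
old_rule = local_match(3_000_000, 2_000_000, 0.10)  # $100,000 on a $3 million award

# Fiscal year 2004-2007 rule: no match on the first $4 million, 5 percent above.
new_rule = local_match(3_000_000, 4_000_000, 0.05)  # no match on the same award
```

Under this reading, the 2004 rule change eliminates the local match entirely for awards up to $4 million.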
The department fulfills its matching requirement with credit for in-kind contributions, such as land or staff time, because it has no funds to contribute to the projects. American Samoa has three airports, all of which receive Airport Improvement Program grants. The main airport, Pago Pago International, is classified by FAA as a commercial service–primary airport and has two runways, one of which can accommodate large commercial jets. Typically, eight commercial passenger flights depart Pago Pago International per week. The other two airports, Fitiuta and Ofu, are very small commercial service–nonprimary airports that cannot accommodate large commercial carriers. Since 1998, Airport Improvement Program grants have been used for constructing taxiways, extending runways, and rehabilitating existing runways, taxiways, and shoulders. Maintaining the quality of runways, taxiways, and shoulders is critical to airport safety; according to airport officials in American Samoa, jet engines can suck in debris such as loose asphalt as if they were “huge vacuum cleaners.” Projects also included the construction of an “aircraft, rescue and firefighting” training facility, the purchase of new fire and rescue vehicles (see fig. 11), new shelters for rescue vehicles, and the installation of perimeter fencing to improve airport security. Runway safety areas at the airport in Pago Pago were upgraded to meet FAA standards, providing additional margins of safety. Construction projects are completed through competitive contracts with engineering and construction firms. The same federal regulations apply to Airport Improvement Program grants in American Samoa as in the 50 states. Airports must have a 3- to 5-year capital improvement plan, which identifies an airport’s development priorities and forms the basis for the grants that the airport requests and that FAA awards. FAA works with the airports to develop this plan. 
FAA views project completion as the primary performance goal and monitors the performance of projects primarily through weekly construction progress reports. An FAA engineer also conducts an on-site inspection of every project, ideally at the project’s completion. However, according to an FAA official, such inspections are not always possible because of the cost of travel from the FAA Airport District Office in Honolulu to American Samoa. According to the FAA official overseeing Airport Improvement Program grants in American Samoa, contractors’ monthly claims for reimbursement represent a key means of assuring project accountability. Additionally, FAA must approve all contract change orders. Grantees must conform to a broad range of requirements governing the implementation of project grants, detailed in the FAA Airport Improvement Program Handbook. The handbook outlines project eligibility requirements, planning process guidelines, procurement and contract requirements, project accomplishment requirements, grant closeout procedures, and audit requirements. The Airport Improvement Program also relies on single audit reports to assess accountability for its funds to American Samoa. Procurements made under the Airport Improvement Program must comply with required federal contract provisions established by various laws and statutes. For example, the grantee must ensure that contractors comply with minimum wage requirements under the Davis-Bacon Act. The FAA official responsible for American Samoa stated that “Buy America” preferences apply to the purchase of steel and manufactured products but not to services, such as engineering, consulting, and construction, that comprise the bulk of grant expenditures. The official also stated that American construction firms do not bid on runway pavement projects in American Samoa, most likely because of costs associated with American Samoa’s remote location and the relatively small size of the projects involved. 
In addition, the FAA official stated that the program does not require contractors to hire workers from the local labor force. According to the FAA official responsible for American Samoa, in fiscal years 1999-2003 the airports successfully completed projects, paid for with Airport Improvement Program grants, to improve safety and capacity. The main runway at Pago Pago International is free of areas with the potential for foreign object debris, and repaving of the taxiway is nearly complete. The airport can now respond to a land accident with its new aircraft rescue and firefighting vehicles, although it has no maritime rescue capability. According to an FAA official, the use of separate DOI Operations and Maintenance Improvement grants to hire an experienced airport engineer in 2001 to manage the infrastructure projects contributed significantly to the effective use of the Airport Improvement Program grants in American Samoa. Before the engineer’s arrival, the airports had difficulty prioritizing and implementing projects funded with FAA’s Airport Improvement grants; because of the engineer’s presence, projects were completed and contractors were paid on time, according to the FAA official. These DOI funds, which required a 50 percent local match, were sufficient to cover the engineer’s salary for 3 years. The engineer’s contract with the American Samoa Department of Port Administration expired at the end of June 2004. American Samoa airport officials reported that because the airports operate at a loss annually, they have been unable to complement Airport Improvement Program grant funds, which has slowed the completion of critical projects. For example, as of July 2004, the airports had not acquired all needed rescue vehicles, and upgrades of the main runway at Pago Pago International had to be phased in over several years. 
Despite significant progress in upgrading the airport’s infrastructure and rescue response capability, an American Samoa airport official estimated that, given the amount of federal funding available, the airport would probably not reach an acceptable standard until 2007. An incident at Pago Pago International in August 2003 illustrates the impact of delays in upgrading the airport runway surface. The main runway had to close for 2 weeks because of the presence of foreign object debris. Hawaiian Air, which provides the only service between Pago Pago and Honolulu, suspended service after one of its jets ingested debris on landing at Pago Pago, damaging one of its engines. Service did not resume until after emergency repairs to the runway, stranding travelers in American Samoa for 2 weeks. The airports recently acquired two aircraft rescue and firefighting vehicles, which are now available for use at Pago Pago International. However, two additional rescue vehicles are still needed, one each for the Fitiuta and Ofu airports, according to an airport official in American Samoa. At Pago Pago International, crowded commercial jets arrive and depart despite a lack of maritime rescue capability. The airports have had to delay acquisition of this essential rescue equipment because of other priorities for the use of available grant funds. Future grant funds are to be used to purchase additional aircraft rescue and firefighting vehicles and a maritime rescue craft. According to an American Samoa official, the airport generates relatively little revenue from passenger facility charges of up to $4.50 per boarding passenger—a key revenue source for airports in the United States. Because only eight flights per week depart from Pago Pago International, passenger facility charges at that airport generate about $300,000 per year, which is insufficient to support any significant infrastructure upgrades or matching contributions, the official stated. 
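The revenue figures in this paragraph can be roughly reconciled with a few lines of arithmetic; the boarding and per-flight counts below are inferred from the reported numbers, not taken from the report.

```python
# Reported figures: $4.50 passenger facility charge (PFC) cap per boarding
# passenger, roughly $300,000 in annual PFC revenue, and eight departing
# flights per week from Pago Pago International.
pfc_rate = 4.50
annual_revenue = 300_000
flights_per_year = 8 * 52  # 416 departures per year

implied_boardings = annual_revenue / pfc_rate                 # about 66,700 per year
passengers_per_flight = implied_boardings / flights_per_year  # about 160
```

These implied figures are also consistent with the officials' rough estimate that a $20 charge on the same traffic would yield more than $1 million per year (about 66,700 boardings at $20 each is roughly $1.3 million).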
American Samoa officials pointed out that foreign airports in the Pacific islands charge as much as $25 per departing passenger. They roughly estimated that if Pago Pago International were to charge $20 per departing passenger, it would generate more than $1 million per year. However, the $4.50 cap is statutory; Congress raised the cap from $3.00 to $4.50 in FAA’s 2000 reauthorization legislation and elected not to raise it again in FAA’s 2004-2007 legislation. Because the American Samoa government did not complete single audits for fiscal years 1998-2003 within the time frame specified in the Single Audit Act, overall accountability for the Airport Improvement Program in American Samoa was limited. The single audits for fiscal years 1998-2000 did not test Airport Improvement Program expenditures. The 2001 single audit report tested several of the program’s expenditures and found that the American Samoa government had received federal funds in excess of allowable federal expenditures and had not met the matching requirements of FAA grants. In addition, the auditors found that the American Samoa government was not in compliance with drawdown requirements for FAA funds, because the funds requested were not supported by proper documentation. According to prescribed procedures, these findings were forwarded to the U.S. Department of Transportation Inspector General, who would determine whether FAA needed to take remedial measures to improve the American Samoa Department of Port Administration’s financial accountability. The FAA official responsible for American Samoa stated that the Airport Improvement Program grantee had complied with accountability requirements. The official reported that, throughout projects, he received contractors’ monthly requests for reimbursement on a timely basis, as well as weekly construction progress reports from American Samoa airport officials. 
We asked airport officials in American Samoa to document that the contract for the major runway extension project was bid on competitively and that FAA reviewed and approved the contract award and contract change orders. The airport officials complied with this request, and FAA officials confirmed that they reviewed the bid process and all change orders. FAA officials also stated that there were no unresolved bid protests for any projects in American Samoa. DOT’s Federal-aid Highway Program provides funding to state transportation agencies for the planning and development of an integrated, interconnected highway system important to nationwide commerce and travel. The primary focus of the program is funding construction and rehabilitation of the National Highway System (NHS)—including the Interstate System—and improvements to public roads, with some exceptions, such as local roads. In 1970, the Federal-Aid Highway Act established, among other programs, the Territorial Highway subprogram; since then, the Federal-aid Highway Program has provided for the improvement of roads in American Samoa. Although Federal-aid Highway Program projects in American Samoa are funded under a different statute than projects in the 50 states, the territory’s projects are administered in the same manner as those in the states, with the territorial transportation agency functioning as the state agency. DOT’s Federal Highway Administration (FHWA) Hawaii Division Office administers three main subprograms in American Samoa under the Federal-aid Highway Program. Territorial Highway subprogram. The subprogram’s purpose is to assist American Samoa and the other U.S. territories in constructing and improving their arterial highways and necessary interisland connectors. Territorial Highway funds can be used for improvements on all routes designated as part of the Territorial Highway System. High Priority Projects subprogram. 
The subprogram provides designated funding for specific projects described in law and determined by Congress to be high priority. Emergency Relief subprogram. Subprogram funds are intended for the repair and reconstruction of federal-aid highways and roads on federal lands that have suffered serious damage as a result of natural disasters or catastrophic failures from an external cause. The funds may be used for repair work to restore essential travel, minimize the extent of damage, or protect the remaining facilities. Table 12 shows total annual funding for federal highway planning and construction in American Samoa for fiscal years 1999-2003. The table also shows funding for the Territorial Highway, High Priority Projects, and Emergency Relief subprograms. Federal funds account for 100 percent of the cost of federal highway construction projects in American Samoa. The Territorial Highway subprogram provides a set amount of $36.4 million each fiscal year for the U.S. territories. Of this amount, American Samoa and the Commonwealth of the Northern Mariana Islands each receive 10 percent, while Guam and the Virgin Islands each receive 40 percent, according to a 1993 allocation formula. Territorial Highway funds are available for expenditure in the fiscal year in which they are awarded and for up to 3 additional years. High Priority Projects and Emergency Relief funds are available until expended and are subject to an annual obligation limit. The obligation limit for Emergency Relief funding in the territories as a group is $20 million. FHWA’s Hawaii Division Office is responsible for administering the Federal-aid Highway Program in American Samoa, while the American Samoa Department of Public Works typically handles the actual work, including planning and construction supervision. 
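The 1993 allocation formula described above can be encoded directly; the sketch below simply applies the percentages stated in this section to the $36.4 million annual total.

```python
# Territorial Highway subprogram: $36.4 million per fiscal year, divided
# among the four territories under the 1993 allocation formula.
TOTAL = 36_400_000
shares = {
    "American Samoa": 0.10,
    "Northern Mariana Islands": 0.10,
    "Guam": 0.40,
    "Virgin Islands": 0.40,
}
allocations = {territory: TOTAL * share for territory, share in shares.items()}
# American Samoa's annual share works out to about $3.64 million.
```

Because the percentages are fixed in law rather than driven by need or usage, each territory's annual amount is constant from year to year.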
The FHWA-Hawaii Division Office estimated that it approved, funded, and initiated a total of 43 projects for the Territorial Highway, High Priority Projects, and Emergency Relief subprograms in American Samoa in fiscal years 1999–2003. Officials said that about 19 projects were completed or near completion as of March 2004. Many of these projects were to construct and rehabilitate different segments of the island’s main road—Route 1—and other village roads. One of the completed projects we viewed restored a segment of Route 1 with new pavement, curb and gutter, a new concrete revetment on one side, and an embankment to protect the road from falling rock on the other side. FHWA officials characterized the goal of the Federal-aid Highway Program in terms of project completion more than performance. The main goal for federal highway projects in American Samoa is to complete funded projects listed in American Samoa’s Five-Year Highway Division Master Plan. The master plan serves as a guidebook for highway development goals in American Samoa and sets forth sequenced budgets and time frames for the program’s main priority—to rebuild the heavily trafficked corridor stretching from American Samoa’s main airport to Breakers Point. According to a U.S. Department of Transportation (DOT) official, American Samoa, as a Federal-aid Highway Program grantee, is generally subject to the same construction and program regulations as a state grantee. The program’s financial accountability is determined in part by the results of single audit reports. Officials from FHWA’s Hawaii Division Office said that the office relies on the American Association of State Highway and Transportation Officials’ greenbook, A Policy on Geometric Design of Highways and Streets, for technical (construction) accountability standards. The greenbook contains specific nationwide design controls and criteria for the optimization and improvement of highways and streets.
According to DOT officials, the Buy America Act applies to the procurement of materials such as steel, iron, and other manufactured goods that are used in all Federal-aid Highway Program construction projects. The act does not apply to procurement of engineering or other services. Federal law also requires competitive bidding for contracting, equipment, and other services, and federal-aid highway projects must follow other general provisions for awarding contracts, construction, prevailing wage rates, nondiscrimination in hiring practices, and other requirements. In addition, FHWA’s Hawaii Division Office must comply with general project approval and oversight requirements, although federal law defines no specific level of oversight for projects in American Samoa. According to DOT officials, projects in fiscal years 1999-2003 were completed in a timely manner, within federal regulations, and in accordance with federal highway greenbook standards. Officials said that the level of oversight and control of highway funds to American Samoa is determined solely by the FHWA Hawaii Division Office. Officials stated that they visit American Samoa frequently (at least once per quarter, not including emergency events) to ensure that projects continue to meet these goals. FHWA officials said that federal-aid highway programs in American Samoa have improved significantly in the past several years. Nonetheless, officials acknowledged that documentation and certain organizational capability issues in the American Samoa Department of Public Works have been a problem in the past, although they stated that this situation has improved as well. The weather and topography in American Samoa present significant barriers to highway construction and maintenance. Tropical storms cause major problems, particularly on Route 1, which runs along the shoreline.
Because some roads throughout the island are built into narrow terraces on hillsides, storms often wash out roadbeds or cause landslides. Because the American Samoa government did not complete single audits for fiscal years 1998-2003 within the time frame specified in the Single Audit Act, overall accountability for the Federal-aid Highway Program was limited. Officials from the Federal Highway Administration’s Hawaii Division Office indicated that they were confident that federal-aid highway projects initiated in fiscal years 1999-2003 were carried out according to the program’s requirements and standards. According to the agency, the American Samoa government provides invoices or other documentation for each bill submitted to the Hawaii Division Office for reimbursement. However, the delinquent 1998-2001 single audit reports cited noncompliance with the Davis-Bacon Act. The reports also found that the program lacked formal procedures regarding the retention of road sampling results in 1998 and that documentation for expenditures in at least 3 of those years could not be found. Medicaid was established in 1965 as a joint federal-state program that finances health care coverage for certain low-income families, children, pregnant women, and individuals who are aged or disabled. Medicaid consists of mandatory health care services, which participating states and territories must offer to certain categories of beneficiaries, and optional services, which states and territories can elect to offer under a federally approved state Medicaid plan. In return for the Medicaid services that states and territories provide, the federal government pays each state and territory a federal medical assistance percentage of its Medicaid expenditures, which is determined through a statutory formula based on states’ per capita income.
Under this formula, states and the District of Columbia are generally eligible to receive reimbursement for 50 to 83 percent of their Medicaid expenses with no cap on the federal share. However, under federal law, American Samoa can receive federal funding for only 50 percent of its Medicaid expenses up to a maximum dollar ceiling, or cap. In fiscal year 2001, Medicaid had more than 46 million enrollees nationwide, and federal and state Medicaid expenditures totaled $228 billion. Medicaid is administered by the U.S. Department of Health and Human Services’ (HHS) Centers for Medicare & Medicaid Services. Table 13 reflects the federal funding received by American Samoa for its Medicaid Program since fiscal year 1999. American Samoa operates its Medicaid program under a statutory waiver, which exempts it from most Medicaid laws and regulations but not the statutory 50 percent federal match or cap. As a result, American Samoa’s “Plan of Operations” approved by HHS has only three requirements: federal payments may not exceed the cap, the federal matching rate may not exceed 50 percent, and American Samoa must provide all mandatory Medicaid services. All inpatient care and virtually all outpatient care are provided by the territory’s only hospital, the Lyndon Baines Johnson Tropical Medical Center (LBJ Hospital). Unlike the 50 states, American Samoa does not enroll individuals in a separate Medicaid program based on eligibility determinations. Instead, Medicaid funds in American Samoa are combined with LBJ Hospital’s other sources of revenue to support a system of free universal health care. In lieu of federal Medicaid reimbursements for specific services to enrolled Medicaid beneficiaries, HHS requires that American Samoa submit an annual estimate of the number of people “presumed eligible” for Medicaid. According to its Medicaid plan, American Samoa defines its presumed eligible population as the share of its population living below the U.S.
poverty level, which in American Samoa is 61 percent, according to the 2000 Census. It is not known what the federal Medicaid expenditure for American Samoa would be if the Medicaid Program were administered there in the same manner as in the 50 states. However, according to HHS officials in Region IX, Centers for Medicare & Medicaid Services headquarters, and Honolulu, the federal Medicaid expenditure in American Samoa would probably be greater if there were no statutory funding cap. According to its approved Medicaid plan, American Samoa is required to provide standard Medicaid mandatory services, which include physician services; laboratory and X-ray services; inpatient and outpatient hospital services; medical screening of minors; family planning; nurse-midwife and certified nurse-practitioner services; nursing facilities for individuals 21 years or older; and home health care for individuals entitled to nursing facilities. If these services are not available on-island, American Samoa is required to make arrangements for them to be provided off-island. In addition to meeting approved Medicaid plan requirements, American Samoa must also ensure that LBJ Hospital, American Samoa’s only hospital facility and a provider of the territory’s Medicaid services, complies with certain Medicare hospital requirements. Specifically, HHS requires hospitals receiving payment under Medicaid to meet hospital conditions of participation established under the Medicare Program. These conditions are required by the Social Security Act and are intended to protect patient health and safety and ensure that high-quality care is provided. To assess LBJ Hospital’s compliance with these conditions, HHS conducts an on-site survey about every 3 years. Further, HHS requires the American Samoa Medicaid Program to submit both an annual budget request and quarterly expenditure reports.
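The funding arrangement described earlier in this appendix, a 50 percent federal match subject to a statutory dollar ceiling, can be sketched as a small function. The cap amount used below is a hypothetical placeholder, not the actual statutory ceiling.

```python
def federal_medicaid_share(expenses, match_rate=0.50, cap=None):
    """Federal reimbursement under the match-with-ceiling rule described
    in the text: a percentage of expenses, limited to a dollar cap when
    one applies (as for American Samoa). The cap figures used in the
    examples below are hypothetical placeholders."""
    share = expenses * match_rate
    return min(share, cap) if cap is not None else share

# With a hypothetical $10 million cap:
# $15M in expenses -> 50% match = $7.5M, under the cap
assert federal_medicaid_share(15_000_000, cap=10_000_000) == 7_500_000
# $30M in expenses -> 50% match = $15M, limited to the $10M cap
assert federal_medicaid_share(30_000_000, cap=10_000_000) == 10_000_000
```

The same function with `cap=None` models the uncapped arrangement the text describes for the 50 states, where the match rate itself (50 to 83 percent) would vary by state.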
In addition, American Samoa must submit its annual estimate of the presumed eligible Medicaid population, which HHS must approve before awarding Medicaid funds. HHS also relies on single audit reports to assess accountability for the federal Medicaid funds provided to American Samoa. No data showing whether all required Medicaid services were being provided to the eligible population, on- or off-island, or indicating the quality of care were available for the period of our review. HHS officials stated that they had some assurance that a minimum standard of care was provided, because LBJ Hospital must meet Medicare certification standards to participate in Medicare and Medicaid. However, federal and American Samoan officials also acknowledged that the hospital, which was built in the late 1960s, has struggled to meet the conditions of participation and to provide adequate health care. The quality of health care in American Samoa, supported partially by Medicaid funds, depends largely on the standards of care at LBJ Hospital. However, the hospital must contend with an inadequate facility, a lack of qualified medical staff, budget constraints, and American Samoa’s remote location. LBJ Hospital persistently suffers from serious fire-safety code deficiencies, which threaten its ability to maintain its Medicare certification. HHS has conducted on-site Medicare certification surveys of the hospital every few years, most recently in November 2003. The hospital has failed to correct its fire-safety problems despite formal threats by HHS, beginning in 1993, to terminate its certification.
The 2003 survey cited many of the same deficiencies identified in earlier surveys conducted in 1997 and 2000, including a lack of “basic features of fire protection, which are fundamental to all health care facilities.” The hospital’s primary fire safety code violations were due to noncompliant smoke and fire detection and alarm systems, the failure to install automatic sprinklers, and inadequate water pressure. In April 2004, the hospital submitted a “plan of corrections,” as required, in response to the deficiencies cited in the hospital certification survey. The plan of corrections, which has been approved by HHS, indicated that the hospital was dependent on U.S. Department of the Interior (DOI) capital improvement grant funds of about $1.5 million annually to address infrastructure deficiencies cited in Medicare certification surveys. In fiscal year 2004, the hospital reprogrammed $650,000 of these funds to install a facility-wide sprinkler system; however, the hospital reported that this project will not be completed until December 2005. The hospital also cautioned, “LBJ will continue to face a fixed barrier of time, money and space in efforts to renovate the entire campus facility to fire safety code requirements.” Although funds from DOI are essential to LBJ Hospital’s ability to address critical infrastructure deficiencies cited by HHS, the two federal departments have not formally collaborated on the hospital’s priorities for using DOI’s capital improvement grants. According to hospital officials, the DOI capital improvement grants are sufficient to support only one or two new construction projects per year. The hospital also reported that it uses these grants for many other hospital facility upgrades beyond those needed to address deficiencies cited in Medicare certification surveys.
During our visit to the hospital, we found that although the newly renovated areas had been fitted with automatic sprinklers, the sprinklers were not yet hooked up or functional. LBJ Hospital officials attributed this situation to inadequate water pressure. LBJ Hospital’s ability to deliver adequate health care was also hampered by a lack of qualified staff. According to LBJ officials, the hospital has difficulty attracting U.S.-certified medical doctors and relies mostly on medical staff that attended medical school in Fiji. The hospital also suffers from a shortage of nurses. Recent Medicare certification surveys found that the hospital did not meet minimum standards for 24-hour nursing services. With only 22 registered nurses available, the hospital acknowledged that it does not have a large enough nursing staff to cover every shift on every unit, 24 hours per day, 7 days per week, as the standard requires. LBJ Medical Center Authority officials stated that they have established incentive programs to try to attract medical doctors and registered nurses but that the relatively low salaries and the territory’s remote location make it difficult to attract qualified staff. The hospital also had unmet needs for medical technicians such as radiology and operating room technicians. LBJ Hospital’s ability to upgrade its facility and hire needed staff is severely hampered by chronic budget deficits and outstanding debt, according to hospital officials. Key local and federal financial support for the hospital has either decreased or remained constant. The hospital’s annual subsidy from the government of American Samoa has dropped from about $8.1 million in fiscal year 1998 to about $5.3 million in fiscal year 2003. Since 1998, DOI has directly provided LBJ Hospital with about $7.8 million annually from its government operations grant. This amount has not been adjusted for inflation.
Although its federal Medicaid funding has increased over time to cover the cost of inflation, HHS and American Samoa Medical Center Authority officials reported that the cap on the funding probably results in a smaller federal contribution than American Samoa would receive if it were funded in the same way as the 50 states. According to a hospital official, patient revenues increased during fiscal years 1998-2003; however, much greater increases are needed if the hospital cannot identify other sources of revenue. The Medical Center Authority has proposed a plan to charge patients higher fees to cover about 20 percent of the cost of their medical care. However, hospital officials believe that passing local legislation to authorize the increases would be difficult, since the public views medical care as a free service or entitlement. Currently, the hospital charges a nominal facility fee of $5 per outpatient visit and $20 per day for inpatient stays. The hospital charges nonresidents $10 for outpatient visits and $100 per day for inpatient stays. American Samoa’s remote location also hampers the delivery of medical care. Costs of importing supplies are high and, as stated, attracting qualified medical and other personnel is difficult. Medical care not available in the territory must be provided off-island at a much higher cost. For example, patients in need of long-term care must be moved to nursing homes off the island, usually in Hawaii or California. In fiscal years 2001-2003, the hospital reported that the cost of care referred off-island averaged over $2 million per year—about 8 percent of the hospital’s total expenses. Because the American Samoa government did not complete single audits for fiscal years 1998-2003 within the time frame specified in the Single Audit Act, overall accountability for Medicaid funding was limited.
Medicaid expenditures were not included in American Samoa’s single audits for fiscal years 1998-2001, because they were included in LBJ Hospital’s financial statements; however, an independent audit of the hospital’s financial statements for fiscal year 2001 found significant problems. The hospital had difficulty locating documentation to support its accounting records and lacked adequate evidential matter to support a number of recorded transactions. Because of this and other problems, the auditor was unable to express an opinion on the financial statements printed in the audit. In addition, in reviewing compliance with internal controls, the auditors found several instances of noncompliance that they considered to be reportable conditions and material weaknesses. An independent audit of LBJ Hospital for fiscal years 1998-2000 found similar problems, which also resulted in the auditor’s inability to express an opinion on the financial statements for those years. The purpose of the Head Start Program is to promote school readiness by providing comprehensive services designed to foster healthy development in 3- to 5-year-old children from low-income households. The program, created in 1965, is administered by the HHS Head Start Bureau, Administration on Children, Youth and Families, Administration for Children and Families. Grants are awarded by the HHS regional offices. Federal appropriations for the Head Start Program nationwide have grown substantially in recent years, from $1.552 billion in fiscal year 1990 to $6.668 billion in fiscal year 2003. The expansions have been used to increase the number of children served and provide “quality improvement” activities. Funds to grantees are awarded at the discretion of HHS from state allocations determined by a formula set forth in law after set-aside provisions have been applied. Payments to the U.S.
territories of Guam, American Samoa, the Commonwealth of the Northern Mariana Islands, and the Virgin Islands are not to exceed one-half of 1 percent of the total annual appropriation. Table 14 shows annual grants to the Head Start Program in American Samoa during the period of our review. During the annual grant award process, HHS regional offices communicate to Head Start grantees their level of funded enrollment. For American Samoa’s Head Start Program, known as Early Childhood Education, HHS set the enrollment level for fiscal year 2003 at 1,532 funded slots. The annual grant award includes a base amount to cover basic operating expenses, plus additional funds such as cost of living adjustments and quality improvements, which are included in the base amount for the next fiscal year. A grant award may also include nonrecurring funds for training and technical assistance and for program improvements such as new facilities. According to an HHS official, funds may be carried over for 1 year. The Head Start Program in American Samoa typically does not carry over any funds, with the recent exception of some supplemental grant funds used for construction of new facilities. Head Start grantees provide a range of individualized services in the areas of education and early childhood development; medical, dental, and mental health; nutrition; and parent involvement. The targeted population is 3- to 5-year-old children from low-income families. American Samoa has had a Head Start Program for more than 30 years. Early Childhood Education, the territory’s only Head Start grantee, is part of the American Samoa Department of Education. Program officials reported that they had 54 classrooms and 111 classroom instructors as of March 2004. (Fig. 12 shows a Head Start classroom.) According to Early Childhood Education officials, there are more eligible children in American Samoa than available slots; however, the program serves virtually all of the children who apply for it.
Not all eligible children apply or remain enrolled throughout the year. Some children start each year on a waiting list but are eventually able to participate because of attrition. The Head Start Program in American Samoa is subject to the same goals, standards, and oversight requirements as Head Start Programs in the 50 states, according to HHS officials. Head Start grantees must adhere to a set of performance standards required in federal regulations. The performance standards define the services that grantees are to provide to the children and the families they serve and constitute the Head Start Program’s expectations and requirements that grantees must meet. The performance standards cover five service areas: services for children with disabilities; education (i.e., classroom instruction); building family and community partnerships; health, including medical, dental, and mental health screening as well as nutrition and safety; and program management and operation. The Head Start Act and accompanying regulations require the HHS Head Start Bureau to conduct an on-site review every 3 years to ensure that performance standards are met. The grantee must respond in writing with a plan for correcting any findings of noncompliance with federal standards. In addition to undergoing the on-site reviews, grantees are required to submit an annual “Program Information Report” that tracks program characteristics and performance data. Several processes exist to ensure the financial accountability of the national Head Start Program. First, an HHS fiscal analyst completes an annual checklist for assessing a grantee’s financial accountability and makes a recommendation for approving funding to the grantee. Second, the grantee’s budget figures are included in the annual application, which the fiscal analyst reviews to make sure that they are allowable.
The grantee must have an approved indirect cost rate and must certify that administrative costs do not represent more than 15 percent of the total approved costs of the program. Third, the single audit reports test for accountability of the program. Fourth, quarterly financial status reports must be sent by the grantee to HHS Region IX. Finally, during HHS’s triennial, on-site monitoring review, a fiscal analyst reviews the program to ensure that annual audits are up to date and that financial management systems, inventory, and procurement processes include required elements. Data were not available to assess whether the goal of improving low-income children’s readiness for school has been achieved. However, in its most recent triennial on-site monitoring review, HHS found that Early Childhood Education “operates a very high quality Head Start program.” Additionally, HHS officials in Region IX highlighted progress made by the grantee in constructing modern, new classroom facilities with the help of supplemental grant funds. In May 2003, a team of nine reviewers participated in a weeklong review to assess the degree to which services were implemented according to Head Start Performance Standards. The review found that the program provided most required services, and it cited strong community partnerships and a high level of parent involvement and support for the program. Additionally, the review highlighted the literacy program, which used locally designed curriculum and materials that incorporated native culture, community, and environment as well as family traditions. Part of the success of this literacy program, according to the review, was attributable to the use of a community lending library and a partnership with a community-based organization, called “Read to Me Samoa,” to promote child and family literacy by emphasizing both English and Samoan languages as well as cultural traditions.
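The 15 percent administrative-cost certification among the financial accountability processes described above amounts to a simple ratio check. A minimal sketch follows; the function name and dollar figures are illustrative, not drawn from any actual grantee budget.

```python
def admin_cost_compliant(admin_costs, total_approved_costs, limit=0.15):
    """True if administrative costs do not exceed 15 percent of the
    total approved costs of the program, per the certification
    described in the text. Figures in the examples are illustrative."""
    return admin_costs <= limit * total_approved_costs

# Illustrative figures only:
assert admin_cost_compliant(120_000, 1_000_000)      # 12% of costs, within the limit
assert not admin_cost_compliant(200_000, 1_000_000)  # 20% of costs, exceeds the limit
```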
The review also found that the Early Childhood Education Program implements a model oral health program, in partnership with LBJ Hospital, in which virtually all children receive dental screenings and follow-up treatment. HHS officials in Region IX also highlighted the tremendous improvements in the quality of the classroom facilities owing to the help of supplemental grant funds. Seven new facilities providing a total of 38 classrooms devoted exclusively to Head Start either have been completed or are in the process of being constructed, according to an American Samoa official. Additional facilities are planned, depending on the future availability of Head Start grant funds. During our visit to American Samoa, we toured several Early Childhood Education classrooms, including those in two of the newer facilities. The classrooms were spacious, well lit, and well ventilated. The addition of these facilities will enable several classes previously held in village homes to move into modern institutional settings; according to Early Childhood Education officials, one of the new facilities has already allowed children to move out of overcrowded classrooms in eight private homes. Currently, the program relies on 19 village homes and 13 elementary schools to provide classrooms. Although HHS officials viewed the Early Childhood Education Program favorably, some challenges remain. Early Childhood Education is unable to meet the performance standards for providing mental health services to children and their families, and adequate playground space with secure perimeter fencing is lacking. Additionally, a language barrier poses further challenges for assessing children and acquiring curricular materials.
Some of these challenges are as follows: American Samoa does not have mental health professionals available to enable the Early Childhood Education Program to fulfill the HHS performance standard of providing a “comprehensive mental health program that provides prevention, early identification and intervention.” Because of this lack of access to mental health professionals, the program has requested only supplemental training and technical assistance for a consultant to train Early Childhood Education staff and parents and raise their awareness of mental health issues. Early Childhood Education officials stated that their priority for the use of supplemental grant funds is to continue to build additional classrooms, which leaves no funds for adequate playgrounds or perimeter security fencing. Head Start has a new requirement for assessing the educational achievement of children enrolled in the program; however, the assessment tool is not available in Samoan, the primary language in Early Childhood Education classes. Additionally, because very few curricular materials are available in Samoan, the program must devote additional resources to creating curricular materials locally. Because the American Samoa government did not complete single audits for fiscal years 1998-2003 within the time frame specified in the Single Audit Act, overall accountability for the Head Start Program was limited. Officials from HHS Region IX stated that the grantee met the region’s financial reporting requirements, but they cited the program for the lack of governmentwide single audits. Although the triennial on-site review team includes a fiscal analyst to review the program’s fiscal management, HHS officials explained that this review does not rise to the level of a detailed audit. When the May 2003 on-site review was conducted, the reviewers pointed out that a single audit for federal grants to American Samoa had not been conducted since 1997.
HHS accepted the grantee’s response that efforts were under way by the American Samoa government to come into compliance with the Single Audit Act. After the May 2003 review, the American Samoa government completed single audits for fiscal years 1998-2000 but did not test expenditures in the Head Start Program, according to HHS officials. Management of federal grant funds in American Samoa can be summarized as follows. Most federal grant accounts are managed through the American Samoa government Department of Treasury; fewer grant accounts are established through TOFR and the independent authorities. The grantee requests use of funds and submits invoices and receiving reports to Treasury; TOFR grantees submit invoices and receiving reports to TOFR for payment. The American Samoa Budget Office and TOFR verify balances for their respective grant accounts, and Treasury reconciles invoices with grant funds and issues payment to vendors. The American Samoa Office of Procurement handles requests for goods and services if competitive bidding is necessary, while the independent authorities work through their own procurement systems and use their own payment systems. The financial reports of Treasury, TOFR, and the independent authorities are merged into the American Samoa Government’s Comprehensive Annual Financial Report (CAFR), and annual single audit reports include a schedule of federal expenditures for nearly all federal awards managed by Treasury, TOFR, and the independent authorities. Independent authorities, such as the Lyndon Baines Johnson Tropical Medical Center, the American Samoa Power Authority, and the American Samoa Community College, operate semiautonomously from the American Samoa government. The following are GAO’s comments on the Department of the Interior’s letter dated October 28, 2004. 1. We did not refer to any cultural biases in our report. The only use of the word “cultural” is in the context of Head Start’s teaching cultural traditions. Neither did we refer to political obstacles.
We did note that hospital officials stated that passing local legislation to increase fees would be difficult. 2. See page 21. 3. See footnote 13, page 47, appendix II. 4. DOI implies that we assessed it as having “failed to act” in response to the American Samoa government’s noncompliance with the Single Audit Act. In fact, we judged that DOI was “slow to act” (see pp. 28-31). We recognize the department’s long-standing struggle for accountability in the insular areas; our report refers to most of the measures that DOI has taken to improve accountability in American Samoa. However, as we note in the report, DOI did not set forth a schedule for American Samoa to comply with the Single Audit Act until 2002—almost 3 years after the due date for the fiscal year 1998 report. 5. DOI asserts that it has taken all available actions short of cutting off funds in a high-risk status declaration. It further argues that a high-risk status declaration would imperil funding from other agencies to American Samoa. However, a high-risk declaration does not mean an immediate suspension of U.S. funding. Our recommendation is not that DOI alone declare American Samoa a high-risk grantee, but rather that the federal agencies coordinate a response to lax accountability in American Samoa. Improving federal oversight and monitoring will increase the efficiency and accountability of programs in American Samoa, to the benefit of most American Samoans. The following is GAO’s comment on the Department of Health and Human Services’ letter dated November 18, 2004. 1. The Centers for Medicare & Medicaid Services of the Department of Health and Human Services states that it works with American Samoa to ensure that Medicaid budget and expenditure reports are completed timely and accurately and that it has experienced no significant problems.
However, our discussion in appendix VI of accountability at LBJ Hospital—the primary provider of medical services in American Samoa and the primary recipient of Medicaid funds to American Samoa—raises questions about internal controls at the hospital. The hospital’s auditor for fiscal years 1998-2000 was unable to express an opinion, because the hospital declined to present any statements of cash flow. The following are GAO’s comments on the American Samoa Government’s letter dated November 5, 2004. 1. The American Samoa government suggests coordination among federal agencies to eliminate duplicate reporting and monitoring. As we note on page 31, ED has already declared American Samoa a high-risk grantee and has implemented increased reporting. ED provides almost 18 percent of the grant dollars we reviewed. ED provides several other grants that were not included in this review. In addition to the individual named above, Eugene Beye, Howard Cott, Adrienne Spahr, Ann Ulrich, Reid Lowe, Mark Dowling, and Mark Braza made significant contributions to this report.
American Samoa, a U.S. territory, relies on federal funding to support government operations and deliver critical services. The Secretary of the Interior has administrative responsibility for coordinating federal policy in the territory. Under the Single Audit Act of 1996, American Samoa is required to perform a yearly single audit of federal grants and other awards to ensure accountability. To better understand the role of federal funds in American Samoa, GAO (1) examined the uses of 12 key grants in fiscal years 1999-2003, (2) identified local conditions that affected the grants, and (3) assessed accountability for the grants. In fiscal years 1999-2003, 12 key federal grants supported essential services in American Samoa. These services included support for government operations, infrastructure improvements, nutrition assistance, the school system, special education, airport and highway infrastructure improvements, Medicaid, and early childhood education. A shortage of adequately trained professionals, such as accountants and teachers, as well as inadequate facilities and limited local funds hampered service delivery or slowed project completion for many of the grants. For example, American Samoa's only hospital lacked an adequate number of U.S.-certified medical staff. Further, the hospital had persistent and serious fire-safety code deficiencies that jeopardized its ability to maintain the certification required for Medicaid funding. American Samoa's failure to complete single audits, federal agencies' slow reactions to this failure, and instances of theft and fraud limited accountability for the 12 grants to American Samoa. The American Samoa government did not comply with the Single Audit Act during fiscal years 1998-2003. The 1998-2000 audit reports, completed in 2003, and the 2001 audit report, completed in 2004, cited pervasive governmentwide and program-specific accountability problems. 
Despite the audits' delinquency, federal agencies were slow, or failed, to communicate concern to the American Samoa government or to take corrective action. In addition, accountability for all of the grants was potentially undermined by instances of theft and fraud. For example, the American Samoa Chief Procurement Officer, whose office handles procurements for most of the grants GAO reviewed, was convicted of illegal procurement practices.
Section 619 restrictions on engaging in proprietary trading or investing in or sponsoring hedge funds or private equity funds apply to banking entities, which the section defines to include any insured depository institution, company that controls an insured depository institution, company treated as a bank holding company for purposes of Section 8 of the International Banking Act of 1978, and affiliate or subsidiary of such entity. The section defines proprietary trading as engaging as a principal for the trading account of the banking entity, with the term trading account separately defined as an account used principally for the purpose of selling in the near term (or otherwise with the intent to resell in order to profit from short-term price movements). The act’s proprietary trading prohibition provides a number of exemptions for permitted activities, including activities related to market making and underwriting, risk-mitigating hedging, transactions on behalf of customers, and transactions in government securities, among others. The act limits the permissibility of some of these activities to specific purposes and establishes overall criteria prohibiting such activities if they would result in a material conflict of interest, would expose the entity to high-risk assets or trading strategies, or would threaten the institution’s safety and soundness or U.S. financial stability. However, the act does not define the permissible activities themselves. For example, market making-related activity and underwriting are permitted to the extent they are “designed not to exceed the reasonably expected near term demands of clients, customers, or counterparties,” but the provision does not define “market making” or “underwriting.” Similarly, permissible risk-mitigating hedging activities must be designed to reduce “specific risks” related to individual or aggregated positions, contracts or other holdings.
The provision does not define what constitutes the practice of risk-mitigating hedging. As a result, regulations that further define what are and are not permitted activities could significantly affect the scope of the new restrictions. Similarly, the act’s restrictions on hedge fund and private equity fund investments allow for a de minimis amount of investment to facilitate customer-focused advisory services. This amount cannot exceed 3 percent of the total ownership interests of the fund 1 year after it is established and must be immaterial to the banking entity as defined by the regulators, and no banking entity’s aggregated investments in all such funds may exceed 3 percent of its Tier 1 capital. Section 619 of the act generally requires the appropriate federal banking agencies, the Commodity Futures Trading Commission, and the Securities and Exchange Commission (SEC) to promulgate regulations governing proprietary trading by the entities they regulate. The Federal Reserve will issue regulations for any company that controls an insured depository institution or that is treated as a bank holding company for purposes of Section 8 of the International Banking Act of 1978, any supervised nonbank financial company, and any subsidiary of these companies if another regulator is not the primary financial regulatory agency. The appropriate federal banking agencies are to issue regulations jointly with respect to insured depository institutions, including national banks and federal savings associations regulated by the Office of the Comptroller of the Currency (OCC), state-chartered banks that are not members of the Federal Reserve System and state-chartered thrifts regulated by the Federal Deposit Insurance Corporation (FDIC), and FDIC-insured state banks that are members of the Federal Reserve.
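The two 3-percent tests described above (a per-fund limit on ownership interests and an aggregate limit against Tier 1 capital) can be illustrated with simple arithmetic. The following is a minimal sketch, not a compliance calculation; the function name and all dollar figures are invented for the example, and it omits the statute's separate materiality and timing conditions.

```python
# Hypothetical illustration of the two Section 619 de minimis tests.
# All figures are invented; real compliance involves additional conditions
# (immateriality to the banking entity, the 1-year seeding period, etc.).

def fund_investment_permitted(investment, fund_total_ownership,
                              aggregate_fund_investments, tier1_capital):
    """Check the two 3-percent limits on fund investments."""
    per_fund_ok = investment <= 0.03 * fund_total_ownership
    aggregate_ok = aggregate_fund_investments <= 0.03 * tier1_capital
    return per_fund_ok and aggregate_ok

# A $25M stake in a fund with $1B of total ownership interests (2.5%),
# where the bank's aggregate fund investments are $2B against $100B of
# Tier 1 capital (2%), passes both tests.
print(fund_investment_permitted(25e6, 1e9, 2e9, 100e9))   # True
# A $40M stake in the same fund (4% of ownership interests) fails.
print(fund_investment_permitted(40e6, 1e9, 2e9, 100e9))   # False
```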
The Commodity Futures Trading Commission is to issue regulations with respect to entities it regulates, including futures commission merchants, which are firms that buy and sell futures contracts as agents for customers. Additionally, SEC is to issue rules for the entities it regulates, including registered broker-dealers and investment advisers. To implement the provisions on proprietary trading and hedge fund and private equity fund investments, the act required the Financial Stability Oversight Council (FSOC) to complete a study and make recommendations on implementing the provisions by January 2011. The study included specific recommendations for regulators to monitor and supervise institutions for compliance. Within 9 months of completing this study—by October 2011—the regulators are to adopt implementing regulations. Also as required by the act, the Federal Reserve issued a final rule on February 9, 2011, regarding the timelines for banking entities to bring their proprietary trading and hedge fund and private equity fund investments into conformance with the restrictions, including the process for the granting of extensions. By October 2011, the Federal Reserve, OCC, and FDIC must jointly issue rules to fully implement the proprietary trading and hedge fund and private equity fund restrictions, with SEC and the Commodity Futures Trading Commission required to issue similar rules that cover the entities for which they have primary oversight responsibilities. In developing and issuing these regulations, the agencies are to consult and coordinate with each other. The chairperson of the FSOC—the Secretary of the Treasury—is responsible for coordinating the regulations required by the act. Proprietary trading and hedge fund and private equity fund investments, like other banking and trading activities, provide revenue and create the potential for losses at banking entities.
Financial institutions have conducted proprietary trading at stand-alone proprietary-trading desks and may have also conducted proprietary trading elsewhere in the firm. We analyzed data on activities of the stand-alone proprietary-trading desks of the six largest U.S. bank holding companies from June 2006 through December 2010, but determined through our work that collecting data on other proprietary trading was not feasible because the firms did not separately maintain records on such activities and because of the uncertainty over the types of activities that will be considered proprietary trading until the completion of the required regulatory rulemaking. We also collected data on hedge and private equity fund investments that the bank holding companies believed to be restricted by the act. The revenues from these firms’ stand-alone proprietary trading were generally small in most quarters relative to revenues from all trading and other activities. During the financial crisis, however, losses from these activities accounted for a larger share of the firms’ total trading losses. Revenues and losses from these firms’ hedge fund and private equity fund investments followed a similar trend. Although stand-alone proprietary trading and hedge fund and private equity fund investments contributed to losses during the crisis, such activities affected these firms’ overall net incomes less during that period than did other activities, such as lending and securitization, including positions in mortgage-backed securities or more complex financial instruments that some view as proprietary trading. Some market participants and observers were concerned that the act’s restrictions could negatively affect U.S. financial institutions and the economy by limiting banks’ ability to diversify their income streams and compete with foreign institutions, and by reducing liquidity in asset markets. However, the likelihood of such potential outcomes was unclear. Proprietary trading can take a number of forms.
Proprietary traders often take positions in securities or other products that they think will rise or fall in value over a short period of time in order to profit from the trader’s view of the direction of the market. Proprietary traders also use more complex strategies such as relative value, in which a trader identifies differences in prices between two related securities or other financial products and takes positions in those products to make a profit. For example, a proprietary trader might identify a discrepancy between the pricing of a stock index and the pricing of its underlying stocks, and then take a long position in one and a short position in the other to profit when the discrepancy corrects itself. Banking entities can conduct proprietary trading at desks organized with the specific purpose of trading a firm’s own capital (stand-alone proprietary-trading desks), but some have also conducted what could be considered proprietary trading in conjunction with their market-making activities by accumulating positions in a particular asset at levels that exceed the amount of the firm’s typical or necessary inventory in that asset used to facilitate customer trades. For example, a trader at a market-making desk may anticipate that the price of a particular stock will increase over the short term and purchase and hold more shares of that stock in order to make a larger profit than he or she would otherwise make from buying and selling the product as a market maker. One regulator noted that a trading desk at one of the bank holding companies we reviewed made markets for clients but that the firm also allowed that desk’s traders to hold inventory positions exceeding the amount necessary to facilitate client trades when the traders had a particular view on the direction of the market.
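The relative-value example above reduces to simple long/short arithmetic. A toy sketch follows; all prices are hypothetical and the example ignores transaction costs, financing, and hedge ratios.

```python
# Toy long/short relative-value trade, as described above: buy the leg
# that looks underpriced, short the related leg, and profit when the
# pricing discrepancy corrects. All prices are hypothetical.

index_entry, basket_entry = 98.0, 100.0   # index trades cheap vs. its stocks
index_exit, basket_exit = 100.0, 100.0    # discrepancy corrects itself

long_pnl = index_exit - index_entry       # long the index: +2.0
short_pnl = basket_entry - basket_exit    # short the basket of stocks: 0.0
total_pnl = long_pnl + short_pnl
print(total_pnl)                          # 2.0 per unit traded
```

Because the long and short legs are related, the trade is roughly neutral to the overall direction of the market; the profit comes from the convergence of the two prices rather than from a rise or fall in either one.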
Also, as discussed later in the report, debate exists about the full scope of activities that should be considered proprietary trading, and some define the term to include not only trading activities but other activities conducted by a firm as a principal, such as long-term investments. Trading activities, including proprietary trading, like other banking activities, can create revenues for bank holding companies. Bank regulators, financial institution representatives, and others noted that such activities provide another source of revenues for banks that can diversify their income from lending and other activities. (We discuss the recent levels of revenue from trading activities—including stand-alone proprietary trading—in the next section of this report.) However, trading also poses several types of risks to bank holding companies:

- Market risk: the potential for financial losses due to an increase or decrease in the value or price of an asset or liability resulting from movements in prices, such as interest rates, commodity prices, stock prices, or the relative value of currencies (foreign exchange).

- Liquidity risk: the potential for losses or write-downs to occur if an institution has to exit a position but either cannot do so or can do so only at a significantly reduced price because of an illiquid market due to insufficient buyers or sellers.

- Counterparty credit risk: the current and prospective risk to earnings or capital arising from an obligor’s failure to meet the terms of any contract with the bank or to otherwise perform as agreed.

- Reputation risk: the potential for financial losses that could result from negative publicity regarding an institution’s business practices that results in a decline in customers or revenues or in costly litigation.

- Operational risk: the potential for loss resulting from inadequate or failed internal processes, people, and systems or from certain external events.
These risks will vary depending on the type of product traded. For example, proprietary trading in stocks, which are generally traded in deep and liquid markets, faces lower liquidity risks than trading in less liquid credit and other products, such as some of today’s mortgage-backed securities and collateralized debt obligations (CDO), which would be harder to liquidate quickly in response to a capital shortage at a firm. Hedge fund and private equity fund investments can also pose risks to bank holding companies. Hedge funds, like proprietary trading operations, are subject to market and other types of risk that can result in significant financial losses, and private equity funds are additionally affected by broader changes in the economy that affect the companies in which they have invested. Failures at large financial institutions other than bank holding companies illustrate the potential for financial losses at hedge funds. For example, in 1998, following the near collapse of Long-Term Capital Management, a large hedge fund, the Federal Reserve facilitated a private sector recapitalization. It took this action because of concerns that a rapid liquidation of the firm’s trading positions and related positions of other market participants in already highly volatile markets might cause extreme price movements and might cause some markets to temporarily cease functioning. In 2007, two hedge funds required significant cash infusions from their sponsor, Bear Stearns Asset Management, which was a subsidiary of a broker-dealer holding company, when they experienced losses from holdings of CDOs that contained subprime mortgages. Some policymakers and at least one researcher have raised concerns that another risk associated with proprietary trading and hedge fund and private equity fund investments is systemic risk, which is the possibility that an event could broadly affect the financial system rather than just one or a few institutions.
The extent to which proprietary trading and hedge fund and private equity fund investments pose systemic risks, if at all, is difficult to measure and could depend on the size of the activity, the extent to which other firms are conducting similar activities, and the level of distress in or concerns already present in the markets. Representatives of the six largest U.S. bank holding companies described a variety of methods they use to oversee the risks associated with proprietary and other trading activities and hedge fund and private equity fund investments. These financial institutions described having risk-management infrastructures that include regular meetings of firm executive staff who set policies and procedures regarding firmwide, business-line, and desk-level trading and risk limits. Among the most prominent ways that firms measure the risks and potential losses associated with their trading activities is by calculating their value-at-risk (VaR), which is an estimate of the likely loss that a portfolio of financial instruments will incur as the result of any changes in the underlying risk factors that could affect the value of the assets in that portfolio, including changes in stock prices, interest rates, or other factors. VaR estimates are typically calculated using historical market prices to represent the likely maximum loss that a portfolio will incur with either a 95 or 99 percent statistical probability, and therefore VaR limits are designed with the expectation that daily losses will exceed the limit as much as 5 percent of the time. VaR calculations, among other inputs, are also used by firms to determine how much regulatory capital they must hold, so that as the amount of money the firm could lose under its VaR calculation increases, so does the amount of regulatory capital required to be held as a buffer against those potential losses.
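As context for how such estimates are produced, a minimal historical-simulation VaR calculation might look like the following. This is a sketch only: the function and the daily profit-and-loss figures (in $ millions) are invented, and firms' actual models are far more elaborate (risk-factor mappings, weighting schemes, longer histories).

```python
# Minimal historical-simulation VaR sketch: the loss level expected to be
# exceeded on only (1 - confidence) of trading days, estimated from a
# history of daily P&L. All figures are hypothetical, in $ millions.

def historical_var(daily_pnl, confidence=0.95):
    losses = sorted(-p for p in daily_pnl)  # convert P&L to losses, ascending
    # simple empirical quantile: index of the confidence-level loss
    idx = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[idx]

pnl = [1.2, -0.5, 0.8, -2.0, 0.3, -1.1, 0.9, -0.2, 1.5, -3.0,
       0.4, -0.7, 1.1, -1.8, 0.6, -0.9, 2.0, -0.3, 0.7, -1.4]
print(historical_var(pnl, 0.95))   # 3.0: the worst loss in this short sample
```

Because a 95 percent VaR is expected to be exceeded about 5 percent of the time by construction, days on which actual losses exceed the estimate are an anticipated, monitored occurrence rather than an automatic model failure.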
Each bank holding company calculates VaR limits for specific trading desks or business lines and also sets a firmwide VaR limit. This amount is less than the sum of the individual VaRs because of diversification effects across portfolios—that is, the results of different or opposite movements among assets held by groups within a firm whose gains and losses would offset each other in whole or in part. Trading desks and the firm as a whole are expected to hold positions whose VaRs are below the established limits. Financial institutions noted that they do not rely exclusively on VaR, and described other key aspects of their risk-management activities, including stress testing and risk constraints and limits at particular trading desks or business lines. Financial institutions that conduct both proprietary trading and client-focused activities, such as market making, face a number of what financial regulators and industry participants consider to be potential conflicts of interest that could lead financial institutions to put their own interests ahead of their responsibilities to their clients. However, industry participants noted that these potential conflicts of interest are not unique to proprietary trading activities and can occur in other activities conducted by bank holding companies. In many cases, the activities arising from such conflicts are illegal and violate securities laws, depending on the facts and circumstances surrounding the activity. For instance, financial institutions that conduct proprietary trading could potentially use their clients’ order information for their own benefit in a way that disadvantages the client. One example of such prohibited activity is front running, which can occur when a firm receives a buy or sell order from a client and then uses information about that order to execute a trade from its proprietary-trading desk in advance of its customer’s order.
A proprietary trader, having received information that a client is about to make a large purchase of stocks, could “front run” that order by buying shares for the firm in advance, driving the price of the stock up. Such a move would harm the client by raising the stock’s purchase price. Another type of illegal activity resulting from conflicts of interest could potentially occur when traders who interact with clients share information with proprietary traders or with other clients about the trading patterns or strategies being used by other clients. In a recent administrative proceeding, SEC found that proprietary traders for a broker-dealer were misusing information about trades done for clients between February 2003 and February 2005. The firm neither admitted to nor denied these practices and agreed to pay a penalty of $10 million and consented to an SEC cease and desist order. As another example, proprietary traders could take advantage of material nonpublic information their firms obtain in other business lines. Financial institutions that engage in hedge fund and private equity fund investments and client-focused activities also face a number of potential conflicts of interest, which could result in financial institutions putting their own interests and revenue ahead of their responsibilities to their clients. For example, bank holding companies’ asset management divisions could potentially have incentives to inappropriately recommend investment in certain funds they sponsor or with which they have a preexisting business relationship. Another potential conflict of interest can involve inequitable trade allocations. That is, a firm might execute trades for a particular asset at different prices but allocate the most profitable trades to its own holdings and the less profitable trades to its client holdings. Such activities, according to SEC, could constitute violations of federal securities law, depending on the facts and circumstances. 
The bank holding companies we interviewed described a number of procedures they relied on to try to identify and mitigate conflicts of interest related to proprietary trading. Some of the institutions described committees at their firm—made up of senior level business-line, risk management, and compliance executives—that meet to address potential conflicts of interest. These committees create and implement policies and procedures that are designed to identify and mitigate potential conflicts of interest, and management elevates any potential conflicts of interest to this committee. These institutions have also in some cases physically separated proprietary traders from traders engaged in market making in an attempt to prevent market-making information from leaking to proprietary traders. For example, some placed proprietary traders on different floors of their facilities, including sometimes using separate elevator systems, keycard access to doors, and different telephone and computer hardware. According to SEC staff, however, some traders that have conducted activities that may fall within the definition of proprietary trading have done so on the same trading floor as market-making desks. Proprietary trading teams also, in some cases, have different information technology systems, such as software and e-mail systems, that prevent them from communicating with other areas of the firm. According to firms we visited, their stand-alone proprietary-trading desks would in many cases execute their trades through other firms rather than using their own firms’ traders, as a further means to separate their activities. Some financial institutions we interviewed also described certain procedures, such as triggers, that are in place to monitor trading activities to prevent stand-alone proprietary traders and others from executing trades with certain companies that are doing business with other parts of the firm. 
Finally, institutions also described having policies to prohibit their traders from trading in their own personal accounts using information acquired from their work at the firm. According to regulators, researchers, and our analysis, most proprietary trading among banking entities has been conducted by the largest bank holding companies in the United States. According to our analysis of financial data that bank holding companies report to the Federal Reserve, as of December 31, 2010, the largest six bank holding companies by assets accounted for 88 percent of total trading revenues reported by all bank holding companies. Therefore, we focused our analysis on the six largest bank holding companies by assets as of December 31, 2010. To provide information about the extent to which proprietary trading posed risks to these firms, we attempted to gather information on stand-alone proprietary trading as well as other proprietary trading that may have been occurring within other trading activities of the firms. While we gathered information on stand-alone proprietary trading, we determined that collecting information on other proprietary trading was not feasible because the firms did not separately maintain records on such activities and because of the uncertainty over what activities will be considered proprietary trading until the completion of the required regulatory rulemaking. We calculated firms’ combined revenues or losses from stand-alone proprietary trading for each of the 18 quarters from June 2006 through December 2010, and compared the results to trading revenue—which includes revenue from all trading activities including stand-alone proprietary trading—and total bank holding company revenue. The data on stand-alone proprietary trading represented 26 proprietary-trading desks across the six firms over the time period we reviewed. The number of stand-alone proprietary-trading desks reported by a single bank holding company ranged from one to eight.
These stand-alone proprietary-trading desks ranged from desks that traded primarily in one type of financial product, such as commodities or equities, to desks that traded a wide variety of products. The desks also relied on varying strategies for generating returns, including quantitative-based algorithmic trading as well as more traditional trading. As shown in figure 1, stand-alone proprietary trading activities at the six largest bank holding companies produced combined revenues in 13 out of 18 quarters since 2006 and losses in the remaining 5 quarters. While the combined revenue over the period totaled $15.6 billion, the combined losses totaled $15.8 billion. As a result, stand-alone proprietary trading by the six firms over the time period we reviewed resulted in a combined loss of $221 million and a median quarterly revenue for each firm of about $72 million. All of the quarters in which the six firms’ combined stand-alone proprietary activities produced losses occurred from the third quarter of 2007 through the fourth quarter of 2008—the time period leading up to and including the worst financial crisis since the Great Depression. Four of the firms made money, and two lost money, from stand-alone proprietary trading over the 4.5-year time period as reflected in revenues and losses. One of the six bank holding companies was responsible for both the largest quarterly revenue at any single firm from stand-alone proprietary trading since 2006, which was $1.2 billion, and the two largest single-firm quarterly losses of $8.7 billion and $1.9 billion. Stand-alone proprietary trading at the other five bank holding companies resulted in total combined revenues over the time period of $9.4 billion and median quarterly revenue for each firm of about $67 million. At the five bank holding companies, the largest single-firm quarterly revenue throughout the time period was $957 million, and the largest loss was $1 billion.
The combined revenues from stand-alone proprietary trading in the 13 revenue-generating quarters since 2006 represented relatively small amounts compared with revenues from all trading activities—which included stand-alone proprietary trading revenue—and from all bank holding company activities (see fig. 2). In the 13 quarters since 2006 in which both stand-alone proprietary trading and all trading and other revenues were positive, the combined revenues from stand-alone proprietary trading represented between a low of about 1.4 percent and a high of 12.4 percent of combined quarterly revenues for all trading and between about 0.2 and 3.1 percent of combined quarterly revenues for all activities at the bank holding companies. In the five quarters in which the firms experienced combined losses from stand-alone proprietary trading, they experienced combined losses for all their trading activities in two of those quarters. In those two quarters, the stand-alone proprietary trading losses were about 66 percent and 80 percent of total trading losses. In addition to analyzing combined revenues and losses, we analyzed all 108 individual firm-quarters of data that the bank holding companies reported and found that stand-alone proprietary trading generally did not significantly increase quarterly trading revenues during positive quarters. However, in quarters when both stand-alone proprietary trading and total trading resulted in losses, stand-alone proprietary trading comprised a substantial portion of total trading losses. As shown in figure 3, in 77 out of 108 firm-quarters (or 71 percent), revenues were positive for both stand-alone proprietary trading and total trading (which included stand-alone proprietary trading). In five quarters (5 percent), stand-alone proprietary trading helped offset losses in other trading areas or reduced overall trading losses.
For these quarters, stand-alone proprietary trading resulted in total revenue of $666 million despite total trading losses of more than $14 billion. In 17 quarters (16 percent), stand-alone proprietary trading resulted in losses despite total trading revenues. For these quarters, stand-alone proprietary trading losses of about $4 billion reduced total trading revenues to about $56 billion. Finally, in nine quarters (8 percent), both stand-alone proprietary and total trading experienced losses, with stand-alone proprietary trading losses comprising 86 percent of total trading losses. Our analysis of revenue, loss, and VaR data from 2006 through 2010 at the six largest bank holding companies indicated that during this period stand-alone proprietary trading required these firms to take greater risks than all trading activities on average to generate the same amount of revenue and that these firms’ VaR risk models were less capable of predicting the actual risks associated with stand-alone proprietary trading. We calculated, for a standardized amount of risk taken, how much revenue bank holding companies produced from stand-alone proprietary trading as compared to all trading activities, which included stand-alone proprietary trading. Stand-alone proprietary trading produced average quarterly revenues of $4.8 million for every $1 million of VaR, while all trading, including stand-alone proprietary trading, produced average quarterly revenues of $21.9 million for every $1 million of VaR. These calculations for specific firm quarters ranged from an average quarterly revenue-per-VaR of $11.5 million to an average quarterly loss-per-VaR of $5.4 million for stand-alone proprietary trading and from an average quarterly revenue-per-VaR of $40.5 million to an average quarterly loss-per-VaR of $7 million for all trading activities.
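Both VaR-based measures in this analysis are simple computations: revenue per $1 million of VaR (the "risk-adjusted revenue" ratio) and the count of days on which realized losses exceeded the VaR estimate (VaR breaks). The sketch below uses invented figures; the function names are hypothetical, and all amounts are in $ millions.

```python
# Sketch of the two VaR-based measures used in this analysis, with
# hypothetical figures (all in $ millions):
#  1) revenue per $1M of VaR ("risk-adjusted revenue")
#  2) VaR breaks: days on which the realized loss exceeded the VaR estimate

def revenue_per_var(quarterly_revenue, avg_var):
    """Revenue generated per $1M of VaR taken."""
    return quarterly_revenue / avg_var

def count_var_breaks(daily_pnl, daily_var):
    """A break occurs when the realized loss is larger than that day's VaR."""
    return sum(1 for pnl, var in zip(daily_pnl, daily_var) if -pnl > var)

# e.g., $240M of quarterly revenue against an average VaR of $50M
print(revenue_per_var(240.0, 50.0))   # 4.8, i.e. $4.8M per $1M of VaR

pnl = [0.5, -1.2, -0.3, -2.5, 0.8, -1.9]   # six days of P&L
var = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]       # constant $1M VaR estimate
print(count_var_breaks(pnl, var))          # 3 breaks: losses of 1.2, 2.5, 1.9
```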
Figure 4 shows these data, which could be considered “risk-adjusted revenues or losses” for both stand-alone proprietary trading and all trading during the time period. In addition, each of these bank holding companies reported to us the number of times each quarter that their actual daily losses exceeded those predicted by these firms’ VaR models—which are known as VaR breaks. For all trading, the actual daily losses incurred by these six firms over the time period exceeded their VaR estimate 161 times, for an average of 1.5 VaR breaks per quarter per firm. However, for their stand-alone proprietary trading, the actual daily losses exceeded their VaR estimate 302 times across the same period, or an average of 3.2 breaks per quarter per firm. The largest number of VaR breaks at any one bank holding company’s individual stand-alone proprietary-trading desk in any one quarter was 42, out of 63 trading days in the quarter. Representatives from some of these bank holding companies told us that the larger number of breaks from stand-alone proprietary trading likely stemmed from the prices of the assets being traded becoming more volatile than their models had predicted. Our analysis of the data reported to us by the six largest U.S. bank holding companies showed that their hedge fund and private equity fund investments likewise produced revenues that were small as a percentage of total revenues but, in some quarters, losses that were large relative to those revenues during the period we reviewed. As shown in figure 5, hedge fund and private equity fund investments at these six firms produced combined revenues in 14 out of 18 quarters totaling almost $32 billion. In three quarters, combined losses from these investments were just more than $8 billion, and in the one remaining quarter, the bank holding companies experienced combined revenues in private equity fund investments and a loss in hedge fund investments. 
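Counting the VaR breaks discussed above reduces to comparing each day's realized loss with the model's predicted loss. A minimal sketch with hypothetical data, assuming VaR is reported as a positive dollar amount:

```python
def count_var_breaks(daily_pnl: list[float], daily_var: list[float]) -> int:
    """Count the days on which the actual loss exceeded the VaR estimate.
    daily_pnl: profit (+) or loss (-); daily_var: positive VaR estimates."""
    return sum(1 for pnl, var in zip(daily_pnl, daily_var) if -pnl > var)

pnl = [2.0, -5.0, -1.0, -9.0, 3.0]   # hypothetical daily P&L, $ millions
var = [4.0, 4.0, 4.0, 4.0, 4.0]      # hypothetical daily VaR, $ millions
print(count_var_breaks(pnl, var))    # 2 (the days with losses of 5 and 9)
```

A desk that breaks its VaR estimate far more often than the model's confidence level implies—as the stand-alone proprietary-trading desks did—is taking risks its model does not capture.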
As a result, the bank holding companies had combined revenues of about $22 billion from hedge fund and private equity fund investments during this 4.5-year period. During this 4.5-year period, the six largest bank holding companies experienced combined revenues from investments in hedge funds of $8.4 billion, with average and median quarterly firm revenue of about $77 million and $69 million, respectively. The maximum individual firm revenues and losses for any quarter during this period ranged from revenues of $501 million to a loss of $500 million. For private equity fund investments, these bank holding companies experienced combined revenues of about $14 billion over the entire time period, with average quarterly revenue of $125 million and median quarterly revenue of $134 million. The maximum individual firm revenues and losses for any quarter during this period ranged from revenues of $1.4 billion to a loss of $3.2 billion. The revenues from the six largest bank holding companies’ hedge fund and private equity fund investments were small compared to their total firmwide revenues during the 14 of 18 quarters when these investments produced combined revenues (see fig. 6). Revenues from these investments represented between about 0.08 and 3.5 percent of these bank holding companies’ combined revenues during this period. The full profits and losses from all activities at the six largest bank holding companies are represented by their publicly reported net income, which includes all their revenues less all their expenses for all of their business activities. Although the period of June 2006 to December 2010 included the worst financial crisis in 75 years, the firms’ combined net incomes were positive in 16 out of the 18 quarters, even with combined losses from stand-alone proprietary trading in five of those quarters. 
To further examine the impact of stand-alone proprietary trading and hedge fund and private equity fund investment activities on their overall performance during this time period, we determined the quarter-to-quarter changes in the combined net income of the six firms—which would be negative when a firm experiences either less revenue or losses in particular business activities—and compared them to changes in revenues or losses from all trading activities, stand-alone proprietary trading, and private equity fund and hedge fund investments. As shown in figure 7, in quarters when the bank holding companies experienced large increases or decreases in firmwide net income from the previous quarter, changes in revenues or losses from stand-alone proprietary trading and hedge fund and private equity fund investments generally represented only a small portion of those changes. During this 4.5-year period, the six firms usually experienced larger revenues and losses from activities other than stand-alone proprietary trading and investments in hedge and private equity funds, including writedowns on the values of these firms’ positions in CDOs and leveraged loans, and potentially including aspects of these and other activities that could be defined as prohibited proprietary trading as part of the rulemaking. One large bank holding company reported almost $21 billion in writedowns in 2008 as the result of subprime CDOs or other subprime-related direct exposures. In addition, the three largest bank holding companies reported combined losses of almost $11 billion in the same year from leveraged lending. Staff at the financial regulators and the financial institutions we interviewed also noted that losses associated with lending and other risky activities during the recent financial crisis were greater than losses associated with stand-alone proprietary trading. 
For example, one of the firms reported increasing the reserves it maintains to cover loan losses by more than $14 billion in 2008, and another of the firms increased its loan loss reserves by almost $22 billion in 2009. Further, FDIC staff, whose organization oversees bank failures, said they were not aware of any bank failures that had resulted from stand-alone proprietary trading. However, whether certain investment and underwriting activities should or will be restricted has been subject to debate. When expressing concerns over the impact of proprietary trading, some policymakers and at least one researcher include certain types of principal investments or proprietary investment portfolios, which usually refer to the firms’ longer-term investment portfolio activity and in some cases have caused significant losses or failures. For example, according to the examiner in the Lehman Brothers bankruptcy case, the failure of Lehman Brothers was attributable in large part to that firm’s investments in commercial real estate and private equity investments in other companies, or what the report refers to as principal investments and what, in an interview, its author referred to as proprietary trading. An April 2011 staff report by the Senate Permanent Subcommittee on Investigations provided additional information on proprietary trading activities of certain financial institutions and their role in the financial crisis. In addition, policymakers and at least one researcher have raised questions about whether the riskiest tranches of mortgage-backed securities, CDOs, or other securities that were routinely held by the underwriter as part of the securitization and sales process and that contributed to significant CDO losses should be considered proprietary trading. Such losses are reflected in our data on stand-alone proprietary trading only to the extent that they were reported as revenues or losses at stand-alone proprietary-trading desks. 
However, the extent to which these activities were included in the stand-alone proprietary trading data is not known. Some financial institutions, policymakers, and researchers have expressed concerns about potential negative consequences of the restrictions on proprietary trading and hedge and private equity fund investments. First, some banking industry representatives and other market observers have said that the restrictions could reduce the ability of banks to offset risks in other areas. One bank holding company representative noted that because proprietary-trading desks often use innovative and in some cases countercyclical trading strategies, their activities at banking entities have at times allowed for diversification of risk that has improved the bank holding companies’ overall safety and soundness. Although such an effect may exist, our analysis of the data reported by the six largest bank holding companies found that stand-alone proprietary trading and hedge and private equity fund investment activities represented a small portion of revenues from all trading and bank holding company activities. Also, the revenues and losses from stand-alone proprietary trading generally tracked overall revenues and losses over the time period we reviewed, rather than being uncorrelated with them. In addition, our findings that stand-alone proprietary trading during the period we reviewed required firms to take greater risks than all trading activities on average to generate the same amount of revenue and that these firms’ VaR risk models were less capable of predicting the actual risks associated with stand-alone proprietary trading reduce the potential benefits of such trading to offset other losses. Some market observers believe the restrictions could potentially reduce the competitiveness of U.S. firms by restricting their activities compared to their international competitors. 
According to interviews with foreign regulatory bodies, many countries are looking at changing capital requirements for proprietary trading activities, but no other industrialized countries in Europe or around the world plan to enact provisions that parallel the U.S. restrictions. The foreign regulators we spoke with indicated that if the U.S. restrictions were implemented in a way that restricted the ability of U.S. banking entities to serve their clients through market-making, underwriting, or in other ways, U.S. banking entities could lose business to their competitors in Europe and elsewhere. Further, two recent reports issued by a research department of J.P. Morgan Chase—one of the six largest bank holding companies that was included in our analysis and that would be affected by the proprietary trading restrictions—stated that those restrictions would represent a material benefit to certain European financial institutions over those in the United States because of the regulatory arbitrage that would exist across countries. However, this analysis does not incorporate the potential competitive benefits, such as reduced funding costs to these firms if they were less exposed to risks and losses during periods of economic instability, as we saw during the recent crisis. In addition, according to representatives of one foreign financial institution, revisions to international capital standards and changes to laws in other countries could force competitors of U.S. firms to similarly restrict their trading and fund investment activities, which would minimize the competitive impacts of the U.S. restrictions. According to some market observers, the restrictions may also reduce the amount of liquidity in financial markets, depending on how they are implemented. 
They say that if the restrictions are enforced too strictly and limit activities—in particular the taking of principal positions—that are critical to making markets for various financial instruments, including certain equities, exchange-traded funds, and U.S. corporate bonds, then the effects may be detrimental. Representatives at the six largest bank holding companies and some commentators on the FSOC study explained that in order to make liquid markets effectively, especially for products other than stocks, traders sometimes need to assume principal risk to take on inventory and move orders. If the restrictions limited market-makers’ ability to assume such risk, traders could stop providing liquidity in certain markets, making it more difficult or expensive for corporations, state and local governments, or other clients to finance their activities or hedge their investments. A January 2011 report prepared by a consulting group commissioned by the Securities Industry and Financial Markets Association—an industry group that represents securities firms, banks, and asset managers—described the importance of implementing the restrictions in a way that did not reduce liquidity associated with permitted activities. In addition, representatives of two bank holding companies expressed concerns that the proprietary trading restrictions could limit their ability to respond to individual instances of severe market illiquidity, such as a flash crash, as occurred in U.S. equity markets on May 6, 2010, or the failure of a large member of a derivatives clearinghouse. They noted that in these instances regulators may need to provide financial institutions with additional flexibility to hold inventories or make purchases that could resemble proprietary trading in order to support market functioning. However, limited research exists on these hypothetical outcomes. 
The Securities Industry and Financial Markets Association-commissioned study provided little empirical data to indicate the extent to which the restrictions on proprietary trading and investments in hedge and private equity funds might impact the liquidity of financial markets. Finally, some policymakers, researchers, and others have said that the restrictions could push risky trading and other activities to less-regulated financial institutions, such as hedge funds. Financial institutions have begun to shut down stand-alone proprietary trading operations and in at least one case announced plans to spin off the operations to unaffiliated and separately capitalized funds. Opponents of the restrictions argue that proprietary trading could present greater risks to the financial system if much of the activity in the future is conducted out of less-regulated entities, such as hedge funds, whose advisers only recently were required to register and provide data to SEC, rather than banking entities, which are subject to on-site safety and soundness supervision and examination programs. However, losses occurring at hedge funds and other nonbank entities are less likely to pose risks to the U.S. banking system than those occurring within bank holding companies. In addition, to the extent that proprietary trading migrates to entities outside the banking system, overall market liquidity may not actually decline.

Federal regulatory oversight has not always been effective in assessing the adequacy of risk management at the largest financial institutions, a key part of overseeing the implementation of Section 619. While implementing proprietary trading and hedge fund and private equity fund restrictions poses challenges, effective data collection will be critical to oversight. The Federal Reserve, OCC, and SEC share primary responsibility for overseeing risks associated with trading and investment activities by large U.S. 
bank holding companies, including proprietary trading and fund investment activities. Responsibilities for oversight depend on which legal entity is conducting the activity. The Federal Reserve, as the consolidated supervisor of bank and thrift holding companies, plays the primary role in overseeing these activities across the institution, including its subsidiaries, but also largely relies on OCC and SEC to oversee activities conducted out of national bank and broker-dealer subsidiaries of the holding company, respectively. To oversee the risks of trading and other activities, regulators conduct ongoing monitoring and surveillance, meet with financial institution executives and risk management personnel, and conduct targeted risk-based reviews of specific business lines or key controls across the holding company. In some cases, regulators have somewhat different goals in their oversight. For example, OCC focuses on the safety and soundness of the national banks within holding companies, while SEC focuses on regulations intended to promote investor protection, market integrity, and capital formation. To oversee proprietary trading and investments in hedge funds and private equity funds, the staff from the Federal Reserve, OCC, and SEC described following generally similar approaches that focused on how the institutions managed the risks associated with such activities, which are a subset of all trading and investment activities. As part of their risk-based examinations of all trading and investment activities, in recent years these regulators have conducted examinations that in some cases focused on internal controls and specific business lines related to proprietary trading or investments in hedge and private equity funds. However, these reviews were generally designed to test key controls, compliance, or overall risk management in these areas rather than to specifically focus on proprietary trading or investments in hedge and private equity funds. 
Representatives of these agencies told us that until the enactment of the act, their oversight of trading activities generally did not distinguish between proprietary trading and trading conducted on behalf of customers, because they examined both activities when assessing a firm’s overall management of risk arising from all business lines. As a result, they have generally not had separate procedures in place to examine proprietary trading activities or to distinguish whether financial instruments were bought or sold for proprietary or other purposes. In some limited situations, regulators in the past sought to define market making and distinguish it from proprietary trading or other activities. For example, as part of an effort to implement additional requirements related to short selling—in which a party borrows stock from another party and then sells it in order to profit from declines in its value—SEC developed guidance that defined market making in equities markets as making continuous, two-sided quotes and holding oneself out as willing to buy and sell on a continuous basis; making a comparable pattern of purchases and sales of a financial instrument in a manner that provides liquidity; making continuous quotations that are at or near the market on both sides; and providing widely accessible and broadly disseminated quotes. In addition, bank regulatory manuals in some cases instruct examiners to take steps that would identify proprietary trading, although given the risk-based nature of oversight at the largest bank holding companies, these manuals have served as a reference rather than as specific examination procedures. Finally, OCC examiners said that they had discussed with bank managers the intent behind certain trading activities and then verified through profit-and-loss and other information that the risk profile was consistent with the financial institution’s stated intent. 
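The first element of SEC's market-making definition above—continuous, two-sided quoting—lends itself to a simple quantitative proxy. A minimal sketch with hypothetical quote data; the sampling approach and function name are illustrative assumptions, not SEC's methodology:

```python
from typing import Optional

def two_sided_quote_share(
    snapshots: list[tuple[Optional[float], Optional[float]]]
) -> float:
    """Fraction of sampled moments at which a desk posted both a bid and
    an ask -- a rough, assumed proxy for 'continuous, two-sided' quoting."""
    both = sum(1 for bid, ask in snapshots if bid is not None and ask is not None)
    return both / len(snapshots)

# Hypothetical minute-by-minute (bid, ask) snapshots; None = no quote posted.
snaps = [(99.5, 100.5), (99.6, None), (99.4, 100.4), (99.5, 100.6)]
print(two_sided_quote_share(snaps))  # 0.75
```

A desk quoting both sides nearly all of the trading day looks more like a market maker under this criterion than one that posts quotes only when it wants to build a position.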
Federal financial regulators have also taken steps to prevent what they consider conflicts of interest associated with trading and investment activities. For example, banking regulators told us that they rely on their safety and soundness authority to require that financial institutions maintain policies and procedures to address conflicts of interest, including focusing on conflicts that could create possible reputational risks for the institutions. As part of regulating securities broker-dealers, SEC staff oversee compliance with Section 15(g) of the Securities Exchange Act of 1934, which requires all registered broker-dealers to establish, maintain, and enforce written policies and procedures reasonably designed to prevent the misuse of material nonpublic information they obtain. In the past, SEC conducted examinations of the effectiveness of the information barriers that broker-dealers used to prevent “leakage” of information from customer-focused trading desks to proprietary-trading desks, which in part led to the enforcement action discussed earlier. In addition, in 2007, SEC conducted examinations of 11 broker-dealers; although not directly related to proprietary trading, these examinations sought to determine whether certain of these firms were providing nonpublic information about large market-moving orders to certain favored customers, such as hedge funds. According to SEC staff, determining whether broker-dealers were leaking customer order information was difficult, even after an extensive multi-year, data-intensive examination, and SEC closed these investigations without filing charges. Our prior work showed that these financial regulators have been challenged in overseeing large financial institutions’ risk management efforts on a comprehensive basis. 
Prior to the most recent crisis, the Federal Reserve, SEC, and the Office of Thrift Supervision each had responsibilities for overseeing the largest bank holding companies, investment banks, and thrift holding companies, respectively. In a 2009 review, we found that although these regulators had identified numerous weaknesses in institutions’ risk management systems before the financial crisis began, they had not always taken steps to fully ensure that the institutions adequately addressed the weaknesses. For example, regulators had identified inadequate oversight of institutions’ risks by senior management, but the regulators noted that these institutions had strong financial positions and that senior management had presented the regulators with plans for change. However, the regulators did not take steps to fully ensure that these changes were quickly or fully implemented until the crisis revealed that the systems were still not adequate. Regulators had also identified weaknesses in the quantitative models that these firms used to measure and manage financial risks but may not have taken action to resolve these weaknesses. For example, regulators did not prohibit at least one institution from using untested models to evaluate risks and did not change their assessment of the institution’s risk management program after these findings. Finally, regulators had identified numerous weaknesses in stress testing—scenarios used to model the effects of adverse events or shocks on firms’ portfolios—at several large institutions without having taken aggressive steps to push institutions to better understand and manage risks. In an earlier report, we found that holding company regulators lacked full authority or sufficient tools and capabilities to adequately oversee the risks that these financial institutions posed to themselves and other institutions. 
The financial crisis also revealed some significant challenges faced by regulators in overseeing trading, investment, and other activities at large U.S. financial institutions. For example, institutions overseen by OCC and the Federal Reserve, including Citigroup and Bank of America, experienced large losses or increases in reserves for anticipated losses during the crisis. The oversight failures of SEC and the Office of Thrift Supervision ultimately resulted in changes that eliminated their role in overseeing holding companies going forward. During the recent crisis, all five of the investment banks that SEC had been overseeing through its voluntary Consolidated Supervised Entities program either failed, were purchased at reduced values by other financial institutions, or became bank holding companies to obtain permanent access to Federal Reserve emergency liquidity. According to SEC staff, the voluntary nature of the Consolidated Supervised Entities program limited the authority of the agency to enforce new requirements on investment banks that were part of the program. According to the report prepared by the bankruptcy examiner for Lehman Brothers, which failed in September 2008, this broker-dealer had changed its business strategy in 2007 to focus more on making principal investments in commercial real estate, providing funding as part of leveraged lending for mergers and acquisitions, and making more private equity or similar investments in other companies. However, the bankruptcy examiner reported that this firm’s staff had disregarded its risk management policies and limits that had been set for these activities, had not included some of these positions in the calculations it used to measure its total firmwide risk levels, and had failed to hedge some of these investments to reduce their risk to the firm. 
The bankruptcy examiner noted that although SEC staff were aware of some of these actions, they had sought only to ensure that the financial institution’s board was informed of and had approved these changes. In testimony on April 20, 2010, in response to the bankruptcy examiner’s report, SEC’s chairman acknowledged that SEC staff should have challenged Lehman Brothers’ management more forcefully regarding the types of risks the firm was taking and imposed meaningful requirements or limitations when necessary. Similarly, the Office of Thrift Supervision failed to adequately oversee the credit default swap activities of an American International Group, Inc. (AIG) subsidiary, a failure that, combined with other regulatory failures, resulted in serious liquidity problems and necessitated significant government assistance. Beginning in July 2011, the largest U.S. financial institutions will all be holding companies overseen at the holding company level by the Federal Reserve. Although the Federal Reserve retains this responsibility, its failures in identifying and addressing problems at large bank holding companies were revealed during the financial crisis, when some large bank holding companies experienced large losses or required significant capital infusions to remain solvent. Since the crisis, various regulatory changes have been made or are underway that are intended to reduce the risks that trading and other activities pose to the safety and soundness of these large institutions. Regulators told us they are overseeing significant changes that financial institutions are making to their risk management models, including improvements to their stress testing. Representatives of bank holding companies explained that they now use VaR measures with longer time horizons that include a fuller range of economic cycles to increase their models’ accuracy and consistency. 
According to Federal Reserve staff, the time frames from which the financial institutions’ models drew their historical loss experiences—their look-back periods—and which the regulators used to determine capital adequacy, were not long enough to account for periods of varying market returns. Additionally, the staff at one large bank holding company we reviewed told us that they were working to incorporate more complicated, and often illiquid, assets into their firm’s VaR measures. Officials from another institution noted, for example, that their firm had instituted a new policy to incorporate the warehousing risk from CDOs that arises during the period that an institution is accumulating the underlying securities that will be used to create the CDO securities. They indicated that having had such a practice in the past would have helped the firm better identify the risks it was bearing associated with super-senior CDO tranches, which created large losses during the crisis. Financial institutions also told us that they were creating “stress-VaR” models that attempted to model a “doomsday scenario” of dramatic market price movements similar to those that occurred during the 2008 financial crisis. The institutions also noted that they were trying to develop better measurements of returns earned per unit of risk taken. In addition, changes to capital requirements, broadly and with respect to the trading books at financial institutions, should mitigate risks to the financial system. According to the FSOC study and Federal Reserve staff, prior to the crisis, capital requirements were in many cases lower for assets held in trading books (because of an assumed higher amount of liquidity), which caused banks to move many of their riskier assets there. 
Under new rules that are expected as a result of the July 2009 Basel III international capital accord, mortgage-backed securities, CDOs, and other complex products will face stricter eligibility requirements for inclusion in trading accounts. Those that are included will face higher capital charges to mitigate risks associated with such products. More generally, Basel III aims to increase minimum common equity requirements from 2 to 4.5 percent and tier 1 capital requirements from 4 to 6 percent and to add a new “conservation buffer” of an additional 2.5 percent. Section 171 of the act also requires regulators to establish, on a consolidated basis, leverage and risk-based standards currently applicable to U.S. insured depository institutions for U.S. bank holding companies and nonbank financial companies supervised by the Federal Reserve. Finally, the act’s changes that enhance the Federal Reserve’s oversight of nonbank subsidiaries of bank holding companies and nonbank financial companies should help ensure a more consistent and comprehensive approach to overseeing trading activities at large U.S. financial institutions. Given the significant challenges that regulators have faced in overseeing large financial institutions’ risk management efforts, which includes the risks arising from these firms’ trading and investment activities, the restrictions on proprietary trading and hedge fund and private equity fund investments should reduce the scope of risks that regulators will have to oversee going forward. However, implementing the act’s restrictions to fully ensure that such risks no longer exist at the firms raises new challenges for the regulators. 
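The ratio increases described above translate directly into more required capital per dollar of risk-weighted assets. A worked sketch with an illustrative balance sheet (this is an arithmetic illustration, not a regulatory compliance calculation):

```python
def min_capital(risk_weighted_assets: float, ratio_pct: float,
                buffer_pct: float = 0.0) -> float:
    """Minimum capital implied by a risk-based ratio plus any buffer,
    in the same units as the risk-weighted-asset base."""
    return risk_weighted_assets * (ratio_pct + buffer_pct) / 100.0

rwa = 1_000.0  # hypothetical $1,000 billion of risk-weighted assets
print(min_capital(rwa, 2.0))       # old common equity minimum: $20 billion
print(min_capital(rwa, 4.5, 2.5))  # Basel III minimum plus buffer: $70 billion
```

On this hypothetical base, moving from a 2 percent common equity minimum to 4.5 percent plus the 2.5 percent conservation buffer more than triples the required capital.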
To make recommendations on effectively implementing the act’s restrictions on proprietary trading and hedge fund and private equity investments, FSOC issued a report in January 2011 that included an overview of the key issues financial regulators should consider when they issue rules and specific recommendations on how regulators and financial institutions might monitor and enforce the new rules. Several key challenges remain, however, including distinguishing prohibited proprietary trading from market making and appropriately defining terms associated with the restrictions on hedge fund and private equity fund investments. The FSOC study and our interviews with large U.S. bank holding companies and their regulators found that a key challenge in implementing the proprietary trading restrictions will be disentangling activities associated with market-making, hedging, and underwriting from prohibited proprietary trading activities. For example, when a firm’s traders purchase bonds from a client as part of market making, the position they hold in those bonds poses the same risk of loss to the firm as bonds purchased in a proprietary trade. As a result, regulators face the challenge of monitoring firms’ market-making activities and positions to ensure that they are sufficiently hedged and that inventories of financial assets being held are appropriate in both size and turnover duration, consistent with market-making activities. Representatives of the large U.S. bank holding companies we interviewed expressed a number of concerns about the potential negative consequences of implementing the proprietary trading restrictions in a way that prohibited any principal trading. As mentioned previously, representatives from several financial institutions believed that prohibiting institutions from holding inventory would reduce liquidity, especially for already illiquid markets in which buyers and sellers cannot always be quickly matched. 
Staff from several institutions said that customer-driven trades were often hedged with a number of offsetting trades rather than with a single matching hedge. For example, a manager of trading at one firm explained that a single large derivatives contract traded between his firm acting as a market maker and one of its clients could result in the firm having to conduct as many as 30 smaller offsetting equities trades to fully hedge the risk. Staff at another financial institution argued that to be effective market makers and get the best prices for their clients, their traders needed current information on pricing (i.e., price discovery) and trends in the marketplace that could be gathered only through active trading. Staff at two firms also told us that having regulators attempt to ensure that no proprietary trading has occurred would be resource-intensive, if not impossible. The FSOC study approached this issue by recommending that firms monitor certain metrics that could indicate when impermissible proprietary trading is occurring within permitted market-making activities. The study suggested a number of potential quantitative metrics related to revenue, risk, inventory, and customer flow, which regulators could require banking entities to implement and review in order to guard against future impermissible activities. For example, using revenue data, regulators could identify instances in which revenue over a certain time period is outsized compared to recent trends. Regulators could also determine from revenue data whether traders are acting as market makers by making most of their profit at the time positions are taken, or if they are instead profiting from appreciation of assets, which could indicate proprietary trading. 
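The first revenue metric just described can be sketched as a simple comparison of a desk's latest quarter against its recent trend; the window and multiple here are illustrative assumptions, not thresholds from the FSOC study:

```python
def outsized_revenue(recent_quarters: list[float], latest: float,
                     multiple: float = 3.0) -> bool:
    """Flag a quarter whose revenue exceeds an assumed multiple of the
    average of recent quarters -- a possible sign of position-taking
    rather than steady market-making income."""
    average = sum(recent_quarters) / len(recent_quarters)
    return latest > multiple * average

print(outsized_revenue([10.0, 12.0, 11.0], 50.0))  # True: 50 > 3 * 11
print(outsized_revenue([10.0, 12.0, 11.0], 15.0))  # False
```

In practice a regulator would tune the look-back window and threshold per asset class; the point is that the metric flags revenue spikes for closer review rather than proving impermissible trading on its own.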
They could also use revenue-to-risk measures to distinguish market making from proprietary trading, because the lower VaR and other risk and volatility measures associated with market making result in higher revenue-to-risk ratios than those associated with proprietary trading. In addition, they could use inventory turnover and aging metrics that track the length of time assets remain on a financial institution’s balance sheet, which can help regulators determine whether the holding periods for assets appear consistent with activities undertaken for customers rather than with attempts to earn profits for the firm by holding positions for longer periods. Finally, the FSOC report mentioned that if regulators require institutions to classify their trading as either “customer-initiated” or “trader-initiated” transactions, both banking entities and regulators would be able to use this customer-flow data in quantitative metrics and ratios to better identify impermissible proprietary trading. Staff at some financial institutions we spoke with supported this approach, given the difficulties of differentiating between legitimate market making and proprietary trading. Financial regulators also noted the challenges of making such a distinction. FDIC representatives said that in 2005 the regulators tried to define proprietary trading as part of an effort to better oversee such activity but ultimately could not. They noted that preventing proprietary trading required a subjective, case-by-case evaluation. Any other approach, they said, would either be too broad and overly inclusive or too narrow and miss some activities. The FSOC study recommended a four-part framework to monitor and enforce the proprietary trading restrictions. First, the study recommended a programmatic compliance regime that would require banking entities to implement policies, procedures, and internal controls designed to ensure that the institutions adhered to the provisions.
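The metric families the FSOC study suggested can be illustrated with a brief sketch. The field names, thresholds, and flag logic below are hypothetical illustrations of the study's reasoning, not values drawn from the study or from any regulator's actual screening rules.

```python
from dataclasses import dataclass

@dataclass
class DeskQuarter:
    """Hypothetical quarterly metrics for one trading desk."""
    revenue: float            # quarterly trading revenue, $ millions
    avg_var: float            # average daily Value-at-Risk, $ millions
    avg_holding_days: float   # average days positions remain in inventory
    customer_volume: float    # volume from customer-initiated trades
    total_volume: float       # total trading volume

def screen_desk(q, min_rev_to_risk=5.0, max_holding_days=30.0,
                min_customer_share=0.5):
    """Return the flags suggesting possible proprietary trading.

    Per the study's logic, market-making desks tend to show higher
    revenue-to-risk ratios, faster inventory turnover, and mostly
    customer-initiated flow; the numeric thresholds are illustrative.
    """
    flags = []
    if q.revenue / q.avg_var < min_rev_to_risk:
        flags.append("low revenue-to-risk ratio")
    if q.avg_holding_days > max_holding_days:
        flags.append("long inventory holding period")
    if q.customer_volume / q.total_volume < min_customer_share:
        flags.append("low customer-flow share")
    return flags

# A desk with slow inventory turnover and little customer flow
# trips two of the three flags.
desk = DeskQuarter(revenue=40.0, avg_var=5.0, avg_holding_days=90.0,
                   customer_volume=100.0, total_volume=400.0)
print(screen_desk(desk))
# → ['long inventory holding period', 'low customer-flow share']
```

In practice such screens would only prompt further review, consistent with the case-by-case evaluation the regulators described.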
Second, banking entities would be required to report and provide sufficient data and records to regulators on their market-making and hedging activities so that regulators could determine whether any improper proprietary trading was taking place. Third, the regulatory agencies would periodically review and test the banking entities’ policies and procedures to help ensure that they were in compliance with the proprietary trading restrictions and to address any potential violations. Finally, as part of the supervisory process, banking entities would be subject to penalties or legal actions for violating proprietary trading restrictions. Regulators will also face challenges in defining terms associated with the restrictions on hedge fund and private equity investments. The proprietary trading prohibition defines hedge funds and private equity funds as issuers that rely on certain exemptions from the definition of “investment company” under Section 3 of the Investment Company Act or such similar funds as agencies determine by rule. As the FSOC report noted, those exclusions are used not just by hedge and private equity funds but also by a wide variety of other legal entities. For example, representatives of one financial institution expressed concerns that the firm’s own employee pension funds could meet the act’s definition, which could mean that the restrictions would affect investments that firms made to benefit their retired employees. At the same time, the act’s definition of a covered fund may not capture funds such as commodity pools that invest in oil or other commodities and that pose risks similar to those posed by the covered funds. Staff at one institution also expressed concerns that their investments in certain of their subsidiaries were structured in ways that could meet the definition of a fund in which investment would be restricted.
Staff at another firm noted that by limiting their ability to invest in a fund they have created at levels greater than 3 percent after one year, the act may not give them enough time to establish a fund’s performance track record before seeking outside investors. According to this firm’s staff, many investors expect to see a history of at least 3 years of fund returns before they are allowed to, or are otherwise willing to, invest in a fund. This issue will require regulators to consider the congressional intent behind the restrictions and appropriately define these and other terms. Taking steps to implement the prohibition on hedge fund and private equity fund investments without creating loopholes that would exclude funds that should fall under its scope, and without inadvertently restricting other types of funds that were not intended to be included, will be important. Clearly, regulators face challenges in implementing the new restrictions. Without appropriate monitoring of trading activities, however, financial institutions could also abuse permissible activities, using them to conduct prohibited proprietary trading. Our review of the proprietary trading activities of large bank holding companies revealed that some financial institutions have pursued strategies that combined client-focused transactions and proprietary positioning, activities that could be considered impermissible proprietary trading but could go unnoticed if not monitored appropriately. For example, as noted earlier, one regulator summarized the trading activities of one business line of a large bank holding company we reviewed as generating revenue mostly from client flow but noted that the business line also had a trading desk that sought to profit from long-term positioning of inventory based on its traders’ views of the market.
Also, according to the description, the financial institution’s customer flow trading desk may hold large inventory positions that exceed the amount necessary to facilitate client trades when the desk has a particular view on the direction of the market. Implementing and enforcing the restrictions to address activities such as this will be difficult. As we have noted, the act requires the Federal Reserve, OCC, FDIC, SEC, and the Commodity Futures Trading Commission to issue final rules to implement the restrictions on proprietary trading and hedge fund and private equity fund investments by October 2011. To inform this process, in recent months regulators have met with and collected general information, but not comprehensive data, from the largest U.S. bank holding companies on their proprietary trading and hedge fund and private equity fund activities. To inform the FSOC study released in January 2011, officials at Treasury said that they and the regulators had collected information from large institutions on ways the banks could implement the provisions, including ways of adapting their risk management systems to monitor compliance. Representatives of the Federal Reserve and OCC explained, and provided supporting documents showing, that as part of their ongoing monitoring of the largest bank holding companies, they track the trading and investment activities of the firms they oversee, including proprietary trading and other activities that may be restricted. At our request, Federal Reserve and OCC examination staff gathered some general information on the trading activities at each of the six firms. These financial regulators initially considered collecting specific data on the nature and volume of proprietary trading and investment activity at the largest firms as part of the FSOC study.
However, they instead focused on meeting with representatives of the largest financial institutions to gather qualitative information about how the entities monitor and manage the risks of trading and investment activities. As a result, the regulators have not compiled specific data on the nature and volume of trading at stand-alone proprietary-trading desks, nor have they attempted to get a more comprehensive understanding of the extent to which the firms are taking proprietary positions as part of conducting other trading or investment activities. Having such information now, including more complete data on the revenues and VaR levels of these firms’ market-making desks that may be conducting proprietary trading, would help regulators monitor the changes the bank holding companies make and would provide a comparative baseline for quantitatively observing whether the firms’ trading inventories and revenues change in the ways expected once the act’s restrictions are in place. While examiners have collected some information on certain trading and fund activities, they have yet to collect comprehensive information. Staff from some of these regulators told us that they have not collected more comprehensive information because they have not yet written the final rules that will define with greater specificity the types of trading and investment activities that will be prohibited. Indeed, collecting such information before the rule is finalized would be difficult without more specificity about permissible activities and the scope of coverage of certain types of fund investments. However, such an effort could be effective if regulators identified and collected information on a broader set of activities than may ultimately be prohibited, to help ensure they are aware of all trading and funds that could potentially be covered. Such a process would almost certainly inform the regulators about definitional and other issues that could be useful as part of the rulemaking.
Such information could also be collected after the rules are finalized, but doing so would likely require each regulator to obtain data from the firms it oversees covering a long enough period before the rules take effect to establish a baseline of activity, so that the regulator could better assess whether the firms are changing their activities as the rules require. Alternatively, FSOC could direct the Office of Financial Research, which was created within the Department of the Treasury by the act to facilitate more robust and sophisticated analysis of the financial system, to collect such information and share it with regulators as authorized under the act. The ability of financial institutions to conduct stand-alone proprietary trading and investments in hedge funds and private equity funds had advantages and disadvantages. While the activities produced a steady, if small, revenue stream for the institutions, they also contributed to losses during the financial crisis, which added to even greater losses from their lending and securitization activities. The extent to which proprietary trading activities occur elsewhere in the firms remains unknown. Further, these activities opened the door to potential conflicts of interest that in some cases resulted in enforcement actions against some firms. While some market participants expressed concerns that the restrictions on proprietary trading activities could negatively affect U.S. financial institutions and the economy by reducing banks’ ability to diversify their income and compete with foreign institutions and by reducing liquidity in asset markets, the potential for such effects remains unclear. While the regulators have started to take steps to improve their oversight, the recent crisis revealed the challenges financial regulators face in overseeing trading and investment activities at large financial institutions.
One challenge for regulators in implementing the act’s restrictions will be remaining mindful of possible unintended consequences. In addition, regulators will face the challenge of identifying and monitoring permissible activities that can create risks similar to those posed by proprietary trading and fund investments. For example, we found that many of the largest losses experienced by these firms were in activities such as lending and underwriting. For these reasons, and because of the uncertainty over whether some activities are or are not proprietary trading, regulators can best ensure the overall safety of the U.S. financial system by remaining vigilant about all activities that pose risks to large financial institutions, regardless of whether such activities fall under the definitions of proprietary trading and hedge fund and private equity fund investments that regulators develop as part of the required rulemaking. However, implementing the restrictions, in particular clarifying permissible activities and requiring monitoring to better ensure that only those activities occur, will be difficult because of these and other challenges. To date, the regulators have taken some positive steps to prepare for writing rules and supervising compliance with the act’s restrictions. Completing a more in-depth review of activities that may be covered by the act could provide information on the potential impact of the restrictions, how firms are preparing for them, whether there are efforts to evade the restrictions, and how to improve monitoring and enforcement. Because the regulators, either individually or through the Office of Financial Research, have yet to collect more complete information on the number and nature of trading desks where proprietary trading could be occurring, or on firms’ hedge fund and private equity fund investment activities, they risk not being able to implement the restrictions most effectively.
In order to improve their ability to track and effectively implement the new restrictions on proprietary trading and hedge fund and private equity fund investments, we recommend that the Chairperson of FSOC direct the Office of Financial Research, or work with the staffs of the Commodity Futures Trading Commission, FDIC, Federal Reserve, OCC, and SEC, or both, to collect and review more comprehensive information on the nature and volume of activities that could potentially be covered by the act. We provided a draft of this report to the Department of the Treasury, whose Secretary serves as the chairperson of FSOC; the Commodity Futures Trading Commission; FDIC; Federal Reserve; OCC; Office of Thrift Supervision; SEC; and representatives of the six bank holding companies from which we collected data. The Commodity Futures Trading Commission, Department of the Treasury, FDIC, Federal Reserve, OCC, and SEC provided written responses, which are reprinted in appendixes II through VII. Some of the agencies and bank holding companies provided technical comments that we incorporated as appropriate. The letters from the Commodity Futures Trading Commission, Department of the Treasury, FDIC, OCC, and SEC stated that the agencies will consider our recommendation as part of their Section 619 rulemaking process. The Commodity Futures Trading Commission, Department of the Treasury, FDIC, Federal Reserve, and OCC stated that, as noted in the FSOC study, the collection and analysis of information about trading activities is an important part of understanding trading activities and identifying prohibited proprietary trading. The Department of the Treasury, FDIC, and OCC said that as part of this process they would consider whether certain metrics or other data could be collected during the conformance period.
The Department of the Treasury, FDIC, OCC, and SEC stated, as we did in our report, that collecting information before the rule is finalized would be difficult without more specificity about permissible activities and the scope of coverage of certain types of fund investments. Although we acknowledge the difficulties of identifying and collecting additional information, gathering more comprehensive information on the nature and volume of trading at stand-alone proprietary-trading desks, as well as on where the firms may be conducting prohibited proprietary trading at market-making desks or elsewhere, would assist the regulators in implementing the act’s restrictions in various ways. Having such information now, including more complete data on the revenues and VaR levels of these firms’ desks that may be conducting proprietary trading, would help regulators monitor the changes the bank holding companies make and would provide a baseline for observing whether the firms’ trading inventories and revenues change in the ways expected once the act’s restrictions are in place. The agencies’ ongoing supervision and regulation of these firms, which for some agencies includes on-site examiners conducting ongoing monitoring, provides a valuable mechanism for collecting such baseline information going forward. Finally, the Department of the Treasury, FDIC, Federal Reserve, and OCC noted that the relevant agencies (or in some letters “some or all of the relevant agencies”) responsible for implementing and enforcing Section 619 are in the best position to collect and review relevant information on the nature and volume of activities that could be covered by Section 619. Our recommendation provides the Chairperson of FSOC the flexibility to direct the Office of Financial Research, or work with staff of the agencies, or both, to collect more comprehensive information.
We are sending copies of this report to the appropriate congressional committees; the Department of the Treasury, whose Secretary serves as the chairperson of FSOC; Commodity Futures Trading Commission; FDIC; Federal Reserve; OCC; Office of Thrift Supervision; SEC; and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. To describe what is known about the risks and conflicts of interest associated with proprietary trading and hedge and private equity fund investments, we collected and analyzed data and documents from, and interviewed, federal agency officials, financial institutions, economists, researchers, and others. These included the federal financial regulators, including the Board of Governors of the Federal Reserve System (Federal Reserve), the Office of the Comptroller of the Currency, the Securities and Exchange Commission, the Federal Deposit Insurance Corporation, and the Commodity Futures Trading Commission; the Financial Industry Regulatory Authority, which is the self-regulatory organization that oversees broker-dealers; industry associations; policy research organizations; and consumer advocacy organizations. We conducted site visits and teleconferences to interview senior management and observe trading desks at the six largest U.S. bank holding companies, as ranked by total assets reported in bank regulatory filings, as of December 31, 2010; these firms accounted for 88 percent of the total trading revenues reported by all bank holding companies. We also collected documents from and interviewed representatives of foreign regulators and research bodies about the U.S.
restrictions and whether their countries were likely to enact similar restrictions. In addition, to describe the risks associated with proprietary trading and investments in hedge and private equity funds, we reviewed and analyzed data from the six bank holding companies. To obtain information about the extent to which proprietary trading posed risks to these firms, we attempted to gather information on stand-alone proprietary trading as well as other proprietary trading that may be occurring within other trading activities of the firms. We gathered information on stand-alone proprietary trading but determined that collecting information on other activities that might constitute proprietary trading was not feasible, both because the firms did not separately maintain records on such activities and because of the uncertainty over the types of activities that will be considered proprietary trading upon completion of the required regulatory rulemaking. We obtained data from all firms covering both their stand-alone proprietary and total trading activities, including quarterly data on profits, losses, Value-at-Risk (VaR) estimates, and how often their losses exceeded their VaR estimates, for the period from the third quarter of 2006 through the fourth quarter of 2010, or 18 quarters. The bank holding companies also provided us with data on those hedge and private equity funds that they believed would be restricted by the Dodd-Frank Wall Street Reform and Consumer Protection Act (the act). We asked firms to self-identify any activities involving acquiring or retaining any equity, partnership, or ownership interest in, or sponsoring, private equity funds, as defined in Section 619 of the act. They provided quarterly data on revenue from such activities for the third quarter of 2006 through the fourth quarter of 2010, or 18 quarters.
We also analyzed selected public filing information collected from the companies’ 10-K and 10-Q filings and through SNL Financial, a company that aggregates public filing information. The data provided by firms were self-reported, and while we did not verify every data element’s accuracy, we took steps to help ensure that the data were complete and sufficiently reliable for our purposes. Specifically, we checked the data for missing values, outliers, and internal consistency. We also discussed the data provided with the companies and made follow-up requests for data and explanations as necessary to better ensure that we analyzed sufficiently complete and consistent information across all firms. In addition, we discussed with the Federal Reserve’s on-site examiners of the six bank holding companies the reliability of the information systems used to generate the data the companies reported to us, as well as the magnitude and ranges of the data provided. Finally, we reviewed information from each bank holding company about the reliability of its management information systems, which contained the computer-generated data it provided. While we determined that the data were sufficiently reliable for the purposes of this report, we present these data as representations made to us by these six largest bank holding companies. To describe how regulators oversee proprietary trading and hedge fund and private equity fund investment activity, we analyzed selected examination and other regulatory documents from, and interviewed, federal financial regulators. We reviewed our past reports that addressed risks at large institutions and how their regulators have overseen such risks. We also reviewed the comments submitted to the Financial Stability Oversight Council as part of its study required by Section 619 of the act.
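The kinds of reliability checks described above can be sketched in a few lines. The function name, thresholds, and sample figures below are hypothetical and illustrate the general approach of screening self-reported quarterly series, not GAO's actual procedures.

```python
def check_quarterly_series(series, n_quarters=18, outlier_z=3.0):
    """Return a list of data-reliability problems found in one firm's
    self-reported quarterly series (completeness, missing values,
    and simple z-score outlier screening)."""
    problems = []
    if len(series) != n_quarters:
        problems.append(f"expected {n_quarters} quarters, got {len(series)}")
    if any(v is None for v in series):
        problems.append("missing values present")
    vals = [v for v in series if v is not None]
    if vals:
        mean = sum(vals) / len(vals)
        sd = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
        if sd > 0:
            outliers = [v for v in vals if abs(v - mean) > outlier_z * sd]
            if outliers:
                problems.append(f"possible outliers: {outliers}")
    return problems

# Example: a short hypothetical series with one missing quarter.
reported = [10.0, 11.5, None, 9.8, 10.2]
print(check_quarterly_series(reported, n_quarters=5))
# → ['missing values present']
```

Flags like these would prompt the kind of follow-up requests to the firms that the methodology describes, rather than automatic rejection of the data.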
Finally, we interviewed representatives of the six largest bank holding companies to learn how they interacted with their regulators and discussed regulatory oversight with researchers, financial industry representatives, consumer advocacy organizations, and policy organizations. We conducted this performance audit from August 2010 to July 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Orice Williams Brown, (202) 512-8678 or [email protected]. In addition to the contact named above, Cody Goebel, Assistant Director; Rudy Chatlos; Randy Fasnacht; James Lager; Thomas McCool; Jon Menaster; Marc Molino; David Rodriguez; Paul Thompson; and Winnie Tsen made key contributions to this report.
In addition to trading on behalf of customers, banks and their affiliates have conducted proprietary trading, using their own funds to profit from short-term price changes in asset markets. To restrain risk-taking and reduce the potential for federal support for banking entities, the Dodd-Frank Wall Street Reform and Consumer Protection Act (the act) prohibits banking entities from engaging in certain proprietary trading. It also restricts investments in hedge funds, which actively trade in securities and other financial contracts, and private equity funds, which use debt financing to invest in companies or other less-liquid assets. Regulators must implement these restrictions by October 2011. As required by Section 989 of the act, GAO reviewed (1) what is known about the risks associated with such activities and the potential effects of the restrictions and (2) how regulators oversee such activities. To conduct this work, GAO reviewed the trading and fund investment activities of the largest U.S. bank holding companies and collected selected data on their profits, losses, and risk measures. GAO also reviewed regulators' examinations and other materials related to the oversight of the largest bank holding companies. Proprietary trading and investments in hedge funds and private equity funds, like other trading and investment activities, provide banking entities with revenue but also create the potential for losses. Banking entities have conducted proprietary trading at stand-alone proprietary-trading desks but also have conducted such trading elsewhere within their firms. GAO determined that collecting information on activities other than at stand-alone proprietary-trading desks was not feasible because the firms did not separately maintain records on such activities. As a result, GAO did not analyze data on broader proprietary trading activity but analyzed data on stand-alone proprietary-trading desks at the six largest U.S.
bank holding companies from June 2006 through December 2010. Compared to these firms' overall revenues, their stand-alone proprietary trading generally produced small revenues in most quarters and some larger losses during the financial crisis. In 13 quarters during this period, stand-alone proprietary trading produced revenues totaling $15.6 billion, which represented 3.1 percent or less of the firms' combined quarterly revenues from all activities. But in five quarters during the financial crisis, these firms lost a combined $15.8 billion from stand-alone proprietary trading, resulting in an overall loss from such activities over the 4.5-year period of about $221 million. However, one of the six firms was responsible for both the largest quarterly revenue at any single firm of $1.2 billion and two of the largest single-firm quarterly losses of $8.7 billion and $1.9 billion. These firms' hedge and private equity fund investments also produced small revenues in most quarters but somewhat larger losses during the crisis, compared to total firm revenues. Losses from these firms' other activities, which include lending and other activities that could potentially be defined as proprietary trading, affected their overall net incomes more during this period than stand-alone proprietary trading and fund investments did. Some market participants and observers were concerned that the act's restrictions could negatively affect U.S. financial institutions by reducing their income diversification and ability to compete with foreign institutions and by reducing liquidity in asset markets. However, with little evidence existing on these effects, the likelihood of these potential outcomes was unclear, and others argued that removing the risks of these activities benefits banking entities and the U.S. financial system. Financial regulators have struggled in the past to effectively oversee bank holding companies.
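The netting behind these totals can be checked with simple arithmetic. Because the inputs below are the report's rounded combined figures, the result is approximately, not exactly, the $221 million net loss cited, which is derived from the unrounded data.

```python
# Rounded combined totals from the report, in billions of dollars.
revenues_13_quarters = 15.6   # revenues in the 13 profitable quarters
losses_5_quarters = 15.8      # losses in the 5 crisis quarters

net_billions = revenues_13_quarters - losses_5_quarters
# With rounded inputs the net is about -$0.2 billion, consistent with
# the roughly -$221 million net loss the report derives from the
# unrounded data over the 4.5-year period.
print(round(net_billions, 1))
# → -0.2
```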
While the act's restrictions reduce the scope of activities regulators must monitor, implementing them poses challenges, including how to best ensure that firms do not take prohibited proprietary positions while conducting their permitted customer-trading activities. Regulators have yet to gather comprehensive information on the extent, revenues, and risk levels associated with activities that will potentially be covered, which would help them assess whether expected changes in firms' revenues and risk levels have occurred. Without such data, regulators will not know the full scope of such activities outside of stand-alone proprietary trading desks and may be less able to ensure that the firms have taken sufficient steps to curtail restricted activity. As part of implementing the new restrictions, regulators should collect and review more comprehensive information on the nature and volume of activities potentially covered by the act. Treasury and the financial regulators agreed to consider this as part of their rulemaking.
The Social Security Act of 1935 authorized SSA to establish a recordkeeping system to help manage the Social Security program, and this resulted in the creation of the SSN. Through a process known as “enumeration,” SSA creates a unique number for each person to serve as a record of work and retirement benefits under the Social Security program. Today, SSNs are generally issued to most U.S. citizens and are also available to noncitizens lawfully admitted to the United States with permission to work. Lawfully admitted noncitizens may also qualify for an SSN for nonwork purposes when a federal, state, or local law requires an SSN to obtain a particular welfare benefit or service. SSA staff collect and verify information from such applicants regarding their age, identity, citizenship, and immigration status. Most of the agency’s enumeration workload involves U.S. citizens, who generally receive SSNs through SSA’s birth registration process handled by hospitals. However, individuals seeking SSNs can also apply in person at any of SSA’s field locations, through the mail, or via the Internet. The uniqueness and broad applicability of the SSN have made it the identifier of choice for government agencies and private businesses, both for compliance with federal requirements and for the agencies’ and businesses’ own purposes. In addition, the boom in computer technology over the past decades has prompted private businesses and government agencies to rely on SSNs as a way to accumulate and identify information for their databases. As such, SSNs are often the identifier of choice among individuals seeking to create false identities. Law enforcement officials and others consider the proliferation of false identities to be one of the fastest-growing crimes today. In 2002, the Federal Trade Commission received 380,103 consumer fraud and identity theft complaints, up from 139,007 in 2000, and consumers reported losses from fraud of more than $343 million that year.
In addition, according to SSA, identity crime accounts for over 80 percent of Social Security number misuse allegations. As we reported to you last year, federal, state, and county government agencies use SSNs. When these entities administer programs that deliver services and benefits to the public, they rely extensively on the SSNs of those receiving the benefits and services. Because SSNs are unique identifiers and do not change, the numbers provide a convenient and efficient means of managing records. They are also particularly useful for data sharing and data matching because agencies can use them to check or compare their information quickly and accurately with that from other agencies. In so doing, these agencies can better ensure that they pay benefits or provide services only to eligible individuals and can more readily recover delinquent debts individuals may owe. In addition to using SSNs to deliver services or benefits, agencies also use or share SSNs to conduct statistical research and program evaluations. Moreover, most of the government departments or agencies we surveyed use SSNs to varying extents to perform some of their responsibilities as employers, such as paying their employees and providing health and other insurance benefits. Many of the government agencies we surveyed in our work last year reported maintaining public records that contain SSNs. This is particularly true at the state and county level, where certain offices, such as state professional licensing agencies and county recorders’ offices, have traditionally been repositories for public records that may contain SSNs. These records chronicle various life events and other activities of individuals as they interact with the government, such as birth certificates, professional licenses, and property title transfers. Generally, state law governs whether and under what circumstances these records are made available to the public, and these laws vary from state to state.
They may be made available for a number of reasons, including the presumption that citizens need key information to ensure that government is accountable to the people. Certain records maintained by federal, state, and county courts are also routinely made available to the public. In principle, these records are open to aid in preserving the integrity of the judicial process and to enhance public trust and confidence in it. At the federal level, access to court documents generally has its grounding in common law and constitutional principles. In some cases, public access is also required by statute, as is the case for papers filed in a bankruptcy proceeding. As with federal courts, requirements regarding access to state and local court records may have a state common law or constitutional basis or may be based on state laws. Although public records have traditionally been housed in government offices and court buildings, to improve customer service, some state and local government entities are considering placing more public records on the Internet. Because such actions would create new opportunities for gathering SSNs from public records on a broad scale, we are beginning work for this Subcommittee to examine the extent to which SSNs in public records are already accessible via the Internet. In our current work, we found that some private sector entities also rely extensively on the SSN. Businesses often request an individual’s SSN in exchange for goods or services. For example, some businesses use the SSN as a key identifier to assess credit risk, track patient care among multiple providers, locate bankruptcy assets, and provide background checks on new employees. In some cases, businesses require individuals to submit their SSNs to comply with federal laws such as the tax code. Currently, there is no federal law that generally prohibits businesses from requiring a person’s SSN as a condition of providing goods and services. 
If an individual refuses to give his or her SSN to a company or organization, that entity can refuse to provide goods or services. To build on our previous work examining certain private sector entities’ use of SSNs, we have focused our initial private sector work on information resellers and consumer reporting agencies (CRAs). Some of these entities have come to rely on the SSN as an identifier to accumulate information about individuals, which helps them determine the identity of an individual for purposes such as employment screening, credit information, and criminal histories. This is particularly true of entities, known as information resellers, who amass personal information, including SSNs. Information resellers often compile information from various public and private sources. These entities provide their products and services to a variety of customers, although the larger ones generally limit their services to customers that establish accounts with them, such as law firms and financial institutions. Other information resellers often make their information available through the Internet to persons paying a fee to access it. CRAs are also large private sector users of SSNs. These entities often rely on SSNs, as well as individuals’ names and addresses, to build and maintain credit histories. Businesses routinely report consumers’ financial transactions, such as charges, loans, and credit repayments, to CRAs. CRAs use SSNs to determine consumers’ identities and ensure that incoming consumer account data is matched correctly with information already on file. Certain laws such as the Fair Credit Reporting Act, the Gramm-Leach-Bliley Act, and the Driver’s Privacy Protection Act have helped to limit the use of personal information, including SSNs, by information resellers and CRAs. These laws limit the disclosure of information by these entities to specific circumstances. 
In our discussion with some of the larger information resellers and CRAs, we were told that they take specific actions to adhere to these laws, such as establishing contracts with their clients specifying that the information obtained will be used only for accepted purposes under the law. The extensive public and private sector uses of SSNs and the availability of public records and other information, especially via the Internet, have allowed individuals’ personal information to be aggregated into multiple databases or centralized locations. In the course of our work, we have identified numerous examples where public and private databases have been compromised and personal data, including SSNs, stolen. In some instances, the display of SSNs in public records and easily accessible Web sites provided opportunities for identity thieves. In other instances, databases not readily available to outsiders have had their security breached by employees with access to key information. For example, in our current work, we identified a case where two individuals obtained the names and SSNs of 325 high-ranking U.S. military officers from a public Web site, then used those names and identities to apply for instant credit at a leading computer company. Although criminals have not accessed all public and private databases, such cases illustrate that these databases are vulnerable to criminal misuse. Because SSA is the issuer and custodian of SSN data, SSA has a unique role in helping to prevent the proliferation of false identities. Following the events of September 11, 2001, SSA began taking steps to increase management attention on enumeration and formed a task force to address weaknesses in the enumeration process. As a result of this effort, SSA has developed major new initiatives to prevent the inappropriate assignment of SSNs to noncitizens. 
However, our preliminary findings to date identified some continued vulnerabilities in the enumeration process, including SSA’s process for issuing replacement Social Security cards and assigning SSNs to children under age one. SSA is also increasingly called upon by states to verify the identity of individuals seeking driver licenses. We found that fewer than half the states have used SSA’s service and that the extent to which they regularly use the service varies widely. Factors such as costs, problems with system reliability, and state priorities have affected states’ use of SSA’s verification service. We also identified a key weakness in the service that exposes some states to inadvertently issuing licenses to individuals using the SSNs of deceased individuals. We plan to issue reports on these issues in September that will likely contain recommendations to improve SSA’s enumeration process and its SSN verification service. SSA has increased document verifications and developed new initiatives to prevent the inappropriate assignment of SSNs to noncitizens, whose applications account for the bulk of all initial SSNs issued by SSA’s 1,333 field offices. Despite SSA’s progress, some weaknesses remain. SSA has increased document verifications by requiring independent verification of the documents and immigration status of all noncitizen applicants with the issuing agency, namely DHS or the Department of State (State Department), prior to issuing the SSN. However, many field office staff we interviewed are relying heavily on DHS’s verification service, while neglecting standard, in-house practices for visually inspecting and verifying identity documents. We also found that while SSA has made improvements to its automated system for assigning SSNs, the system is not designed to prevent the issuance of a SSN if field staff bypass essential verification steps. 
SSA also has begun requiring foreign students to show proof of their full-time enrollment, and a number of field office staff told us they may verify this information if the documentation appears suspect. However, SSA does not require this verification step, nor does the agency have access to a systematic means to independently verify students’ status. Consequently, SSNs for noncitizen students may still be improperly issued. SSA has also undertaken other new initiatives to shift the burden of processing noncitizen applications from its field offices. SSA recently piloted a specialized center in Brooklyn, New York, which focuses exclusively on enumeration and utilizes the expertise of DHS document examiners and investigators from SSA’s Office of the Inspector General (OIG). However, the future of this pilot project and DHS’s participation has not yet been determined. Meanwhile, in late 2002, SSA began a phased implementation of a long-term process to issue SSNs to noncitizens at the point of entry into the United States, called “Enumeration at Entry” (EAE). EAE offers the advantage of using State Department and DHS expertise to authenticate information provided by applicants for subsequent transmission to SSA, which then issues the SSN. Currently, EAE is limited to immigrants age 18 and older who have the option of applying for a SSN at one of the 127 State Department posts worldwide that issue immigrant visas. SSA has experienced problems with obtaining clean records from both the State Department and DHS, but plans to continue expanding the program over time to include other noncitizen groups, such as students and temporary visitors. SSA also intends to evaluate the initial phase of EAE in conjunction with the State Department and DHS. While SSA has embarked on these new initiatives, it has not tightened controls in two key areas of its enumeration process that could be exploited by individuals seeking fraudulent SSNs. 
One area is the assignment of SSNs to children under age one. Prior work by SSA’s Inspector General identified the assignment of SSNs to children as an area prone to fraud because SSA did not independently verify the authenticity of various state birth certificates. Despite the training and guidance provided to field office employees, the OIG found that the quality of many counterfeit documents was often too good to detect simply by visual inspection. Last year, SSA revised its policies to require that field staff obtain independent third-party verification of the birth records for U.S.-born individuals age one and older from the state or local bureau of vital statistics prior to issuing a SSN card. However, SSA left in place its policy for children under age one and continues to require only a visual inspection of documents, such as birth records. SSA’s policies relating to enumerating children under age one expose the agency to fraud. During our fieldwork, we found an example of a noncitizen who submitted a counterfeit birth certificate in support of a SSN application for a fictitious U.S.-born child under age one. In this case, the SSA field office employee identified the counterfeit state birth certificate by comparing it with an authentic one. However, SSA staff acknowledged that if a counterfeit out-of-state birth certificate had been used, SSA would likely have issued the SSN because of staff unfamiliarity with the specific features of the numerous state birth certificates. Further, we were able to demonstrate the ease with which individuals can obtain SSNs by exploiting SSA’s current processes. Working in an undercover capacity, our investigators were able to obtain two SSNs. By posing as parents of newborns, they obtained the first SSN by applying in person at a SSA field office using a counterfeit birth certificate and baptismal certificate. Using similar documents, our investigators obtained a second SSN by submitting all material via the mail. 
In both cases, SSA staff verified our counterfeit documents as being valid. SSA officials told us that they are reevaluating their policy for enumerating children under age one. However, they noted that parents often need a SSN for their child soon after birth for various reasons, such as for income tax purposes. They acknowledge that a challenge facing the agency is to strike a better balance between serving the needs of the public and ensuring SSN integrity. In addition to the assignment of SSNs to children under the age of one, SSA’s policy for replacing Social Security cards also increases the potential for misuse of SSNs. SSA’s policy allows individuals to obtain up to 52 replacement cards per year. Of the 18 million cards issued by SSA in fiscal year 2002, 12.4 million, or 69 percent, were replacement cards. More than 1 million of these cards were issued to noncitizens. While SSA requires noncitizens applying for a replacement card to provide the same identity and immigration information as if they were applying for an original SSN, SSA’s evidence requirements for citizens are much less stringent. Citizens applying for a replacement card need not prove their citizenship; they may use as proof of identity such documents as a driver’s license, passport, employee identification card, school identification card, church membership or confirmation record, life insurance policy, or health insurance card. The ability to obtain numerous replacement SSN cards with less documentation creates an opportunity for requestors to obtain SSNs for a wide range of illicit uses, including selling the cards to individuals seeking to hide their identity or create a new one for some illicit activity. SSA told us the agency is considering limiting the number of replacement cards, with certain exceptions such as for name changes, administrative errors, and hardships. 
However, officials cautioned that while support exists for this change within the agency, some advocacy groups oppose such a limit. Field staff we interviewed told us that despite their reservations regarding individuals seeking excessive numbers of replacement cards, they were required under SSA policy to issue the cards. Many of the field office staff and managers we spoke to acknowledged that the current policy weakens the integrity of SSA’s enumeration process. The events of September 11, 2001, focused attention on the importance of identifying people who use false identity information or documents, particularly in the driver licensing process. Driver licenses are a widely accepted form of identification that individuals frequently use to obtain services or benefits from federal and state agencies, open a bank account, request credit, board an airplane, and carry on other important activities of daily living. For this reason, driver licensing agencies are points at which individuals may attempt to fraudulently obtain a license using a false name, SSN, or other documents such as birth certificates to secure this key credential. Given that most states collect SSNs during the licensing process, SSA is uniquely positioned to help states verify the identity information provided by applicants. To this end, SSA has a verification service in place that allows state driver licensing agencies to verify the SSN, name, and date of birth of customers against SSA’s master file of SSN owners. States can transmit requests for SSN verification in two ways. One is by sending multiple requests together, called the “batch” method, to which SSA reports it generally responds within 48 hours. The other way is to send an individual request on-line, to which SSA responds immediately. Twenty-five states have used the batch or on-line method to verify SSNs with SSA, and the extent to which they use the service on a regular basis varies. 
About three-fourths of the states that rely on SSA’s verification service used the on-line method or a combination of the on-line and batch methods, while the remaining states used the batch method exclusively. Over the last several years, batch states estimated submitting over 84 million batch requests to SSA, compared to 13 million requests submitted by on-line users. States’ use of SSA’s on-line service has increased steadily over the last several years. However, the extent of use has varied significantly, with 5 states submitting over 70 percent of all on-line verification requests and one state submitting about one-third of the total. Various factors, such as costs, problems with system reliability, and state priorities, affect states’ decisions regarding use of SSA’s verification service. In addition to the per-transaction fees that SSA charges, states may incur additional costs to set up and use SSA’s service, including costs for computer programming, equipment, staffing, and training. Moreover, states’ decisions about whether to use SSA’s service, or the extent to which to use it, are also driven by internal policies, priorities, and other concerns. For example, some of the states we visited have policies requiring their driver licensing agencies to verify all customers’ SSNs. Other states may limit their use of the on-line method to certain targeted populations, such as cases where fraud is suspected or for initial licenses, but not for renewals of in-state licenses. The nonverifying states we contacted expressed reluctance to use SSA’s verification service based on performance problems they had heard were encountered by other states. Some states cited concerns about frequent outages and the slowness of the on-line system. Other states mentioned that the extra time to verify and resolve SSN problems could increase customer waiting times because a driver license would not be issued until verification was complete. 
Indeed, weaknesses in SSA’s design and management of its on-line SSN verification service have limited its usefulness and contributed to capacity and performance problems. SSA used an available infrastructure to set up the system and encountered capacity problems that continued and worsened after the pilot phase. The capacity problems inherent in the design of the on-line system have affected state use of SSA’s verification service. Officials in one state told us that they have been forced to scale back their use of the system because they were told by SSA that their volume of transactions was overloading the system. In addition, because of issues related to performance and reliability, no new states have begun using the service since the summer of 2002. At the time of our review, 10 states had signed agreements with SSA and were waiting to use the on-line system, and 17 states had received funds from the Department of Transportation for the purpose of verifying SSNs with SSA. It is uncertain how many of the 17 states will ultimately opt to use SSA’s on-line service. However, even if they signed agreements with SSA today, they may not be able to use the service until the backlog of waiting states is addressed. More recently, SSA has made some necessary improvements to increase system capacity and to refocus its attention on the day-to-day management of the service. However, at the time of our review, the agency still had not established goals for the level of service it will provide to driver licensing agencies. In reviewing SSA’s verification service, we identified a key weakness that exposes some states to issuing licenses to applicants using the personal information of deceased individuals. Unlike the on-line service, the batch service does not match requests against SSA’s nationwide death records. As a result, the batch method will not identify and prevent the issuance of a license in cases where the SSN, name, and date of birth of a deceased individual are being used. 
SSA officials told us that they initially developed the batch method several years ago and did not design the system to match SSNs against SSA’s death files. However, in developing the on-line system for state driver licensing agencies, SSA built a death match into the new process. At the time of our review, SSA acknowledged that it had not explicitly informed states about this limitation of the batch service. Our own analysis of one month of SSN transactions submitted to SSA by one state using the batch method identified at least 44 cases in which individuals used the SSN, name, and date of birth of persons listed as deceased in SSA’s records to obtain a license or an identification card. We forwarded this information to state investigators, who quickly confirmed that licenses and identification cards had been issued in 41 cases and were continuing to investigate the others. To further assess states’ vulnerability in this area, our own investigators, working in an undercover capacity, were able to obtain licenses in two batch states using a counterfeit out-of-state license, other fraudulent documents, and the SSNs of deceased persons. In both states, driver licensing employees accepted the documents we submitted as valid. Our investigators completed the transaction in one state and left with a new valid license. In the second state, the new permanent license arrived by mail within weeks. The ease with which our investigators were able to obtain these licenses confirmed the vulnerability of states currently using the batch method as a means of SSN verification. Moreover, states that have used the batch method in prior years to clean up their records and verify the SSNs of millions of driver license holders may also have unwittingly left themselves open to identity theft and fraud. 
The use of SSNs by both public and private sector entities is likely to continue, given that the SSN is the key identifier used by most of these entities and there is currently no other widely accepted alternative. To help control such use, certain laws have helped to safeguard personal information, including SSNs, by limiting disclosure of such information to specific purposes. To the extent that personal information is aggregated in public and private sector databases, it becomes vulnerable to misuse. In addition, to the extent that public record information becomes more available in an electronic format, it becomes more vulnerable to misuse. The ease of access the Internet affords could encourage individuals to engage in information gathering from public records on a broader scale than they could before, when they had to visit a physical location and request or search for information on a case-by-case basis. SSA has made substantial progress in protecting the integrity of the SSN by requiring that the immigration and work status of every noncitizen applicant be verified before an SSN is issued. However, without further system improvements and assurance that field offices will comply fully with the new policies and procedures, this effort may be less effective than it could be. Further, as SSA closes off many avenues of unauthorized access to SSNs, perpetrators of fraud will likely shift their strategies to less protected areas. In particular, SSA’s policies for enumerating children and providing unlimited numbers of replacement cards may well invite such activity, unless they too are modified. State driver license agencies face a daunting task in ensuring that the identity information of those to whom they issue licenses is verified. States’ effectiveness in verifying individuals’ identities often depends on several factors, including the receipt of timely and accurate identity information from SSA. 
Unfortunately, design and management weaknesses associated with SSA’s verification service have limited its effectiveness. States that are unable to take full advantage of the service and others that are waiting for the opportunity to use it remain vulnerable to identity crimes. In addition, states that continue to rely primarily or partly on SSA’s batch verification service still risk issuing licenses to individuals using the SSNs and other identity information of deceased individuals. This remains a critical flaw in SSA’s service and in states’ efforts to strengthen the integrity of the driver license. GAO is preparing to publish reports covering the work I have summarized within the next several months, which will include recommendations aimed at ensuring the integrity of the SSN. We look forward to continuing to work with this Subcommittee on these important issues. I would be happy to respond to any questions you or other members of the Subcommittee may have. For further information regarding this testimony, please contact Barbara D. Bovbjerg, Director, or Dan Bertoni, Assistant Director, Education, Workforce, and Income Security, at (202) 512-7215. Individuals making key contributions to this testimony include Andrew O’Connell, John Cooney, Tamara Cross, Paul DeSaulniers, Patrick DiBattista, Jason Holsclaw, George Ogilvie, George Scott, Jacquelyn Stewart, Robyn Stewart, and Tony Wysocki.
In 1936, the Social Security Administration (SSA) established the Social Security Number (SSN) to track workers' earnings for Social Security benefit purposes. However, the SSN is also used for a myriad of non-Social Security purposes. Today, the SSN is used, in part, as a verification tool for services such as child support collection, law enforcement enhancements, and issuing credit to individuals. Although these uses of SSNs are beneficial to the public, SSNs are also a key piece of information in creating false identities. Moreover, the aggregation of personal information, such as SSNs, in large corporate databases, as well as the public display of SSNs in various public records, may provide criminals the opportunity to commit identity crimes. SSA, the originator of the SSN, is responsible for ensuring SSN integrity and verifying the authenticity of identification documents used to obtain SSNs. Although Congress has passed a number of laws to protect an individual's privacy, the continued use of and reliance on SSNs by private and public sector entities and the potential for misuse underscore the importance of identifying areas that can be strengthened. Accordingly, this testimony focuses on describing (1) public and private sector use and display of SSNs, and (2) SSA's role in preventing the proliferation of false identities. Public and some private sector entities rely extensively on SSNs. We reported last year that federal, state, and county government agencies rely on the SSN to manage records, verify eligibility of benefit applicants, and collect outstanding debt. SSNs are also displayed on a number of public record documents that are routinely made available to the public. To improve customer service, some state and local government entities are considering placing more public records on the Internet. 
In addition, some private sector entities have come to rely on the SSN as an identifier, using it and other information to accumulate information about individuals. This is particularly true of entities that amass public and private data, including SSNs, for resale. Certain laws have helped to restrict the use of SSNs and other information by these private sector entities to specific purposes. However, as a result of the increased use and availability of SSN information and other data, more and more personal information is being centralized in various corporate and public databases. Because SSNs are often the identifier of choice among individuals seeking to create false identities, to the extent that personal information is aggregated in public and private sector databases, it becomes vulnerable to misuse. As the agency responsible for issuing SSNs and maintaining the earnings records for millions of SSN holders, SSA plays a unique role in helping to prevent the proliferation of false identities. Following the events of September 11, 2001, SSA formed a task force to address weaknesses in the enumeration process and developed major new initiatives to prevent the inappropriate assignment of SSNs to non-citizens, whose applications represent the bulk of new SSNs issued by SSA's 1,333 field offices. SSA now requires field staff to verify the identity information and immigration status of all non-citizen applicants with the Department of Homeland Security (DHS) prior to issuing an SSN. However, other areas remain vulnerable and could be targeted by those seeking fraudulent SSNs. These include SSA's process for assigning Social Security numbers to children under age one and issuing replacement Social Security cards. SSA also provides a service to states to verify the SSNs of driver license applicants. Fewer than half the states have used SSA's service, and the extent to which they regularly use it varies. 
Factors such as cost, problems with system reliability, and state priorities and policies affect states' use of SSA's service. We also identified a weakness in SSA's verification service that exposes some states to fraud by those using the SSNs of deceased persons.
Lebanon is a small, religiously diverse country on the Mediterranean Sea that borders Israel and Syria. (See fig. 1.) Religious tensions among Lebanon’s Maronite Christians, Sunni Muslims, and Shiite Muslims, among others, along with an influx of Palestinian refugees into Lebanon, have fueled Lebanon’s internal strife and conflicts with its neighbors. During the civil war between 1975 and 1990, both Syrian and Israeli forces occupied the country. In the midst of the civil war and Israel’s continued occupation of southern Lebanon, Hezbollah emerged in Lebanon as a powerful Islamic militant group. Throughout the 1990s, Hezbollah, funded by Iran and designated by the United States and Israel as a terrorist organization, pursued its military campaign against Israeli forces occupying Lebanon while also participating in Lebanon’s political system. In 2000, Israeli forces withdrew from southern Lebanon. In 2005, with pressure from the international community, Syrian forces withdrew from Lebanon following the assassination of Lebanon’s prime minister. The subsequent parliamentary elections that year resulted in a member of Hezbollah holding a cabinet position for the first time. In the summer of 2006, Hezbollah and Israel entered into a month-long conflict that ended with the adoption of United Nations Security Council Resolution 1701 by both the Israeli and Lebanese governments. The resolution called for Israeli withdrawal from southern Lebanon in parallel with the deployment of Lebanese and United Nations forces and the disarmament of all armed groups in Lebanon, among other things. Since 2011, instability in neighboring Syria may have exacerbated sectarian conflict within Lebanon. See figure 2 for a timeline of selected political events in Lebanon. The United States has provided Lebanon with assistance, such as emergency humanitarian aid during the civil war and training for military forces under the International Military Education and Training (IMET) program. 
The United States and Lebanon have historically enjoyed a good relationship in part because of cultural and religious ties, a large Lebanese-American community in the United States, and the pro-Western orientation of Lebanon, particularly during the Cold War. Following the Syrian withdrawal in 2005 and the 2006 Israeli-Hezbollah war, the United States increased its security assistance to Lebanon. The United States provided assistance to the LAF, which is generally responsible for providing border security, counterterrorism, and national defense, and to the ISF, or police force, which is generally responsible for maintaining law and order in Lebanon. The fiscal year 2006 appropriations marked the first time since 1984 that U.S. agencies allocated Foreign Military Financing grants to help modernize and equip the LAF. Since then, the United States has provided security assistance to Lebanon through the Foreign Military Financing program; IMET; the International Narcotics Control and Law Enforcement (INCLE) program; the Nonproliferation, Antiterrorism, Demining, and Related (NADR) programs; and Section 1206 and Section 1207 authorities for training and equipping foreign militaries and security forces and for reconstruction, stabilization, and security activities in foreign countries, respectively. The NADR security assistance for Lebanon has been provided through three programs: Antiterrorism Assistance, Counterterrorism Financing, and Export Control and Related Border Security (EXBS). Table 1 describes these security assistance programs. In addition to these security assistance programs, U.S. Special Forces units have provided specialized training to LAF Special Forces units, according to agency officials. The United States has kept strategic goals for Lebanon constant since 2007. These goals include supporting the Government of Lebanon in establishing stability and security against internal threats from militant extremists and countering destabilizing influences. U.S. 
agencies have adjusted security assistance in response to Lebanon’s political and security conditions. For example, State and DOD have delayed releasing funds and limited the types of equipment provided. Both agencies have also implemented additional assistance programs since fiscal year 2007. Since 2007, U.S. strategic goals for Lebanon have been to support the nation as a stable, secure, and independent democracy. According to DOD and State officials, the overarching priorities for Lebanon remain focused on supporting Lebanese sovereignty and stability and countering the influence of Syria and Iran. Security-related goals for Lebanon focus on counterterrorism and regional stability or internal security. Program activities seek to support development of the LAF and the ISF as the only legitimate providers of Lebanon’s security. The goals and objectives of the individual security assistance programs are intended to support the U.S. strategic goals and overarching priorities, as the following examples illustrate: Goals for the Foreign Military Financing, IMET, and 1206 programs in Lebanon since the departure of Syrian forces are to bolster the capability of the LAF, nurture the bilateral military relationship between the United States and the LAF, and continue encouraging establishment of a stable, legal, and pro-U.S. civil government. Goals for the INCLE police training program are (1) to build Lebanon’s operational capacity to combat crime and prevent and respond to terror attacks; and (2) to assist Lebanon in developing the ISF into a competent, professional, and democratic police force with the necessary training, equipment, and institutional capacity to enforce the rule of law in Lebanon, cement sovereign Lebanese government control over its territory, and protect the Lebanese people.
Goals for the Antiterrorism Assistance program in Lebanon are to develop and build the Lebanese government’s capacities in border security, mid- and senior-level leadership development, and counterterrorism investigations. Goals for the Counterterrorism Financing program are to deny terrorists access to money, resources, and support. Goals for EXBS in Lebanon focus on strengthening the capability of Lebanese enforcement agencies to effectively control cross-border trade in strategic goods. Goals for the 1207 program were to strengthen Lebanon’s internal security forces after armed conflicts in 2006 and 2007. The goals and objectives of U.S. security assistance to Lebanon have continued to focus on supporting Lebanese sovereignty and stability and countering the influences of Syria and Iran. However, according to State and DOD officials, the agencies have changed how they implement programs based on changes in the political and security situation, for example, by delaying the release of funds or limiting the types of equipment provided. In one instance, State delayed committing fiscal year 2010 Foreign Military Financing funds as a result of an incident on August 3, 2010, in which the LAF opened fire on an Israel Defense Forces unit engaged in routine maintenance along the Blue Line, alleging that it had crossed into Lebanese territory. Two Lebanese soldiers, a journalist, and an Israeli officer were killed. According to State, in response to concerns raised by a member of Congress, State delayed committing $100 million of fiscal year 2010 Foreign Military Financing funds for Lebanon, citing the need to determine whether equipment that the United States provided to the LAF was used against Israel. According to a State official, State committed the funds in November 2010 after consulting with the member of Congress.
In addition, according to State officials, State and DOD decided to place a temporary hold on lethal assistance to the LAF in January 2011, after the collapse of Prime Minister Saad Hariri’s government. In March 2012, the agencies decided to lift the hold on lethal assistance based on the Mikati government’s adherence to key international obligations to the Special Tribunal for Lebanon and to United Nations Security Council resolutions. The U.S. government has also implemented new security assistance programs in Lebanon since fiscal year 2007 in response to changing security assistance needs, according to State officials. For example, State began implementing the EXBS program in Lebanon in fiscal year 2009—after State performed an assessment to identify deficiencies in the country’s ability to detect and interdict weapons of mass destruction and advanced conventional weapons, according to officials of State’s Bureau of International Security and Nonproliferation. State determined that Lebanon did not have a comprehensive system to regulate trade in strategic goods and technologies for the purpose of preventing the proliferation of such weapons, and lacked the necessary legal and institutional elements to manage strategic trade consistent with international standards. In addition, in fiscal year 2010, State’s Bureau of Counterterrorism began implementing its Counterterrorism Financing program in Lebanon. According to bureau officials, the goals of the program are to build the capacity of the Government of Lebanon to develop laws and regulations to deny terrorists access to funds. State has also changed the focus of some of the security assistance programs. For example, the INCLE police training program is no longer providing basic training to cadets, according to the Bureau of International Narcotics and Law Enforcement Affairs. The bureau turned the basic training over to the ISF as of July 2012 and is providing leadership and management training.
The bureau is working with the Federal Bureau of Investigation and the Drug Enforcement Administration to develop specialized training to increase the ISF’s investigatory skills, according to U.S. embassy officials. The bureau is also expanding its community policing program, according to bureau officials. In addition, State’s Bureau of Counterterrorism has changed the focus of its antiterrorism training, according to officials of State’s Bureau of Diplomatic Security whom we interviewed in Lebanon. They explained that when the program began in 2007, it focused on training Lebanese security details to protect national leaders. The trained protection details have demonstrated a mastery of the skills for which they were trained. As a result, the Bureau of Counterterrorism and the Bureau of Diplomatic Security consider the protection mission to be complete, and since 2010 the program has shifted its focus to border security, management and leadership, and counterterrorism investigations, according to the Diplomatic Security officials. Finally, while there have been no major changes to the EXBS program in Lebanon, U.S. embassy officials stated that the United States has increased the program’s emphasis on convincing the government to change its export control laws. U.S. agencies have allocated over $925 million in security assistance for Lebanon from fiscal years 2007 through 2012, with funds varying by year and program. The majority of funds—69 percent—came from the Foreign Military Financing program, though State and DOD also used seven other programs. State has disbursed the majority of funds it allocated for each fiscal year, while DOD has committed the majority of funds for Foreign Military Financing. In order to help achieve U.S. strategic goals, State and DOD have allocated more than $925 million in security assistance for Lebanon’s LAF and ISF from fiscal years 2007 through 2012.
This funding peaked in fiscal year 2007 at about $323 million, declined in fiscal year 2008 to about $32 million, and has fluctuated from fiscal years 2009 through 2012. (See fig. 3.) In fiscal year 2007, the $325 million allocated for security assistance for Lebanon followed the 2006 war between Hezbollah and Israel and assisted Lebanon in fulfilling its obligations under the United Nations Security Council Resolution 1701. This funding level represented a significant increase over fiscal year 2006, when U.S. security assistance for Lebanon totaled only about $28 million. See appendix II for information on allocation, obligation, and disbursement or commitment of security assistance for Lebanon by program and year. U.S. agencies allocated the majority (approximately 69 percent, or $641 million) of security assistance for Lebanon from fiscal years 2007 through 2012 through the Foreign Military Financing program, which provides grants and loans to the Lebanese Armed Forces to purchase U.S. equipment, services, and training. Figure 4 shows the distribution of U.S. security assistance allocated for Lebanon by program. The Foreign Military Financing program funded the purchase of various equipment and services. For example, Lebanon received trucks, truck tractors, trailers, ambulances, cargo-troop carriers, and armament vehicles through the Foreign Military Financing program. Other equipment purchased through the Foreign Military Financing program included helicopters, ships, radios, and spare parts. Appendix III presents selected equipment and services provided to Lebanon by the U.S. government and by other governments from fiscal years 2007 through 2012. Generally, State disbursed the majority of funds allocated during the same fiscal year for security assistance programs providing funds for Lebanon, and DOD committed the majority of Foreign Military Financing funds for Lebanon. 
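The percentage shares cited above follow from simple arithmetic on the report's rounded totals. As an illustrative check (the dollar figures below are the rounded totals quoted in this report, in millions; exact program-level amounts appear in appendix II):

```python
# Quick arithmetic check of the allocation share cited in this report.
# Figures are the rounded totals quoted here (millions of USD), not
# exact program-level amounts from appendix II.
total_allocated = 925   # total U.S. security assistance for Lebanon, FY2007-FY2012
fmf_allocated = 641     # Foreign Military Financing portion of that total

fmf_share = fmf_allocated / total_allocated
print(f"FMF share of total allocations: {fmf_share:.0%}")  # prints "69%"
```

This reproduces the report's figure that the Foreign Military Financing program accounted for approximately 69 percent of total allocations.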
See appendix II for additional information on allocation, obligation, and disbursement or commitment of security assistance for Lebanon by program and year. Specifically, State disbursed about 78 percent of total allocations from fiscal years 2007 through 2012 for the IMET, INCLE, Antiterrorism Assistance, Counterterrorism Financing, and EXBS programs and the Section 1206 and 1207 authorities. Although State has disbursed the majority of these allocated funds, State has not disbursed some funds allocated for the INCLE and Antiterrorism Assistance programs for fiscal years 2007 through 2011. According to State officials, some obligated funds are waiting to be disbursed and all of the unobligated funds are no longer available for obligation. DOD has committed 87 percent of Foreign Military Financing funds for Lebanon for fiscal years 2007 through 2012. As of February 2013, U.S. agencies had evaluated only the INCLE police training program for Lebanon and did not have firm plans to evaluate the other six ongoing security assistance programs. State’s evaluation policy requires periodic evaluations of certain programs, consistent with standards established in U.S. law. Evaluations can be facilitated by collecting data on appropriate performance indicators. However, we and other audit agencies have previously reported deficiencies in how State and DOD measure program performance. For example, we found in 2011 that the IMET program evaluation efforts had few of the elements commonly accepted as appropriate for measuring progress and did not objectively measure how IMET contributes to long-term desired outcomes. We have also reported deficiencies in how DOD defined performance measures for the 1206 program. In response to those reports, State and DOD concurred with our recommendations and described agency efforts to develop better performance measures. 
State’s evaluation policy, which is partly based on the Government Performance and Results Modernization Act of 2010, requires that all large programs, projects, and activities be evaluated at least once in their lifetime or every 5 years, whichever is less. According to State, the 2010 act strengthened the mandate to evaluate programs, requiring agencies to include a discussion of evaluations in their strategic plans and performance reports. State established its evaluation policy in February 2012 in part to comply with the requirements of this act. In addition, according to State, the policy supports State’s goal of connecting evaluation to its investments in diplomacy and development to ensure that they align with the agency’s overarching strategic goals and objectives. State’s evaluation guidance requires each bureau to evaluate two to four projects, programs, or activities over a 24-month period beginning with fiscal year 2012, depending on the size, scope, and complexity of the programs being evaluated and the availability of funding. State’s evaluation policy also requires all bureaus to complete a bureau evaluation plan and to update it annually. State requires the bureaus’ plans to align with evaluation policy guidance and to assist each bureau in assessing the extent to which its efforts contribute to achieving its intermediate objectives and, by extension, its longer-term goals. Program evaluations are individual systematic studies conducted periodically or on an ad hoc basis to assess how well a program is working. As a key component of effective program management, evaluations assess how well a program is working and help managers make informed decisions about current and future programming. An evaluation provides an overall assessment of whether a program works and identifies adjustments that may improve its results. 
Depending on their focus, evaluations may examine aspects of program operations (such as in a process evaluation) or factors in the program environment that may impede or contribute to its success. Program evaluations may systematically compare the effectiveness of alternative programs aimed at the same objective. Types of evaluation include process (or implementation), outcome, and impact evaluations, as well as cost-benefit and cost-effectiveness analyses. Performance measurement, by contrast, is the ongoing monitoring and reporting of program accomplishments, particularly progress toward pre-established goals. Performance measurement is typically conducted by program or agency officials and focuses on whether a program has achieved its objectives, expressed as measurable performance indicators. As indicators of progress toward goals, performance measures can inform overall program evaluations. Program evaluations typically examine a broader range of information on program performance and its context than is feasible to monitor on an ongoing basis. Both forms of assessment—program evaluation and performance measurement—aim to support resource allocation and other policy decisions to improve service delivery and program effectiveness. But performance measurement, because of its ongoing nature, can serve as an early warning system to management and as a vehicle for improving accountability to the public. As of February 2013, U.S. agencies had evaluated only one of their security assistance programs in Lebanon. The Bureau of International Narcotics and Law Enforcement Affairs contracted with an external organization to conduct an evaluation of the INCLE police training program for the ISF. The evaluation was conducted between November 2010 and May 2011. The bureau commissioned the evaluation to establish the relevance, effectiveness, impact, and sustainability of the program from 2008 through 2010.
The evaluation covered five training courses, and the evaluation team collected data by means of focus groups, semistructured interviews, structured observation, learning tests, field cluster research, and document review. The INCLE police training evaluation demonstrated the importance of conducting a program evaluation. According to the evaluation, the training was effective, with clear evidence that the ISF trainees had both learned and been able to retain the core knowledge taught in all five courses. The program also succeeded in reaching ISF personnel from throughout the country. The evaluation further determined that the training program made a relevant contribution to international donor assistance to the ISF. However, the evaluation concluded that the program did not reach its principal stated objective in the program period. The evaluation found no evidence to suggest that the performance of the ISF had systematically improved as a result of the training program. The evaluation also concluded that, although trainees learned and retained knowledge successfully, they were generally not able to apply the skills taught by the program in their daily duties. The evaluation identified this inability to apply skills as the key impediment to program effectiveness and attributed it to a flaw in the program design: the program did not sufficiently engage the ISF in the inception stage to clarify objectives and secure broad consensus on program goals. Moreover, the design of the training was not informed by a systematic assessment of the ISF’s training needs. The final evaluation report included five recommendations to the Bureau of International Narcotics and Law Enforcement Affairs, including a joint recommendation with the ISF, and two additional recommendations for the ISF. The recommendations included a number of subcomponents.
The report recommended that the bureau take the following actions: continue the training based on renewed program consensus; closely coordinate program transition with the ISF on the operational level; expand senior leadership training; strengthen internal capacity for program design and management; and, in addition to quantitative performance measures, design meaningful qualitative performance indicators in cooperation with the ISF to provide relevant and timely progress information throughout the program period. The evaluation report also outlined steps to transition the training program to full ISF control. Bureau officials stated that the bureau has adopted many of the report’s recommendations, including the transition of one course to full ISF control in July 2012, the creation of a training coordination process for the bureau and other donors to maximize synergies and avoid duplication of effort, and the creation of a strategic planning cell within the ISF to coordinate directly with the bureau and other donors. Although the Section 1206 program in Lebanon has not been evaluated, DOD—under the direction of the Under Secretary of Defense for Policy—is implementing a new assessment process for the Section 1206 program. The objectives of the assessment process are to measure implementation of Section 1206 programs to build partner capacity, assess the quality and timeliness of program implementation, measure the impact of programs, and estimate return on investment. DOD is implementing this new process in part to respond to our 2010 recommendation that DOD develop and implement specific plans to monitor, evaluate, and report routinely on Section 1206 project outcomes and their impact on U.S. strategic objectives. DOD conducted a pilot test of its assessment process of counterterrorism-oriented Section 1206 programs in the Philippines and stability operations-oriented Section 1206 programs in Georgia in 2012.
DOD also assessed Section 1206 programs in Djibouti, Tunisia, and Poland from March through June 2012. DOD officials stated in December 2012 that they were not able to assess the program in Lebanon because of the security situation, but they plan to include Lebanon as soon as the security situation permits. Although DOD has not evaluated the effectiveness of the 1206 program or any other security assistance programs in Lebanon in which it participates, it conducts various types of assessments of the LAF. For example, since 2010, the U.S. Central Command and the LAF have participated in reviews—known as joint capabilities reviews—to assess the progress of the LAF based on eight broad critical capabilities. The critical capabilities—for example, Land and Border Defense and Security—are linked to the broad strategic goals described in State’s Mission Strategic and Resource Plans for Lebanon (now Mission Resource Requests), according to DOD Central Command officials. The joint capabilities reviews include milestone dates from 2013 through 2015. The U.S. Central Command also conducts annual assessments linked to the Theater Campaign Plan for Lebanon. These assessments grade the LAF on desired outcomes and make recommendations for course corrections. According to the DOD Central Command officials, the assessments are organized by lines of effort, such as defeating violent extremist organizations and building partner capacity. These lines of effort are not linked to specific security assistance programs, however. Although bureau evaluation plans were due in May 2012, State bureaus responsible for security assistance programs were at varying stages of implementing State’s evaluation policy at the time of this report. 
For example, State’s Bureau of Political-Military Affairs, which is responsible for the Foreign Military Financing and IMET programs, created an evaluation plan for fiscal year 2013 that does not include an evaluation of the programs in Lebanon, according to a bureau official. State’s Bureau of Counterterrorism, which manages the Counterterrorism Financing program and—along with the Bureau of Diplomatic Security—is responsible for the Antiterrorism Assistance program, created an evaluation plan for fiscal year 2013, according to a Bureau of Counterterrorism official. This plan does not include an evaluation of the programs in Lebanon. The Bureau of Counterterrorism will instead evaluate the Antiterrorism Assistance programs for two other partner nations, Bangladesh and Morocco. According to a Bureau of Counterterrorism official, the bureau will consider the Antiterrorism Assistance program in Lebanon for the next tranche of evaluations beginning in fiscal year 2014. State’s Bureau of International Security and Nonproliferation, which implements the EXBS program, submitted its bureau evaluation plan on April 10, 2012. The plan, which covers fiscal years 2012 through 2015, does not include an evaluation of the EXBS program. The bureau does periodically assess the legal, regulatory, and institutional components of a country’s strategic trade control systems using a 419-point assessment methodology. The bureau conducted such an assessment for Lebanon in 2010, according to bureau officials. However, according to the bureau’s evaluation plan, the assessments do not assess enforcement capacity to the extent preferred and may not fully satisfy the intent of State’s evaluation policy. The bureau is working with the Office of U.S. Foreign Assistance Resources to determine how to integrate the assessment methodology into program evaluations that would meet State’s evaluation policy. The bureau may evaluate EXBS within the fiscal years 2013 to 2016 time frame.
Although State bureaus are at varying stages of developing their evaluation plans, State has developed department-wide guidance for standardizing the way it measures the effectiveness of its programs and has entered into contracts with five contractors for monitoring and evaluation services, according to State officials. State bureaus can use these contracts to evaluate their programs. We have previously reported deficiencies in performance measurement of State’s and DOD’s security assistance programs, deficiencies that may inhibit the agencies’ ability to conduct program evaluations. As we have reported, performance measurement of State and DOD security assistance programs exhibited several deficiencies, including a lack of specific, measurable, and outcome-oriented performance indicators. In 2008, we reported that State did not systematically assess the outcomes of the Antiterrorism Assistance program and, as a result, could not determine the effectiveness of this program. More recently, a 2012 State Office of Inspector General assessment of the antiterrorism programs in certain countries reported that the Bureau of Diplomatic Security’s Office of Antiterrorism Assistance could not determine the Antiterrorism Assistance program’s effectiveness in part because it had not developed specific, measurable, and outcome-oriented program objectives. In 2011, we reported that State’s and DOD’s ability to assess IMET’s effectiveness was limited by several weaknesses in program monitoring and evaluation, including the lack of a performance plan for IMET that explained how the program was expected to achieve its goals and how progress could be assessed through performance measures and targets. In 2010, we reported that DOD and State had incorporated little monitoring and evaluation into the Section 1206 program and had not consistently defined performance measures for Section 1206 projects.
According to State and DOD officials, assessing the impacts of security assistance programs is challenging, but they continue to seek improvements in performance measurement. For example, officials of the Bureaus of Counterterrorism and Diplomatic Security stated that it is an enormous task to come up with concrete, meaningful performance indicators for the Antiterrorism Assistance program, which addresses aviation security, investigations, and response techniques. In addition, officials at the U.S. Embassy in Beirut stated that State and DOD are struggling to develop specific, measurable performance indicators and still have a long way to go. DOD officials stated that developing metrics to measure the results of security assistance programs in Lebanon is difficult. The officials stated that some indicators may be subjective and difficult to quantify—for example, U.S. influence in building relationships or the willingness of the LAF to fight. Furthermore, they said that it is difficult to quantify what would have happened in the absence of U.S. assistance. Because the agencies do not have specific, measurable, and outcome-oriented performance indicators, U.S. officials have cited anecdotal evidence that they believe demonstrates the effectiveness of U.S. security assistance programs in Lebanon. For example, some officials have cited the LAF’s success in taking control of a Palestinian refugee camp from Al-Qaeda-inspired militants as an indication of the impact of the assistance. The United States provided ammunition and other supplies to assist the LAF during the 3-month engagement. As other indications of effectiveness, U.S. officials also mentioned the LAF’s actions in arresting members of a Shiite clan involved in kidnappings, the professionalism demonstrated by LAF Special Forces units in training exercises, and increases in drug seizures.
However, the use of only positive anecdotal evidence does not provide sufficient context and scope to help officials judge the effectiveness of these programs. State and DOD officials described agency efforts to improve performance measurement. For example, the Bureau of International Narcotics and Law Enforcement Affairs is shifting from reporting output measures, such as numbers of students trained, to outcome measures, according to bureau officials. The bureau plans to collect more accurate performance metrics that focus on measuring the impact of the INCLE police training program in Lebanon. The bureau also plans to consider such factors as the quality of the officers and the extent to which women are integrated into the ISF, according to bureau officials. Moreover, the Bureau of International Narcotics and Law Enforcement Affairs is working with the ISF to improve the performance indicators in the ISF’s strategic plan, according to officials at the U.S. Embassy in Beirut. The current plan includes some output measures, such as the number of traffic stops made by police. Embassy officials also stated that they are working with the LAF to develop a military cooperation plan that will include performance measures. In addition, according to DOD Central Command officials, efforts are under way to develop a new framework for the command’s annual assessments of the LAF that will include improved performance measures. The new framework will examine the effects of DOD efforts in Lebanon. Since 2007, U.S. agencies have allocated almost $1 billion in security assistance for Lebanon, supporting its efforts to build a stable, secure, and independent democracy following the withdrawal of Syrian forces and the Israeli-Hezbollah war. U.S. security assistance for Lebanon has presented certain risks because of the influence wielded by the militant group Hezbollah, which is now a member of the Lebanese government. Ensuring that U.S.
security assistance for Lebanon is effective in achieving U.S. strategic goals is now more important than ever—to help Lebanon resist the influence of Iran, which funds Hezbollah, and to withstand the potential spillover of conflict in Syria. Program evaluation and performance measurement are key management tools to ensure that U.S. security assistance is effective in achieving U.S. strategic goals. However, the U.S. government has evaluated the effectiveness of only one security assistance program in Lebanon, a program to which it allocated about 14 percent of the nearly $1 billion in security assistance allocations since 2007. Neither State nor DOD has completed plans or time frames to evaluate the remaining six ongoing U.S. security assistance programs in Lebanon. Without evaluations of all of its security assistance programs in Lebanon, the U.S. government cannot show that the programs have been effective in achieving their specific objectives or that they constitute the best mix of security assistance for Lebanon to support U.S. strategic goals for the country. Moreover, because State and DOD currently measure program effectiveness using performance indicators that are not specific, measurable, and outcome-oriented, performance measurement cannot facilitate evaluations. While State and DOD officials described general agency efforts to develop better performance indicators, they have not yet produced plans with specific time frames for completing their efforts. To enhance the U.S. government’s ability to determine whether security assistance programs in Lebanon have been effective in achieving their specific objectives and whether they constitute the best mix of security assistance to support U.S. strategic goals for the country, and to help State and DOD track progress toward established goals for Lebanon, we recommend that 1.
the Secretary of State, in consultation with the Secretary of Defense, complete plans to evaluate the effectiveness of security assistance programs in Lebanon, including milestone dates for implementing the plans; 2. the Secretary of State develop performance indicators for State’s security assistance programs for Lebanon that are specific, measurable, and outcome-oriented; and 3. the Secretary of Defense develop performance indicators for DOD’s security assistance programs for Lebanon that are specific, measurable, and outcome-oriented. We provided a draft of this report to State and DOD for comment. State and DOD provided written comments, which are reprinted in appendixes IV and V, respectively. State also provided technical comments, which we have incorporated into the report, as appropriate. In their comments, State and DOD generally concurred with the report’s findings and recommendations. In its written comments, State said that it recognized that a robust, coordinated, and targeted evaluation function is essential to its ability to measure and monitor program performance; make decisions for programmatic adjustments and changes; document program impact; identify best practices and lessons learned; help assess return on investment; provide inputs for policy, planning, and budget decisions; and assure accountability to the American people. State said that Lebanon is a good example of a security assistance program that involves complex program goals and therefore requires a carefully designed evaluation framework. State noted, however, that the qualitative versus quantitative nature of security assistance makes formal evaluation of such programs a unique challenge. State said that, short of formal program evaluation relative to Lebanon, it relies on feedback from other sources, such as periodic Joint Capabilities Reviews involving the LAF and the U.S. government; Mission Resource Requests from the U.S. Embassy in Beirut; and capability assessments from the U.S.
Central Command. We agree that these reviews may provide useful input but are limited because they are not linked to specific security assistance programs and do not substitute for formal evaluations, which State agreed would improve its ability to determine the effectiveness of U.S. security assistance. In its written comments, DOD stated that it would coordinate with State to evaluate the effectiveness of Lebanon’s security forces with clear metrics that can be evaluated and communicated back to Congress. DOD also agreed to improve assessment and evaluation metrics established to measure the results of assistance provided under Section 1206 and Section 1207 authority. DOD commented that the draft report did not consider current DOD evaluation programs that utilize key baseline documents, such as the Joint Capabilities Review. The draft report, however, acknowledged that DOD assesses the capabilities of the LAF in other types of assessments but stated that, while these are useful, they do not provide the program-specific evaluation that is needed to ensure that U.S. security assistance is effective in achieving U.S. strategic goals. DOD's 2012 implementation guidance for assessing Section 1206 programs also recognizes this limitation of other types of assessments. This guidance states that DOD must be able to demonstrate the return on investment that Section 1206 training and equipment provide to DOD and to the U.S. government, and that the assessments are a means for DOD leadership to determine which types of programs are more successful and which partner nations make demonstrable progress toward the objectives of the Section 1206 programs. DOD also requested that, in the future, GAO extend it the courtesy of a formal out-brief prior to releasing the draft report to the Congress. We acknowledge the importance of holding exit meetings with agency officials. 
As such, our protocol is to offer an exit meeting with agency officials after our data collection and analysis are complete. We requested an exit meeting with DOD on January 28, 2013. However, due to the limited availability of the designated DOD policy official with oversight of U.S. efforts in Lebanon and a subsequent change in the designated official, we were not able to hold the exit meeting until February 15, 2013. DOD was provided a draft of the report in advance of the exit meeting. DOD’s technical comments provided at this meeting were incorporated in the report, as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretaries of State and Defense. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7331 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. We assessed the extent to which the U.S. government, from fiscal years 2007 through 2012, (1) adjusted its strategic goals and security assistance programs in Lebanon, (2) funded assistance programs for Lebanese security forces, and (3) evaluated the effectiveness of security assistance programs in Lebanon. To assess the extent to which the U.S. government has adjusted its strategic goals and security assistance programs in Lebanon, we reviewed documents from the Department of State (State) and the Department of Defense (DOD) for fiscal years 2007 through 2012, including Mission Strategic and Resource Plans for Lebanon, congressional budget justifications, country plans, and other relevant documents. We also interviewed knowledgeable officials from State and DOD. 
At State headquarters in Washington, D.C., we spoke with officials from the Bureaus of Counterterrorism, Diplomatic Security, International Narcotics and Law Enforcement Affairs, International Security and Nonproliferation, and Political-Military Affairs, and from the Office of U.S. Foreign Assistance Resources. Within DOD, we met with officials from the Office of the Secretary of Defense and the Defense Security Cooperation Agency in the Washington, D.C., area, as well as officials of the U.S. Central Command and U.S. Special Operations Command in Tampa, Florida. In addition, we met with U.S. officials and officials of the Lebanese Armed Forces and the Lebanese Internal Security Forces at the U.S. Embassy in Beirut, Lebanon. To assess the extent to which the U.S. government funded security assistance programs for the Lebanese Armed Forces and the Lebanese Internal Security Forces from fiscal years 2007 through 2012, we analyzed budget and funding data from State and DOD. Recognizing that different agencies and bureaus may use slightly different accounting terms, we provided State with the definitions from GAO’s A Glossary of Terms Used in the Federal Budget Process (GAO-05-734SP) and requested that it provide the relevant data according to those definitions. State provided data on the status of allocations, obligations, unobligated balances, and disbursements as of September 30, 2012, for funding accounts that supported security assistance in Lebanon: International Narcotics Control and Law Enforcement; International Military Education and Training; Nonproliferation, Antiterrorism, Demining, and Related Programs; and the Section 1206 and Section 1207 authorities. State collected the data directly from each bureau if it was a State-implemented account. However, because Foreign Military Financing funds are budgeted and tracked in a different way than other foreign assistance accounts, DOD provided us with data on allocations and commitments. 
All data pertain to overt activities and are nominal numbers that have not been adjusted for inflation. We also discussed the types and amounts of assistance provided with various officials of the Lebanese Armed Forces and the Lebanese Internal Security Forces. To assess the reliability of the data provided, we requested and reviewed information from officials from each agency regarding the agency’s underlying financial data system or systems and the checks, controls, and reviews used to ensure the accuracy and reliability of the data provided. We determined that the data provided were sufficiently reliable for the purposes of this report. To assess the extent to which the U.S. government has evaluated the effectiveness of its security assistance programs in Lebanon, we reviewed relevant State and DOD documents, including an independent evaluation of the Bureau of International Narcotics and Law Enforcement Affairs’ police training program in Lebanon, DOD’s Section 1206 Assessment Handbook, and various State and DOD program and country assessments. We also examined State evaluation guidelines, Bureau of International Narcotics and Law Enforcement Affairs evaluation guidance, various bureau evaluation plans, and other documents. Furthermore, we reviewed relevant GAO reports, including those that discussed how agencies measure program performance. We also reviewed special GAO publications on performance measurement and evaluation. In addition, we interviewed officials from State and DOD in Washington, D.C., and at the U.S. Embassy in Beirut, Lebanon, and DOD officials at the U.S. Central Command and U.S. Special Operations Command in Tampa, Florida. We conducted this performance audit from June 2012 to March 2013 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides additional information on Department of State (State) and Department of Defense (DOD) programs used to provide security assistance to Lebanon from fiscal years 2007 through 2012. State and DOD utilized eight programs to provide more than $925 million in security assistance to Lebanon from fiscal years 2007 through 2012. The majority of security assistance (95 percent) was provided through three programs: Foreign Military Financing, International Narcotics Control and Law Enforcement (INCLE), and the Section 1206 authority. Specifically, this appendix provides additional information on the Foreign Military Financing, INCLE, and Section 1206 programs, such as selected equipment and training provided and the status of funds, including the allocation, obligation, and commitment or disbursement of funds; it also presents the status of funds for the other five programs. From fiscal years 2007 through 2012, State allocated a total of about $641 million in Foreign Military Financing for Lebanon. DOD reported that for fiscal years 2007 through 2012, the majority of allocated funds had been committed (see fig. 5). For Lebanon, Foreign Military Financing has provided basic equipment such as tactical radios, ammunition, rifles, helmets, body armor, and trucks. Lebanon has also received night vision goggles, missiles, and helicopters. From fiscal years 2007 through 2012, State allocated a total of $130 million in INCLE funds for Lebanon. While most of the allocated funds have been obligated, about 40 percent of total allocated funds from fiscal years 2007 through 2012 have been disbursed (see fig. 6). 
According to State, fiscal year 2007 marked the first allocation of INCLE funds for Lebanon. In Lebanon, the INCLE program has funded the construction of a police station building and the purchase of riot helmets and batons, pistols, and police vehicles, among other equipment. In addition, the INCLE program has funded training and technical assistance to the Lebanese Internal Security Forces, including, for example, courses in community policing. From fiscal years 2007 through 2012, DOD allocated a total of $111 million through the Section 1206 program for Lebanon (see fig. 7). State reported that all allocated funds from fiscal years 2007 through 2012 have been obligated and disbursed. For Lebanon, Section 1206 funding has provided vehicle spare parts, ammunition, and other basic supplies to the Lebanese Armed Forces (LAF). In particular, equipment provided under Section 1206 was used to restock the LAF arsenal with basic ammunition after the 2007 siege at Nahr al-Bared Palestinian refugee camp and to begin to build the LAF’s first secure communications system. State and DOD jointly administer Section 1206 assistance. Funded from the DOD operations and maintenance accounts, Section 1206 funds remain available for 1 year for obligation. Once the period of availability for new obligations expires, the funds are available for an additional 5 years to liquidate the obligations. In addition to the Foreign Military Financing, INCLE, and Section 1206 programs, State and DOD provided security assistance to Lebanon through five other programs: Antiterrorism Assistance, International Military Education and Training, Section 1207 Train and Equip, Export Control and Related Border Security, and Counterterrorism Financing. See tables 2 through 6 for details on allocation, obligation, and disbursement of funds by these programs for security assistance for Lebanon from fiscal years 2007 through 2012. 
Table 7 presents selected security equipment and services provided by the United States to the Lebanese Armed Forces (LAF) through the Foreign Military Financing and Section 1206 programs from fiscal years 2007 through 2012. According to data provided by the Defense Security Cooperation Agency, 78 cases were approved for Lebanon during this time frame and ranged in value from $18,000 to $38.2 million. The defense articles or services are listed in descending order based on the total estimated case value as reported by the Defense Security Cooperation Agency. Table 8 presents selected security equipment or funding provided by countries other than the United States to the LAF, as reported by the LAF. Countries are listed in descending order based on the estimated value of security assistance provided. In addition to the contact named above, Jeff Phillips (Assistant Director); Jenna Beveridge; Teakoe S. Coleman; David Dayton; and La Verne Tharpes made key contributions to this report. Martin de Alteriis, Grace Lui, and Jeremy Sebest provided additional technical assistance.
Following Syria’s withdrawal from Lebanon in 2005 and war between Israel and Hezbollah in 2006, U.S. agencies increased their allocations of security assistance for Lebanon from $3 million in 2005 to about $28 million in 2006. This assistance included training and equipment funded and implemented by State or DOD for the Lebanese Armed Forces and Internal Security Forces of Lebanon. However, questions remain regarding the effectiveness of security assistance as a tool of U.S. policy in Lebanon, including concerns about the influence of foreign actors, primarily Syria and Iran, and extremist militant groups operating in Lebanon. GAO was asked to review U.S. security assistance to Lebanon. GAO’s review, covering fiscal years 2007 through 2012, assessed the extent to which the U.S. government (1) adjusted its strategic goals and security assistance programs in Lebanon, (2) funded assistance programs for Lebanese security forces, and (3) evaluated the effectiveness of security assistance programs in Lebanon. GAO reviewed budgetary data and planning documents and interviewed U.S. and Lebanese government officials in Washington, D.C.; Tampa, Florida; and Beirut, Lebanon. The United States has kept strategic goals for Lebanon constant since 2007 and adjusted security assistance in response to political and security conditions. Since 2007, U.S. strategic goals for Lebanon have been to support the nation as a stable, secure, and independent democracy. According to U.S. officials, U.S. policy priorities include supporting the Government of Lebanon in establishing stability and security against internal threats from militant extremists and the influence of Iran and Syria. U.S. 
programs to help achieve these priorities include Foreign Military Financing, International Military Education and Training (IMET), International Narcotics Control and Law Enforcement (INCLE), Antiterrorism Assistance, Counterterrorism Financing, Export Control and Related Border Security, and Section 1206 and 1207 authorities. While strategic goals have not changed, program implementation has changed to meet conditions on the ground, according to U.S. officials. For example, the Department of State (State) delayed committing Foreign Military Financing funds to Lebanon for 3 months in 2010, following an exchange of fire between the Lebanese Armed Forces and Israeli forces. U.S. agencies allocated over $925 million for security assistance programs for Lebanon from fiscal years 2007 through 2012; State has disbursed and the Department of Defense (DOD) has committed the majority of the funds. To date, State has evaluated only one of its security assistance programs for Lebanon, the INCLE program; neither State nor DOD has completed plans or established time frames to evaluate the other programs. State's evaluation policy requires that certain programs be evaluated periodically. Without such evaluations, State and DOD have little objective evidence to show that the programs have been effective or what the proper mix of programs should be. Evaluations can be facilitated through appropriate performance measurement. However, GAO and other agencies have previously reported deficiencies in how agencies measure program performance. For example, GAO found in 2011 that the IMET program evaluation efforts had few of the elements commonly accepted as appropriate for measuring performance. State and DOD are undertaking efforts to develop better performance indicators. State and DOD should complete plans with milestone dates to evaluate security assistance programs in Lebanon and develop better performance indicators to facilitate evaluation. State and DOD concurred.
Most federal student financial aid programs are authorized under the Higher Education Act of 1965. Federal student financial assistance exceeded $30 billion in academic year 1993-94, and most assistance came from two programs—the Pell grant and Federal Family Education Loan (FFEL) programs. The Pell grant program, which primarily targets low-income students, accounted for about $5.7 billion, while the FFEL program comprised over $21 billion of the total federal aid. Maximum annual awards to students in each program are capped: In 1993-94, the maximum Pell grant was $2,300, and the maximum subsidized Stafford loan—the largest of the FFEL loan programs—ranged from $2,625 for freshmen to $5,500 for seniors. The Department of Education administers the Pell grant program in accordance with eligibility criteria and authorized maximum award amounts set by the Congress. In addition, the Congress effectively limits actual maximum Pell award amounts each year through the appropriations process. Actual maximum awards have been less than the authorized levels each year since 1980. For example, in 1993-94, the authorized maximum Pell grant was $3,700, but the appropriation for the program limited the actual maximum award to $2,300. The composition of student financial assistance has changed dramatically over the past two decades. Although total federal aid has increased since the late 1970s, loan aid has increased far faster than grant aid. From 1977 to 1980, grant aid exceeded loan aid, but since 1985 loan aid has been roughly double grant aid (see fig. 1). Budgetary concerns and program changes have limited grant aid for low-income students. As the deficit rose during the 1980s, policymakers’ awareness of budgetary trade-offs and the need to leverage resources grew. Loans are a less expensive form of aid for federal budgetary purposes than grants because the budget accounts only for the cost of interest subsidies and default payments. 
Thus, for a given federal expenditure, the government can offer more aid if it is provided as loans. In addition, the 1978 Middle Income Student Assistance Act extended eligibility for Pell grants to higher income students; however, appropriations did not allow for commensurate increases in program dollars. Consequently, more students now receive Pell grants, but the actual maximum award has remained approximately constant in nominal dollars since 1986. Cost pressures for low-income families have increased since the late 1970s, as the average cost of 4-year colleges and universities has increased faster than the inflation rate. Between 1978 and 1992, the average tuition, room, and board charge at 4-year public colleges and universities rose by 26 percent in real terms. This had two distinct effects. First, college expenses at the average public university absorbed 11 percent of median family income in 1978 and 14 percent in 1992. For families at the 20th income percentile, this charge increased even more, from 22 to 31 percent of income. Second, the actual maximum Pell grant, which covered over half the costs at the average public 4-year school in 1985, now covers less than 40 percent. Low-income students are underrepresented among college students. Low-income students enroll in college at lower rates than high-income students, although enrollment rates have been rising for all income groups (see fig. 2). We found no data on students’ degree completion by income group. However, minorities are overrepresented among low-income families, so their rates serve as a reasonable proxy for low-income students’ graduation rates. Sample data show that minority students are less likely to stay in school and graduate than white students. For example, in one sample of students entering 4-year colleges in 1983, 1984, or 1985, 56 percent of white students completed degrees within 6 years, but only 41 percent of Hispanic students and 32 percent of African American students did so. 
Policymakers have raised the possibility that reduced grant aid, relative to the soaring costs of a college education, has adversely affected graduation rates for students at the low end of the income scale. The composition of financial aid packages and the timing of particular aid components influence education outcomes. Our results indicated that, for low-income students, grant aid was effective in reducing dropouts, but loan aid was not. In addition, grant aid for low-income students was most effective in the first year, with efficacy decreasing in the second and third years. The results of the university frontloading program strengthened our confidence in this finding. Students who received frontloaded grants had a lower dropout probability than other comparable students. Results of our statistical work showed the following.
Grants versus loans: Grants significantly reduced dropout probabilities for low-income students. In the High School and Beyond database sample, an additional $1,000 in grant aid for a low-income student reduced the dropout probability by 14 percent for the award year. Loans did not have a statistically significant effect for this group—a commensurate increase in loans did not significantly affect the student’s probability of dropping out.
First-year students: Grants were most effective in reducing low-income students’ dropout probabilities in the first year. For these students, an additional $1,000 grant in the first year reduced the dropout probability by 23 percent. In the second year, the additional grant reduced the dropout probability by 8 percent while, in the third year, it had no statistically discernible effect.
Frontloading grants: The university’s program for high-need freshmen, which included frontloading grants, had a significant effect on reducing dropouts. Program participants were 39 percent less likely to drop out in a year than nonparticipants. 
For the lowest income students, those below the poverty line, the program reduced the dropout probability by 64 percent. These results, with certain qualifications, indicate that frontloading grants for college students, especially low-income students, could improve dropout rates. The results pertain only to 4-year college students and thus have no implication for students at 2-year schools. Also, the frontloading experiment took place at a university that combined it with other programs to reduce dropouts, and the results are not generalizable beyond that school. However, we believe the sum total of our results shows that frontloading grants holds promise. (For detailed information on the analyses that led to these results, see app. II.) Comments from financial aid directors and students we interviewed helped us interpret the statistical results. One opinion arising in the directors’ panels, for example, was that some low-income students are reluctant to borrow, especially during their first year or two in college. This observation is consistent with our statistical findings about grants being more effective than loans in increasing the likelihood that first-year, low-income students will stay in school. The directors we spoke with were generally positive about the potential benefits of frontloading grants, several saying it could help low-income students stay in college by giving them time to become acclimated to college and reducing financial pressures when students are most vulnerable to dropping out. One concern about frontloading was that students might perceive it as a bait-and-switch policy because it would involve reducing grant awards in later years. In the student interviews, we sometimes heard that borrowing was initially difficult for students and that grant aid made the difference in their being able to start college. 
Another theme among the students was that year-to-year consistency was important in their aid packaging, so that they could plan ahead without disruptions, and that frontloading seemed contrary to the principle of consistency. (For a discussion of the full range of comments from financial aid directors and students we spoke with, see app. III.) We discussed with Department officials the value and feasibility of their conducting a pilot frontloading program. They thought frontloading held promise and expressed an interest in such a pilot program. They told us that they might have authority under current law (20 U.S.C. § 1094a(d) (1988 and Supp. IV 1992)) to conduct a pilot. This law authorizes the Department to designate institutions that volunteer to participate as “experimental sites.” These institutions help evaluate the impact and effectiveness of proposed regulations or new management initiatives. The Secretary of Education may exempt participating institutions from legal requirements as necessary to conduct the experiments. The officials said that they had not yet determined whether this authority would permit a pilot frontloading program and that they might need specific authority from the Congress to conduct a pilot. We defer to the Department on whether it currently has the authority to do so or needs additional authority. Such a pilot program, moreover, would need to address several implementation issues. The potential benefits of frontloading could be lost if institutional aid policies were changed to offset the federal change. Schools would need to be encouraged to ensure that overall grant aid, meaning federal and institutional aid combined, was frontloaded. Also, because eligibility for federal financial aid is based in part on annual income and other family resources that change over time, the amount of aid a student qualifies for changes each year. 
Frontloading would entail estimating a 4-year package, requiring methods not currently employed in aid determination. It would also involve adjusting loan limits for third- and fourth-year students at pilot schools and developing aid award rules for students who transfer between pilot and nonpilot schools. In evaluating a pilot program, changes in dropout rates would have to be interpreted carefully. A policy of frontloading grants might attract students to college who would not have attended otherwise. Although some of these students would graduate, on the whole their dropout rate could be higher than that of the current student population. Frontloading might reduce the number of dropouts among students who now attend college, but high dropout rates among this new college population could leave the overall dropout rate unchanged or higher. Our statistical analysis indicates that loans and grants are not equal substitutes in terms of affecting education outcomes for low-income students. Aid packages with relatively high grant levels may improve low-income students’ access to higher education more than packages that rely more on loans. In addition, our analysis indicates that the earlier low-income students receive grant assistance, the more likely they are to stay in college. Departure from the conventional approach to disbursing student financial aid—relatively proportionate amounts each year—could further improve low-income students’ dropout rates. Given that the dropout rate is highest in students’ first 2 years, frontloading grants would appear to provide low-income students with the most effective means of financial support when they are most likely to benefit from it. Restructuring federal grant programs to feature frontloading could improve low-income students’ dropout rates without changing any student’s overall 4-year allocation of grants and loans. 
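The budget-neutral restructuring described above can be sketched in a short example. This is an illustration only, not the Department's or the university's method: the declining 40/30/20/10 grant split and the assumption that each year's total aid is unchanged are hypothetical choices made for the sketch.

```python
def frontload(grants, loans):
    """Shift a 4-year aid package so grants are concentrated in early years
    and loans in later years, holding each 4-year total constant.
    The 40/30/20/10 grant split across years is a hypothetical choice."""
    total_grants = sum(grants)
    # Assume each year's total aid (grants + loans) is unchanged, so the
    # 4-year loan total is preserved automatically once the grant total is.
    need = [g + l for g, l in zip(grants, loans)]
    new_grants = [share * total_grants for share in (0.40, 0.30, 0.20, 0.10)]
    new_loans = [n - g for n, g in zip(need, new_grants)]
    return new_grants, new_loans

# A conventional package: equal grants and loans each year.
g, l = frontload([1000] * 4, [2000] * 4)
assert abs(sum(g) - 4000) < 1e-9 and abs(sum(l) - 8000) < 1e-9  # totals unchanged
assert g[0] > g[3] and l[0] < l[3]  # grants frontloaded, loans backloaded
```

A real pilot would also have to recompute eligibility year by year as family resources change, which this sketch deliberately ignores.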
Given our statistical results, the mixed views of aid directors and students we spoke with, and the limited experience with frontloading, we believe the Department needs to shed light on the matter by undertaking a pilot program. We recommend that the Department of Education conduct a pilot program of frontloading federal grants at a limited number of 4-year schools to evaluate the impact of frontloading on reducing dropouts among low-income college students. If, upon review, the Department concludes that it lacks authority to conduct this pilot, we recommend that the Department seek legislation from the Congress to authorize the pilot. As agreed, we did not obtain written comments on the report from the Department of Education, but we discussed our findings with program officials. The officials generally agreed with our results and made suggestions, which we incorporated in the report as appropriate. We conducted our review between March 1993 and December 1994 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of Education, congressional committees, and other interested parties. Please call Cornelia M. Blanchette or me on (202) 512-7014 if you or your staff have any questions about this report. Other GAO contacts and contributors are listed in appendix IV. To examine the effects of grants and loans on the probability of students’ staying in college or dropping out, we analyzed two databases: (1) High School and Beyond, a national survey of students begun in 1980, and (2) financial aid data from a large public university. The two databases included different information, but they both contained year-by-year totals for grants and loans each student received, tuition the student paid, and background information on the student. In addition, we could determine the number of years a student remained in school and if and when that student dropped out. 
We used duration analysis to determine the factors affecting the probability of staying in college or dropping out. To help understand the reasoning and decision-making behind our statistical results, we conducted discussion panels with financial aid directors and interviews with students at selected schools. Our analysis covered only students in 4-year undergraduate programs; we did not include community colleges, proprietary schools, or graduate or professional programs. The High School and Beyond survey was first conducted in 1980. Graduating high school seniors were asked questions about family background, educational attainment, and future plans. To obtain information on activities since high school, these same students were then reinterviewed in 1982, 1984, and 1986. This provided longitudinal information on students in the initial sample. We selected for analysis those students who began college full time at a 4-year school immediately after high school. We followed these students through their college years and noted whether they continued from year to year or dropped out. Our sample consisted of 3,652 students. The High School and Beyond survey oversampled African American and Hispanic students. This oversampling resulted in sufficient observations on these populations for meaningful results to be obtained for them. We weighted our sample data so that the proportions of African Americans, Hispanics, whites, and others would match population proportions. Except when we analyzed data separately by race, we reported weighted means and regression results in all cases. We analyzed financial aid records from a large public university that recently implemented a new financial aid packaging strategy, which included frontloading grant money for certain first-year students. The university designated a group of “high-need” first-year students, who required additional support because they came from economically or academically disadvantaged backgrounds or both. 
After these students had received Pell grants, Supplemental Education Opportunity Grants (SEOG), and a small Perkins loan, they received university grants to cover remaining need. In the second and later years, their financial aid packages were weighted with more loans. Some of the high-need freshmen were less academically prepared than the university’s average enrollee, officials at the university said, but we could not identify these students separately in the data. Therefore, to measure differences in student academic readiness for college, our analysis included controls for a student’s score on the Scholastic Aptitude Test. In addition, program participants received additional academic and administrative support, such as precollege course work in their first year and structured advice on course schedules and financial aid options. We thus do not attribute program outcomes solely to frontloading. The university gave us 5 years of data on a cohort of students that began as full-time, first-year students under the new system in the 1988-89 academic year. We constructed records on the students for the 5 years, noting the type and amount of aid received each year and how long they remained in school. We also had student background data that remained constant over time. The data provided by the university did not indicate whether students who left before graduation had transferred to another school. To identify transfer students, we matched student records with Pell grant and Stafford loan data supplied by the Department of Education. For students who received Pell grants or Stafford loans within three semesters of leaving the university, we recoded the dependent variable so that we would not count them as dropouts. For our analysis, we selected students whose family incomes in their senior year of high school were below 300 percent of the poverty line. 
We did this to ensure that students in and out of the high-need program were somewhat comparable, although those in the high-need group were still, on average, from poorer families. Our duration analysis examined the probability of a student’s dropping out in a particular year, given that he or she attended school up to the beginning of that year. Duration analysis, also known as hazard analysis, is typically used to estimate factors that result in someone’s remaining in a particular state (for example, “unemployed” or “in college”) for a short or long period of time. As some students leave the database by dropping out, the sample becomes smaller each period. For example, in our data, the first-year dropout probability was computed for all students in the sample, but the third-year dropout probability was computed only for those who completed the first 2 years in college. Our analysis was a modified hazard model. A hazard model treats the length of time as the dependent variable. In our analysis, we would have regressed the number of years in school on the explanatory variables we chose. However, because we included some independent variables whose values changed over time, specifically financial aid levels received each year, this type of hazard model would have been complicated to construct. Instead, we set the data up so that each person-year was an observation. A student in college for 1 year, who then dropped out, appeared in the database only once; someone in school for 4 years appeared as four separate observations. The dependent variable in our regressions was whether or not the student dropped out in a given year. We used a logit model to analyze the resulting database. The independent variables of interest were grants and loans. To see whether the impact of grants and loans varied by certain factors, we analyzed subsamples of the database based on income group, race, and year. We judgmentally selected 12 colleges and universities in three areas. 
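The person-year restructuring described above, in which each student contributes one observation per year enrolled and the dependent variable equals 1 only in the year the student dropped out, can be sketched as follows. The toy data and field names are invented for illustration.

```python
# Person-year restructuring sketch for a discrete-time hazard model.
# Each student record lists years enrolled, whether they ultimately
# dropped out, and a time-varying covariate (annual grant aid).
students = [
    {"id": 1, "years_enrolled": 1, "dropped_out": True,
     "grants": [1.2]},                      # thousands of dollars/year
    {"id": 2, "years_enrolled": 4, "dropped_out": False,
     "grants": [2.0, 1.5, 1.5, 1.0]},
]

person_years = []
for s in students:
    for year in range(1, s["years_enrolled"] + 1):
        person_years.append({
            "id": s["id"],
            "year": year,
            "grant": s["grants"][year - 1],
            # Dependent variable: 1 only in the year the student left.
            "dropout": int(s["dropped_out"]
                           and year == s["years_enrolled"]),
        })

# Student 1 contributes one observation; student 2 contributes four,
# all with dropout = 0.
print(len(person_years))
# → 5
```

Regressing `dropout` on the covariates over these stacked rows with any logistic regression routine yields the modified hazard model described in the text, and time-varying variables such as annual aid amounts enter naturally.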
We chose six public and six private schools, and we selected schools that varied by such factors as size, tuition, and urbanicity. The schools we selected are shown in table I.1. [Table I.1, which lists the 12 schools with their locations and public or private status, is not reproduced here; the schools were located in the Washington, D.C., Philadelphia, and Seattle areas and included the University of Pennsylvania and Pacific Lutheran University.] To allow interaction between the financial aid directors, we used discussion panels. For the students, we thought a discussion or focus group might inhibit some from telling us about their financial situations, so we interviewed them individually. We did not project from financial aid director or student responses because we knew our sample was not representative. Instead, we used the comments to illustrate some of the thinking that might have led to our quantitative results. We held three discussion panels with financial aid directors, bringing together the four directors in each region. We asked them to describe changes in federal aid policy that they had observed over time and how these changes had affected institutional or other patterns of financial aid. We also asked them whether the changes had affected student decisions to remain in college until graduation. Finally, we asked their opinions on whether grants are more or less effective than loans and their thoughts on how students made the trade-off between a small grant and a larger loan and on frontloading grants. The colleges and universities identified students for us to interview. We asked them to select both current students and those who had dropped out, but none of the schools could give us names and addresses of dropouts. We did, however, interview some students who had taken time off and returned to school as well as some transfer students.
In the interviews, we asked students to describe the role of grants and loans in financing their education and in year-to-year decisions to stay in school. We asked how they would describe the trade-off between grants and loans: that is, in general, did they prefer a small grant or a larger loan? We tried to determine whether debt burden is a major concern and whether it had caused them to reconsider staying in school, which major to choose, or whether to go on to graduate or professional school. We also asked them about work, either as work-study or an outside job, including the effect on their studies of spending time at work. Grants significantly reduced dropout probabilities for most groups of students, including low-income, first-year, and minority students, according to our analysis of the High School and Beyond database. On the other hand, loans reduced dropout probabilities overall and for middle-income students but not for others. The database contained information on the tuition students paid and their grant and loan awards. It also contained a wide variety of student background information, including family characteristics and academic achievement, which we included as controls in our regressions. The variable definitions, as well as means and standard errors for first-year observations—that is, for the initial sample before anyone dropped out—appear in table II.1. [Table II.1 is not reproduced here. Its categorical variables (each equal to 1 if the condition is true) included a family income grouping in 1993 dollars: lowest income (below $12,300); second lowest ($12,300-$21,000); third lowest ($21,000-$28,100); middle ($28,100-$35,100); third highest ($35,100-$43,800); second highest ($43,800-$66,600); and highest (over $66,600). Other indicators recorded whether the student went to high school in an urban area and the region of the United States in which the student attended high school.]
Before conducting our regression analysis, we examined crosstabulations of certain variables with dropouts, the outcome variable. Low-income students were more likely to drop out of college than middle- and high-income students. In addition, in our sample, second-year students were more likely than first- or third-year students to drop out. We also examined those who dropped out in the first year to determine their income group; low-income students were again the most likely to drop out. The sample dropout probabilities are shown in table II.2. Grants reduced dropout probabilities more than equal-sized loans in the baseline model, although both grants and loans had a statistically significant effect (see table II.3). Because “dropout” is the dependent variable, the negative coefficient for grants and loans means that an increase in the value of either variable led to a reduced probability of dropping out. Results for other variables are as expected: students with good high school grades and test scores, with parents who went to college, and from higher income families were the least likely to drop out of college. Two other dollar variables had significant effects on dropouts. First, tuition was negatively associated with the probability of dropping out. Holding all else constant, higher tuition might be expected to lead to a greater likelihood of dropping out. However, we did not have a measure of the quality of the college the student was attending; high tuition might, in fact, have been a proxy for a high-quality college. If high-tuition colleges enrolled relatively better quality students who would be less likely than average to drop out, then tuition would be negatively correlated with the probability of a student’s dropping out. Second, cumulative loans had a positive effect on dropping out.
This result indicates that although loans in the current year helped students stay in school, accumulation of loans over several years may have led students to drop out. The year 2 coefficient was positive and the year 3 coefficient negative, indicating that dropouts were more likely in the second year than the first but least likely of all in the third year. Because we used a logit regression, changes in independent variables could not be directly interpreted as changes in the probability of dropping out. Instead, we made a set of assumptions about a student, computed the probability of that student’s dropping out, and then changed the assumptions, one at a time, to examine the effects of individual variables. We first took the baseline results and, holding other variables constant, changed the amount of grants and loans by $1,000 each. We then examined the effects of differences in other variables, such as income and race. Under our initial assumptions, a student had a 9.9 percent probability of dropping out of college in a given year. If the student received $1,000 in additional grants in the year, the dropout probability fell to 9.0 percent, or by 9 percent (see fig. II.1). With an additional $1,000 loan, on the other hand, the probability fell to 9.4 percent, a 4-percent decline. Differences in values of other variables significantly affected dropout probabilities as well. For example, a student from the lowest of the seven income groups had a 57 percent greater probability of dropping out than a middle-income student; one from the highest income group had a 50 percent lower probability. All results are based on coefficients statistically different from zero at the 5-percent level. To further analyze the impacts of grants and loans on different types of students, we performed regressions for subsamples of our sample.
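The conversion from logit coefficients to probability changes described above works as follows. The per-$1,000 coefficients below are illustrative values chosen only to roughly reproduce the reported 9.9, 9.0, and 9.4 percent figures; they are not the actual table II.3 estimates.

```python
# Sketch: turning logit coefficients into dropout-probability changes
# for a representative student. Coefficients are illustrative, not
# the report's actual estimates.
import math

def dropout_prob(logit):
    """Convert log-odds to a probability."""
    return 1 / (1 + math.exp(-logit))

baseline_logit = math.log(0.099 / (1 - 0.099))   # 9.9% baseline student
grant_coef = -0.105   # illustrative: log-odds change per $1,000 grant
loan_coef = -0.058    # illustrative: log-odds change per $1,000 loan

p0 = dropout_prob(baseline_logit)
p_grant = dropout_prob(baseline_logit + grant_coef)
p_loan = dropout_prob(baseline_logit + loan_coef)

print(f"baseline {p0:.3f}, +$1,000 grant {p_grant:.3f}, "
      f"+$1,000 loan {p_loan:.3f}")
```

Because the logit is nonlinear, the same coefficient implies a different probability change at a different baseline, which is why the report fixes a set of assumptions and varies one factor at a time.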
We set the regressions up in the same way as the baseline regressions except for including only certain observations in each regression: for example, only students from the two lowest income groups, only first-year observations, or only African American or Hispanic students. We report first the coefficient results for selected variables and then probability results. For low-income, minority, and first-year students, grants were more effective than loans in reducing dropout probabilities, although the differences between the effects of the two types of aid varied across groups (see table II.4). For low-income students, grants reduced the dropout probability the most in the first year, and they were decreasingly effective in the second and third years. Loans never significantly reduced the dropout probability for low-income students and actually increased the probability in the third year. The regressions include all other relevant variables from the baseline regression, but results for these variables are omitted from this table. (In these tables, low income covers income categories 1 and 2, middle income categories 3 through 5, and high income categories 6 and 7; some variables were omitted from a regression when their value was zero for all observations.) For low-income students, grants decreased the probability of dropping out, while loans did not (see table II.5). These calculations are based on the regression results shown in table II.4. We analyzed financial aid data provided to us by a public university to examine (1) the relationship between different types of financial aid and whether students remained in college from year to year and (2) the effectiveness of a program involving an alternative form of financial aid packaging.
Beginning in the 1988-89 academic year, this university embarked on a program of giving some of its high-need freshmen aid consisting entirely, or almost entirely, of grants and having those students take on loans only in later years. In addition, these students received additional academic and administrative support. We refer to this group as the high-need group and the special aid program as the high-need program. Our sample consisted of 1,414 first-year students in 1988-89 whose families had incomes below 300 percent of the poverty line, and we followed this cohort for 4 years. We restricted the sample so that the students in and out of the high-need program would be somewhat comparable. We analyzed the effect of grants, loans, and participation in the program, controlling for income, Scholastic Aptitude Test (SAT) score, and other factors. Variable definitions, means, and standard deviations for first-year students are shown in table II.6. [Table II.6 is not reproduced here. Continuous variables were measured in thousands of 1993 dollars. Categorical variables (each equal to 1 if the condition is true) included participation in the university’s high-need program; a family income grouping (lowest income, below the poverty level; middle income, between the poverty level and twice the poverty level; highest income, between two and three times the poverty level); and an SAT score grouping (lowest, below 800 combined math and verbal; middle, 800 to 1190; highest, 1200 or higher).] In our sample, students in the high-need program received more grants than those not in the program during all 4 years of college, and their loan amounts were generally low in the first year (see figs. II.2 and II.3). These students were more likely to come from relatively low-income families compared with those not in the program (see table II.7), so the higher overall amount of grants is not surprising.
The differences between students in and out of the program hold even though our entire sample was restricted to students from families with income below three times the poverty line. Participation in the high-need program reduced the probability of dropping out, even controlling for financial aid (see table II.8). The coefficient result for the high-need program means that a student in this program is 39 percent less likely to drop out than one not in the program if other factors are held constant. As in the High School and Beyond data, grants were more effective in reducing the dropout probability than loans. The coefficient on grants translates into a 25-percent reduction in the probability of dropping out for a $1,000 increase in grants. Loans did not have a statistically significant effect. We also analyzed subsamples of the data separately. Participation in the high-need program was a significant factor for second-year students and the lowest and highest income groups (see table II.9). For the lowest income students, a program participant was 64 percent less likely to drop out than a nonparticipant. Grants were more effective than loans in reducing the probability of dropping out for students in all years, except for the fourth year when neither had a significant effect, and in all income groups. Loans did not significantly reduce dropout probabilities for any of these groups. (In table II.9, some variables were omitted from a regression because their value was zero for all observations.) Some of the financial aid directors in our discussion panels told us that reductions in federal grants have required students to borrow and work more while in college. They said that some low-income students are reluctant to borrow, especially during their first year or two in college. Some low-income students we talked to told us that borrowing was initially difficult.
Several of them said that with less grant aid, they would have either not attended or chosen a lower cost college. The directors we spoke to were generally positive about potential benefits of frontloading grants, while students tended to emphasize the importance of year-to-year consistency in their aid packages. The federal financial aid shift from grants to loans has put cost pressures on students, according to financial aid directors we spoke to. Students have generally borrowed more and, in some cases, worked more than in the past to meet their educational expenses, according to some of the directors, but most of them did not believe dropout rates had increased as a result of the changes. However, some told us this was only because of institutional aid increases, while one noted that the effects of very recent increases in borrowing have not yet been felt. Individual student attitudes toward debt vary, many directors said, and these attitudes can change over students’ years in college. Some directors expressed concern about increases in students’ working and the effects on their studies. The directors generally reacted positively to the idea of frontloading grants, and they described some potential advantages as well as pitfalls of such a program. Many directors noted that in the last 10 to 15 years, federal funding for postsecondary student financial aid at their institutions, especially grant aid, has been level in nominal terms and has actually decreased in constant dollars. This decrease has put pressure on educational institutions, states, and students to fill the gap between the federal financial aid available and rising costs. Schools have responded by substantially increasing their institutional grant budgets, some directors said. In addition, some schools have redefined “high-need students,” recognizing that they can adequately serve only those with the very highest need.
Some directors expressed concern about (1) reaching the limits of their abilities to draw upon endowment and other outside resources to bolster their financial aid budgets or (2) the impact this may have on diversity goals for their student body. Several directors said that some states have responded by developing strong state financing programs. Because most state treasuries are not in a position to fill the gap created by declining federal dollars, discussion in the financial aid community has explored new and innovative financing strategies for public institutions. One example is a high-tuition/high-financial-aid model, under which tuitions are raised and some of the increased revenue is used to aid students who could no longer afford to attend. Students tend to borrow more and, at some schools, work more now than in the past, some directors said. One director observed that students are borrowing more to make up the gap between what his school expects them to save during the summer and what they are able to save, based on their earnings. Several directors also noted a large upturn in borrowing in the last year or two. Some directors believe students now hold jobs during the school year more than they used to, but others said that students work at about the same rate as in the past. In general, most directors said that changes in federal financial aid have not greatly affected dropout rates for the student population as a whole, but some directors were concerned about dropout rates for specific groups. They also stressed the efforts their schools make to retain students, and they noted that students leave school for nonfinancial as well as financial reasons. Some directors were most concerned about educational access for specific types of students, such as those from working-class families; minorities, particularly African American males; and out-of-state students at public institutions. 
One director expressed concern that some low-income and minority students do not even apply for college because of perceptions about high costs and limited financial aid. Another director stressed that it is not the prospect of large loan balances that deters low-income students from enrolling at his institution but the difference between the total education costs and the available financial aid. Another director stated that the federal government needs to (1) concentrate grant dollars on needy families who do not have other resources and (2) give loan money to those who have the means to pay it back. For needy students, he said, more loans may not help as much as more grants, but, for less needy students, more loans may help keep them in school. Some of the directors discussed their schools’ efforts to retain students and suggested that changes in federal financial aid would have hurt students more if their schools had not made such efforts. One director described his school’s posture toward retaining students as aggressive. He explained that if a student is withdrawing from his institution for financial reasons, the financial aid office will work with the student and try to come up with a solution that will allow him or her to stay. Another director stated that his office is also geared toward keeping students in school and that at least for students considering dropping out for financial reasons, the school is successful approximately 99.5 percent of the time. Nonfinancial factors also cause students to leave school. For example, one director said, some students, particularly first-generation students, must contend with competing demands from their families. Academic performance also affects whether students remain in college. In fact, another director stated, students’ grade-point averages are the best retention indicators because they reflect how well the students are doing academically and how well students like their academic programs. 
Finally, some directors described factors that cause students to take longer to graduate than they used to, even if they do not drop out. For example, students may take time off to work because of concern about indebtedness, or they may change their fields of study several times. Student attitudes towards debt vary, financial aid directors said: Some students are very concerned about borrowing, while others borrow large amounts to finance their education. Some directors said that low-income or minority students are more reluctant to borrow than other students. Directors’ opinions also varied on the effects of larger debt on the choices students make, such as field of study or postgraduate plans. Directors’ opinions varied on the degree to which students worry about accumulating debt. A student’s concern about borrowing depends on a number of factors, several directors noted, including the student’s year in college and individual or family attitudes toward accruing debt. For example, one director thought that students become more concerned about borrowing as they approach graduation; a second director was convinced that students are more concerned about debt at the beginning of their college careers and, over time, develop more confidence about their ability to support themselves (and therefore pay off debt). In general, certain types of students—older, independent students, students from low-income backgrounds, and graduate or professional students—tend to be more concerned about indebtedness than others, according to several of the directors. One director noted that many college students do not pay attention to how much they have borrowed; they are just trying to get registered at the beginning of each semester. Another director stated that parents are more concerned about borrowing than students, especially once students hit the maximum loan levels. The directors discussed some of the barriers to higher education for minority students. 
For example, one director said low rates of high school graduation reduce the pool of minority candidates. In addition, some directors saw the need for these students to borrow more to finance higher education as a barrier. According to one director, placing greater emphasis on loans to finance higher education has clearly hurt his public school’s capacity to recruit talented minority nonresident students. The high-need minority students who get a full range of aid, including grants, work-study, and loans, are reluctant to borrow, but they borrow anyway and enroll; those who get only loans, however, tend not to come. He and another director also pointed out that the minority students who are highly qualified academically are sought after by many schools; therefore, these students are likely to receive attractive financial aid packages. This director and one other mentioned that reluctance to borrow is found not only among minority students and their families but also among many first-generation, low-income college students. These students and their families fear the unknown, including what their investment in higher education will actually achieve for them and whether they will be able to repay the loan after graduation. Several directors thought that financial considerations were not significantly affecting students’ choice of majors, although directors’ opinions varied. One director observed that, for the first 2 years, college is not the real world for students—it is an extension of their comfortable family life—and that students do not address career issues and the effect of their debt until later. She observed that while most students are concerned with just getting a job after graduation, many also want to “study their passion” and will not choose a field just for high pay.
A second director explained that his institution has not experienced changed enrollment patterns at either the undergraduate or the graduate/professional level; one would expect to see such changes if students were greatly concerned about their debt. In contrast, other directors said students are looking for higher paying jobs because of high debt levels. At one institution, this has resulted in increased enrollment in fields such as engineering and decreased enrollment in fields such as education and nursing. Finally, a few directors cannot tell what effect increased indebtedness will have on student choices because much of the increase has taken place in the last year, they said. It is thus hard to tell what impact the cumulative debt will have on students when they graduate. Students were not currently changing majors because of concerns about increased debt, these directors said, but they were not sure what impact the current increases in loans will have on students in the future. Several directors provided a variety of reasons—both financial and nonfinancial—that students work while attending school. Opinions varied on whether students are working too much, but some directors agreed that working more than 20 hours per week is too much. Some directors spoke enthusiastically about the benefits of the work-study program. Directors mentioned several finance-related reasons why many students today work while attending school, for example, to help pay educational costs, to keep debt levels as low as possible, and to make up for gaps in their expected family contribution. Directors also mentioned that students work for nonfinancial reasons—for example, to gain work experience. Whether students choose to work while in school can also vary by the individual student’s residence, according to one director—commuters tend to work more than students living on campus. 
Finally, a few directors stated that it is unfortunate that some federal financial aid programs—specifically the Pell grant program—contain disincentives for students to work because increased earnings can decrease a student’s financial aid award. Working too many hours is only a problem in isolated cases, one director said. Even if students are working more, according to another, no evidence shows that this has had a qualitative impact on their academic work. Another director listed the risks students face when they choose to work more than 20 hours per week while attending school full time, including eroding the quality of their academic experience, isolating them from campus activities, and extending the time it takes them to graduate. Some directors cited the numerous and varied benefits of the work-study program, especially compared with off-campus work. Several directors mentioned that work-study helps students stay in college because they become more connected to the institution, develop relationships with mentors, and learn more about the school. Other benefits they mentioned included higher academic achievement because hours are controlled and tend to be more flexible; new skill acquisition, if the work is related to career goals; and reduced resentment among other students toward financial aid recipients because they are not being given something for nothing. Several directors mentioned the immediate need for more work-study dollars at their schools. Frontloading grants could be beneficial, according to some directors. A potential benefit of frontloading, according to one director, is that it would give students more confidence in their ability to manage their debt: They would not need to borrow until they were sure they could do the work and finish their college education. Some directors also mentioned that frontloading could help with retention and increase accessibility for students from special populations. 
Some noted that it could assist institutions in maintaining a more consistent aid policy over time and result in more uniform aid packages across institutions. Some directors raised concerns about frontloading. One concern was that it could be perceived as a bait-and-switch policy, because students were attracted to schools with large grants only to find that those grants were not available for all 4 years. Other concerns were that frontloading might still waste federal resources when students drop out, might not work in the absence of additional support services, and could concentrate federal grant dollars in 2-year institutions. In addition, the idea of frontloading is not based on data about the impact of grants on helping students stay in college, one director noted. One director’s recommendations for structuring a frontloading program included targeting specific populations and combining the program with income-contingent loans in the later years. Income-contingent loans, for which monthly repayment amounts are adjusted depending on income, would make it easier for graduates in low-paying jobs to repay. Another director stressed that whatever option the federal government chooses, it is important to stick to it. According to her, one of the most difficult problems students face, when beginning their college careers, is that they cannot be confident that the financial aid they receive the first year will be available in subsequent years; therefore, they cannot plan accordingly. Many students we interviewed said that without grants they would not be in school or would not be at that particular school. Although they generally preferred grants to loans, their answers varied when asked to choose between small grants and larger loans. The students were generally concerned about the levels of debt they were accumulating during college, but a number of them did not believe their debt levels would affect future decisions they made about careers or postgraduate work. 
They wanted to keep loan amounts as low as possible, but many of them would borrow whatever was necessary to finish their college education because they knew the value of the degree, they said. Most of the students we interviewed worked while attending school, and many cited benefits of working in addition to earning money. However, some said the amount they worked threatened their ability to focus on schoolwork or hurt their grade-point averages. Students we spoke to generally preferred grants to loans, and grant availability sometimes influenced their choices. Some students, however, indicated they would prefer larger loans to smaller grants, simply because they needed the larger amount of money to remain in school each year. Students also had different opinions on whether grants early in their college careers were more important than grants as they approached graduation, with many saying year-to-year consistency was important to them. For some students, the availability of grants helped determine the schools they attended. Some of these students chose private schools and said they would not be at those schools without grants. These students often received grants from their schools of over $10,000, or more than four times the maximum federal Pell grant, and many of them specifically mentioned public or community colleges as alternatives if they had not received these grants. Some students at public schools also said that they might not have been able to stay in school without grants. Finally, several students said that they would have worked more or taken time off, extending the time they needed to complete college. The absence of grants also affected some students’ choices. Some of the students we interviewed began postsecondary education at a community college, and then transferred to a 4-year school, to minimize costs or debt.
One student who transferred to a public school said that if more grants had been available in her first year, she would have started at the 4-year school she now attends. However, other students began at community colleges for nonfinancial reasons. For example, one student said she may have started at the community college even if more grant money had been available her first year because she was returning to school after being out for several years and the community college made for an easier transition.

We asked students what they would do if their aid package left them short of what they needed and they had to choose either a small grant or a larger loan to complete their financial aid offer. (The federal cost of a grant is three to four times that of a loan per dollar of aid, meaning that a grant of $1,000 costs the federal government about the same as a $3,000 to $4,000 loan.) Some students said a small grant might not be enough to keep them in school; if only a large loan would cover their costs through the year, they would choose the loan. Some specifically mentioned not wanting to work more than they already did. Several others said they did not know how they would raise additional money to cover the remaining gap. Others would choose small grants over large loans, preferring to make up the difference with either additional work, reduced living expenses, or an increased parental contribution. Some of these students said they would do whatever they could to avoid borrowing more than necessary.

We asked students whether they would have preferred an aid packaging scheme that frontloaded grant aid. Many students said they would not favor such a packaging plan, preferring grants “spread out” over their college years. Some saw frontloading as a departure from “consistency” in aid packaging and said consistency from year to year was important to them. 
On the other hand, some students saw advantages to such aid packaging. One student said that it would be an incentive to start school and that after 2 years students know the system better and know how to succeed. Another student reacted positively to the idea that students would have an opportunity to “prove you can do the work” before borrowing. Students we interviewed generally did not like borrowing to finance their education, but many expected to have to do so and thought the education or degree they would receive was worth going into debt for. Some students told us that their anticipated debt at graduation worried them or influenced what they planned to do after graduation, but others said that their choice of major or planned career was not at all influenced by earnings potential or the need to repay their loans. Among those whose attitudes toward borrowing had changed over time, some said that they had become more reluctant to borrow; others said that they found borrowing easier as they went further in school. Many students said they were aware of the need to borrow before they began college. They sometimes mentioned that a parent, sibling, or other acquaintance had borrowed to attend college; some said borrowing was “expected,” “the price you have to pay,” or “a necessary evil.” On the other hand, several students were the first in their families to go to college, so they told us that accumulating educational debt was a new phenomenon for them. Generally, students were in agreement that borrowing was worthwhile, given the rewards of higher education. Many students spoke of it as an “investment.” In addition, a number of students said they would borrow whatever was necessary to remain in school. Many students told us that they selected a major or career without regard to potential earnings or ability to repay loans, but repayment did affect the choices others made. 
Some of these students said that they were studying a certain field because they had always wanted to. Several said that they would worry about repayment when the time came. Others, however, were either worried that their field was not high paying and loan repayment would thus be difficult or were not concerned because they knew their field was high paying. Debt levels also played a role in some students’ plans for further education in graduate or professional schools. One student said that she was considering going to law school immediately after graduating because she knew she could then defer repayment of her undergraduate loans. Other students, however, wanted to take jobs immediately and begin paying off their accumulated debt. Some said that they knew they would need to borrow for additional schooling and (1) did not want to borrow any more than necessary while undergraduates or (2) wanted to work to repay their undergraduate loans before borrowing more for postgraduate work. Students’ thinking about loans often changed while they were in college. For some students, borrowing grew easier as time went on. For these students, taking out the first loan was “scary” and their families were hesitant. Several students who began in community colleges and then transferred to 4-year schools mentioned that they did not need to borrow until they got to the 4-year school. One of these students, as well as another who began at a 4-year school, mentioned that it is easier to borrow with several years of completed schooling under one’s belt. Others said that borrowing became easier or “routine” after the first loan. Other students, however, found that borrowing became more difficult as they approached graduation. For these students, the first loan came at a time when repayment was far in the future, but repayment loomed much closer at graduation. 
One mentioned that the thought of loans accumulating stayed in the back of her mind; others said the cumulative amount of their debt, not the amount they borrowed in any one year, was what concerned them. Work played a large role in the lives of most of the students we interviewed. Students worked for the money they earned but also for other benefits, such as learning about time management, gaining job experience, and discovering networking opportunities with different offices or departments in their schools. Those in on-campus jobs told us that they had more flexibility in scheduling their work hours and also were better able to make contacts with their schools. Some students said that they worked as much as they could, but others said that they were working too much and their studies were suffering. Earning money was the benefit of working that students cited most often. Many students said they used the money they earned for living expenses. Others saw work more as one component of their overall plan to finance their education. For example, several students mentioned that working helps reduce the amount they need to borrow. Several students we talked to were concerned enough about finances that they chose jobs not for convenience but for higher pay, turning down work-study jobs because of fears the money would run out or because other jobs paid more. Another student, however, said she wanted a work-study job because it paid more than the job she currently held. Many students said that working helped them budget or manage their time or that it added structure or discipline to their schedules. Another benefit of working was gaining job experience, including particular job skills. Some students worked in an office setting for the first time and learned to work with computers. Several also said that work looks good on their resumes or when applying for future jobs. A number of the students we talked to said they preferred on-campus jobs to off-campus ones. 
These students said that the main advantages to working on campus were convenience and flexibility. Some on-campus jobs allowed students to shift their hours if their school work or other demands became burdensome; others allowed students to set their own schedules around their class schedules. Several students working on campus told us that they could sometimes study while at work, although at least one student with an off-campus job was also able to study occasionally. Finally, several students said that the on-campus location was important simply because they did not have to spend time commuting to work. For example, one student said she did not have a car and would not have been able to work off campus. On-campus jobs also gave some students a sense of being more connected with their school. Through work, these students said, they made contacts with campus offices that helped them later on. For example, working in the financial aid office helped some students learn more about the financial aid system. In contrast, students working off campus sometimes mentioned inconveniences associated with their work. The time spent commuting to work was an important concern for some. In addition, off-campus jobs tended to have less flexible hours and schedules. Some students, however, preferred off-campus work because they could earn more than in an on-campus work-study job. Some students said that working affected their academic studies. These students worked a range of weekly hours, one student as few as 8 hours but most over 15 hours per week. One student said that he worked 40 hours per week in 1 year because he had no other financial support; he had to drop out because of bad grades that year. He transferred, received financial aid, and was currently working 22 hours, earning a 3.7 grade-point average. Other students, with a similar range of hours worked, said that working did not affect their study time or hurt their grades. 
Although their hours varied, these students generally worked fewer than 20 hours per week. In addition, several said that (1) if they were not working, they would probably not be using that time for additional studies or (2) students who work perform better academically than those who do not. We asked some of the students how much they could work without hurting their studies or how much would first begin to hurt their studies. Most responses were in the 15- to 20-hour per week range.

In addition to those named above, the following individuals made important contributions to this report: Jane A. Dunkel assisted in conducting the interviews and discussion panels and helped write the report, Charles M. Novak assisted with the original conceptualization of the study, Thomas L. Hungerford developed the statistical model and conducted initial analyses, Steven R. Machlin and Robert Bifulco performed additional statistical analyses, Elsie A. M. Picyk performed data processing, and Luann Moy and Linda Stinson provided methodological advice on the interviews and discussion panels.
Pursuant to a congressional request, GAO reviewed how student financial aid affects low-income students' dropout rates, focusing on whether: (1) the timing of loan and grant aid influences students' dropout rates; and (2) restructuring federal grant programs could improve low-income students' dropout rates. GAO found that: (1) grants and loans do not have the same effects on reducing low-income college students' dropout rates; (2) although grant aid generally lowers low-income students' dropout rates, loans have no significant impact on these students' dropout rates; (3) the timing of grant aid greatly influences students' dropout rates; (4) grant aid to low-income students is more effective during the first school year than in subsequent years; (5) although financial aid program participants have substantially lower dropout rates than other comparable students, financial aid directors and students have mixed views on the potential efficacy of frontloading aid packages; (6) a pilot program could be valuable in evaluating the cost effects of frontloading student aid for low-income college students; and (7) Department of Education officials need to further review their legislative authority to determine whether they are authorized to conduct such a pilot project.
Primarily through its telephone, website, and, to a much lesser extent, face-to-face operations, IRS provides tax law and account assistance, limited tax return preparation assistance, tax forms and publications, and outreach and education. Taxpayers can call IRS to speak directly with a customer service representative (CSR), or use automated telephone lines to obtain information quickly. IRS has 10 automated telephone lines, which allow, for example, taxpayers to interactively inquire about the status of a refund, order a transcript of their return or account information, and request a personal identification number (PIN) to file electronically. IRS’s 149 Teletax lines provide prerecorded messages on tax law topics ranging from alternative filing methods to what a taxpayer can itemize. CSRs are also responsible for responding to paper correspondence. IRS tries to minimize the percent of overage paper correspondence (generally correspondence that is more than 45 days old). IRS staff provides face-to-face assistance at 401 walk-in sites or Taxpayer Assistance Centers (TAC) where taxpayers can get basic tax law questions answered, review their accounts, and have returns prepared if their annual income is $49,000 or less. IRS also has volunteer partners that staff over 12,000 volunteer sites. Volunteers at these Volunteer Income Tax Assistance (VITA) and Tax Counseling for the Elderly (TCE) sites prepare tax returns for traditionally underserved taxpayers, including the elderly, low-income, disabled, and those with limited English proficiency. These sites also provide other services such as helping taxpayers without a bank account get into the banking system and financial literacy education. In addition to the services IRS provides to taxpayers during the filing season, paid tax preparers and tax software development companies play an important role answering taxpayers’ questions and filing tax returns. 
In 2011, for the first time, paid preparers who expected to prepare 100 or more returns were generally required to file the returns electronically. As part of IRS’s Business System Modernization (BSM) program, over the next few years IRS plans to make major changes to how it processes individual income tax returns to facilitate faster refund processing and maintain more up-to-date account information. For example, IRS is replacing the legacy Individual Master File (IMF) and current Customer Account Data Engine (CADE) systems that it uses to process individual income tax returns with CADE 2. IRS plans to implement its CADE 2 program in three phases beginning in 2012. The first phase includes two projects: (1) processing tax returns daily, rather than weekly, using the IMF (known as IMF daily); and (2) implementing the CADE 2 database. IRS is also replacing its legacy e-filing system with the Modernized e-File (MeF) system, and it plans to retire the legacy e-file system in October 2012. IRS cannot accept electronically filed returns directly from taxpayers. Rather, IRS authorizes e-file providers, such as large tax preparation firms, to transmit returns to IRS electronically using either the legacy e-file or MeF system. The benefits of MeF include accepting or rejecting individual tax returns faster, providing a clearer explanation of why a return was rejected, and accepting prior-year returns. The system also allows taxpayers to attach portable document format (PDF) files to their tax returns (the legacy e-filing system cannot accept additional documentation, requiring such returns to be submitted on paper). Last year we reported that IRS has been taking actions to improve its website by identifying steps to enhance taxpayer service. As part of this effort, IRS plans to spend $320 million to upgrade and maintain its website over the next 10 years. These plans include introducing a new website by the 2013 filing season. 
IRS’s 5-year strategic plan for improving service to taxpayers identified five website-management control gaps, such as content management, website design and usability, and frequently asked questions. IRS has three existing web portals accessible to the public at large, registered users, and IRS employees, respectively. According to IRS, its new portal environment investment will replace the current environment that has reached the end of its useful life, and provide streamlined, web-based services to taxpayers, business partners, IRS employees, and other government agencies. IRS plans to begin phasing in the new portal environment in 2012. While processing returns, IRS validates key pieces of information and corrects returns before issuing refunds, which allows it to avoid auditing taxpayers after returns have been processed and refunds sent to taxpayers. Such audits, which are costly to IRS and burdensome for taxpayers, may result in the assessment of interest and penalties, and may require IRS to collect amounts due. Correcting errors may result in taxpayers receiving larger refunds. Prerefund compliance checks help ensure that taxpayers submit required information to the IRS with their returns. In conducting prerefund compliance checks, IRS completes automated and relatively low-cost (compared to audits) corrections under its math error authority (MEA). MEA is statutory authority granted to IRS by Congress to correct calculation errors and other obvious instances of noncompliance, such as claims above income and credit limits, and assess additional tax based on such errors without having to issue a statutory notice of deficiency. IRS also screens returns using the Electronic Fraud Detection System (EFDS), which detects returns at a high risk for fraud. IRS is currently replacing EFDS with the Return Review Program (RRP). 
In addition, millions of taxpayers may choose to obtain their tax refunds through a refund anticipation check (RAC) or refund anticipation loan (RAL), which are offered to taxpayers by paid preparers or banks in connection with federal or state tax refunds, or both. RACs are a refund delivery option where refunds are directly deposited into a temporary bank account set up by a financial institution or tax preparer on behalf of the taxpayer. The tax return preparation fee, along with other fees, is generally withheld from the refund and the remaining funds are available to the taxpayer. RALs are short-term, high-interest-rate bank loans that allow taxpayers to get their refunds faster and also allow taxpayers to pay return preparation and other fees out of their refunds. In contrast to RALs, RACs are not loans. During 2011, taxpayers also could choose to receive their tax refund from IRS through (1) their banks’ direct deposit program, (2) paper checks, or (3) debit cards, which could be obtained from a participating bank, through Treasury’s pilot program on debit cards, or at VITA/TCE sites. Since 2009, IRS has worked with partner organizations at VITA/TCE sites to encourage taxpayers not requesting a direct deposit of their refund to opt to receive it on a debit card sponsored by a participating financial institution. Last year we reported that less than 3 percent of eligible taxpayers at VITA/TCE sites elected to receive refunds on debit cards. Separately, in 2011, Treasury launched a pilot program offering about 800,000 low-income taxpayers tax refunds on debit cards. Although targeting the same demographic group, the VITA site offers are made in person and the Treasury offer was made through the mail. 
Even though both programs are relatively small in scale, they are important because they are intended to identify ways to reduce the cost of delivering refunds to taxpayers, provide faster refunds compared to paper checks, reduce transaction costs, and provide individuals who might not otherwise have access to a bank account with banking services. During the 2011 filing season, the percentage of returns e-filed increased considerably and IRS expects new systems to speed refunds to taxpayers. As a result of these improvements, IRS’s refund timeliness measure and goal, which relates only to paper returns, is outdated. In addition, the high call volume and amount of paper correspondence highlight the need to improve taxpayer service through providing additional self-service tools. Although fewer taxpayers receive face-to-face assistance than in other ways, IRS is taking steps to improve service at TAC and VITA sites. IRS processed about 140 million returns and almost reached its e-file goal of 80 percent, established by Congress in 1998 (78 percent of individual returns were e-filed and 22 percent filed on paper). E-filing increased about 13 percent compared to last year, as table 1 shows. E-filing has many benefits for taxpayers, such as higher accuracy rates and faster refunds, and it also provides IRS with significant cost savings through eliminating the need for manual transcription of paper returns, which is labor intensive and introduces errors. According to IRS, in fiscal year 2010, it cost 17 cents to process an e-filed return and $3.66 for returns filed on paper. IRS officials and representatives from major tax preparation firms attributed the increase in the e-file rate to factors including the e-filing requirement for paid preparers and the fact that IRS did not mail out hard-copy tax forms to taxpayers. Many of the states also have e-file mandates, which may also encourage taxpayers to e-file federal returns. 
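The per-return cost gap between e-filed and paper returns cited above implies substantial processing savings as the e-file share rises. The following back-of-the-envelope sketch combines those figures; the filing-mix totals and the resulting dollar amounts are illustrative estimates, not IRS budget numbers.

```python
# Illustrative sketch using figures cited in this report: fiscal year 2010
# processing costs of about $0.17 per e-filed return versus $3.66 per paper
# return, and a 2011 mix of roughly 140 million individual returns, about
# 78 percent of them e-filed. Totals are rough estimates, not IRS figures.

EFILE_COST_PER_RETURN = 0.17   # dollars, per the report (FY 2010)
PAPER_COST_PER_RETURN = 3.66   # dollars, per the report (FY 2010)

def total_processing_cost(total_returns, efile_share):
    """Estimated processing cost, in dollars, for a given filing mix."""
    efiled = total_returns * efile_share
    paper = total_returns - efiled
    return efiled * EFILE_COST_PER_RETURN + paper * PAPER_COST_PER_RETURN

cost_at_78_pct = total_processing_cost(140_000_000, 0.78)
cost_at_80_pct = total_processing_cost(140_000_000, 0.80)  # the e-file goal

# Each paper return shifted to e-file saves roughly $3.49 in processing.
savings_per_shifted_return = PAPER_COST_PER_RETURN - EFILE_COST_PER_RETURN
```

Under these assumptions, moving from a 78 percent to an 80 percent e-file rate (about 2.8 million returns) would reduce estimated processing costs by nearly $10 million.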
Further, IRS met all eight returns processing goals, summarized in appendix IV. IRS’s current refund timeliness measure and goal, which it routinely uses in budget justification documents and to assess its performance, do not include e-filed returns. The IRS Restructuring and Reform Act of 1998 requires IRS to report to Congress on how it has maintained processing times of 40 days or less for paper returns, in addition to implementing a plan to increase electronic filing, and IRS maintains a goal of issuing refunds for returns filed on paper within 40 days. Due to improvements in the percentage of e-filed returns (nearly 80 percent of returns are now filed electronically), the measure does not apply to the majority of returns filed by taxpayers. IRS last made significant changes to the refund timeliness measure in 2003. Since that time IRS has made important changes to facilitate faster refund processing. For example, the number of e-filed returns has more than doubled, IRS implemented new systems including current CADE, and, in 2012, IMF daily processing should allow IRS to issue refunds within 4 business days for direct deposit and 6 business days for paper checks after it processes the return and posts the return data to the taxpayer’s account. As a result, the goal of issuing 97 percent of refunds within the 40 days for paper returns does not give IRS a meaningful indicator of how quickly it is disbursing refunds. According to IRS officials, the refund timeliness goal captures the percentage of refunds issued for returns filed on paper within 40 days, which is 5 days before IRS generally must begin paying interest on the refund. However, as we previously reported, performance measures and goals should provide useful information for decision-making to track how programs can contribute to attaining the organization’s goals and mission. 
We have also stressed that agencies need to consider differing needs of various stakeholders, including Congress, to ensure that performance information will be both useful and used. IRS has a variety of options for updating its refund timeliness measure. For example, more meaningful measures could include identifying the percentage of refunds issued during given periods (such as the percentage of refunds issued in 10 or 20 days) or creating separate measures for returns filed on paper and returns filed electronically. Doing so would not preclude IRS from continuing to meet its requirement of reporting the percentage of refunds for returns filed on paper issued within 40 days. Without developing a new refund timeliness measure and goal to more appropriately reflect current capabilities, IRS is missing opportunities to better measure its actual performance and provide useful information to Congress for decision-making purposes. Since 2007, IRS has struggled to respond to high call volume, which has adversely affected access to telephone service. In 2011, IRS received 83 million calls as of June 30, compared to about 57 million through the same date in 2007 (see app. VI). Over the same years, as table 2 shows, taxpayers’ ability to gain access to CSRs, IRS’s live telephone assistors, deteriorated. In 2011, 72 percent of taxpayers seeking live telephone assistance got through to a CSR, compared to 81 percent in 2007. The deterioration in access is also reflected in the length of time taxpayers must wait before speaking to a CSR. In 2011, average wait time was almost 12 minutes; in 2007 it was less than 5 minutes. IRS officials attribute the higher call volume over the years to a number of factors including tax law changes made very late in the year that generated a lot of taxpayer questions. Table 2 also shows that, as performance declined, IRS reduced its goals for access to CSRs and increased its goal for telephone wait time. 
Despite the lower chance of getting through to CSRs and longer wait times, this year IRS met its goal for providing live assistance and almost met it for wait time. IRS sets its telephone performance goals based on the expected volume and complexity of calls (complexity affects the time required to respond to a taxpayer), resource availability, and the anticipated volume of paper correspondence that CSRs handle. Even though IRS has reduced its goals for phone service, the number of full-time equivalents (FTE) dedicated to answering the phones has actually increased from about 8,000 in fiscal year 2007 to about 8,800 in fiscal year 2011. A positive aspect of IRS’s telephone service in 2011 was the accuracy of CSRs’ answers. As shown in table 2, IRS’s accuracy rate estimates for CSR-answered calls remained well over 90 percent. In the past we have reported that IRS officials attribute these high accuracy rates to automated interactive tax law assistance tools that CSRs use to provide answers to taxpayers. IRS also attributes the high accuracy rate to the use of contact analytics—a tool used to identify reasons why taxpayers call IRS and evaluate how CSRs interact with taxpayers. Key to improving telephone access, given the high volume of taxpayers calling the IRS and resource constraints, is shifting as many calls as appropriate to self-service tools, such as interactive automated telephone lines or the IRS website. Providing automated answers to taxpayer questions reduces the demand to speak to a CSR and also reduces IRS’s costs. In 2011, through June 30, CSRs answered over 22 million calls at a cost of about $30 per call, for a total of about $660 million. Conversely, IRS said this year it cost $0.36 to answer an automated phone call. We identified two types of calls that could likely be answered through automation, but are instead answered by CSRs—calls about the status of amended tax returns and calls asking for the location of a TAC or VITA site. 
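The per-call cost gap above suggests the scale of potential savings from shifting calls to automation. The sketch below is a rough, hedged illustration built from the report’s figures; it is not an IRS estimate, and the example call volume is the report’s approximate figure for an amended-return status line.

```python
# Rough illustration (not an IRS estimate) using the per-call costs cited
# in this report: about $30 for a CSR-answered call versus about $0.36 for
# an automated call.

CSR_COST_PER_CALL = 30.00        # dollars, per the report
AUTOMATED_COST_PER_CALL = 0.36   # dollars, per the report

def automation_savings(calls_diverted):
    """Estimated savings, in dollars, if the given number of CSR-answered
    calls were instead handled through automation."""
    return calls_diverted * (CSR_COST_PER_CALL - AUTOMATED_COST_PER_CALL)

# Example: the roughly 5 million annual inquiries that, per IRS's
# assessment, an automated amended-return status line could serve.
estimated_savings = automation_savings(5_000_000)
```

At these unit costs, each diverted call saves roughly $29.64, so even the roughly 60,000 TAC/VITA locator calls answered by CSRs in early 2011 would represent well over $1 million in potential savings.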
GAO, 2009 Tax Filing Season: IRS Met Many 2009 Goals, but Telephone Access Remained Low, and Taxpayer Service and Enforcement Could Be Improved, GAO-10-225 (Washington, D.C.: Dec. 10, 2009).

Taxpayers can check the status of their refund using the Where’s My Refund online self-service tool. However, taxpayers who filed an amended return must speak to a CSR. Last year, IRS assessed the need to create an automated telephone line that gives taxpayers the status of their refund if they filed an amended return, similar to the current Where’s My Refund automated line. IRS officials do not track the exact number of calls they receive related to amended returns, but believe it to be a significant number. IRS received over 4.3 million amended returns in fiscal year 2011 alone, and according to IRS’s assessment, an automated self-service tool for checking on the status of a refund from an amended return could potentially serve as many as 5 million taxpayers annually. IRS officials acknowledged that creating this line would free up CSRs to answer other lines, and submitted an internal request to create such an automated line to allow taxpayers to determine the status of their amended return refund, which has yet to be funded. IRS officials were uncertain exactly how much it would cost to develop the line but said that it would probably cost less than $1 million. In addition, from January 1 through June 30, 2011, CSRs answered over 60,000 calls from taxpayers inquiring about the location of a TAC or VITA site. During the same time period last year, IRS received over 35,000 calls to the lines and, in 2009, IRS received more than 60,000 calls to these lines. However, IRS does not have an automated telephone line for those taxpayers to call. As a result, taxpayers must go online or call IRS and wait to speak to a CSR. 
Without offering an automated phone line for taxpayers inquiring about the status of their amended return or the location of a TAC or VITA site, CSRs will continue to answer calls that could be addressed through automation. In determining whether to create additional automated lines and which lines to prioritize, IRS would need to compare the up-front costs associated with creating such applications with the projected benefits over time. IRS is unsure how much it would cost to automate the amended return telephone line, but IRS’s preliminary estimates suggest that the benefits may outweigh the cost. IRS officials acknowledged they have not determined whether the cost of automating the TAC and VITA locator lines would be worth the benefits. As a result, IRS may be missing opportunities to provide taxpayers with more self-service tools, save resources, and provide better access to taxpayers. Managing the trade-offs between responding to paper correspondence and providing live telephone service illustrates the challenges IRS faces in improving taxpayer service. In 2011, IRS dedicated about 4,700 FTEs to responding to paper correspondence. We previously reported that the age of taxpayer paper correspondence had risen steadily since 2005, and recommended that IRS establish a performance measure that includes providing timely correspondence service to taxpayers. IRS agreed, but the recommendation has not been fully implemented. Table 3 shows that the overall amount of paper correspondence is about the same as last year and the percentage of overage inventory (paper correspondence older than 45 days) increased again in 2011. The volume and percentage of overage paper correspondence further highlight the need to provide additional automated services and maximize resources. Although far fewer taxpayers visit TAC and volunteer sites than call IRS or use its website, these sites represent an important service. 
As of April 30, 2011, IRS received 2.85 million taxpayer contacts at its 401 TACs, compared to about 2.78 million contacts during the same period last year. The accuracy of accounts and tax law assistance provided at TACs stayed about the same as last year, as table 4 shows. IRS is implementing service improvements, including self-service tools, at TAC sites intended to increase access, reduce wait time, and extend the effectiveness of its employees. IRS expanded its pilot program of extended Saturday and evening hours to 36 TACs to increase taxpayer access. IRS officials said they would like to expand this program but need to renegotiate a letter of understanding with the National Treasury Employees Union so they can adjust employee schedules without incurring overtime costs. To reduce wait time and improve customer service, IRS officials told us there were 100 Facilitated Self Assistance kiosks located at 37 TAC sites to encourage clients with less complex questions to use them. For 2011, the number of taxpayer contacts at the TAC kiosks more than doubled to about 21,500 from about 9,550 in 2010. For 2012, as part of IRS’s efforts to increase self-service and improve the productivity of its employees, IRS is piloting a virtual assistance program at 12 TACs that would allow employees to interact with walk-in clients through a video terminal at other sites when the employee is not occupied at their home site. Also beginning in 2012, IRS will be able to measure wait time at TAC sites on a nationwide basis. Highlights of VITA/TCE site performance include the following: The number of volunteers at VITA/TCE sites increased slightly to over 88,500, up from about 87,600 last year. Volunteers prepared 3.2 million tax returns, up from 2.9 million last year. 
Return preparation accuracy by volunteers increased to 87 percent, a gain from 85 percent in 2010. IRS placed employees at 31 VITA sites to assist with return preparation and answer questions from about 6,800 taxpayers, up from 27 sites and about 5,500 taxpayers in 2010. Due to budgetary constraints, IRS does not plan to expand the number of IRS employees or sites supported in 2012. IRS is supporting its volunteer site partners as they work with taxpayers to promote financial education and asset building, which includes efforts to bring taxpayers without a bank account into the banking system. The number of taxpayers requesting direct deposit at VITA sites has risen in each of the last 5 years, and by a total of about 50 percent since 2007 (from about 970,000 in 2007 to about 1.5 million in 2010). Later in this report, we discuss the options available for taxpayers, particularly those without bank accounts, to receive refunds. Visits to IRS's website (www.irs.gov) and the use of self-service tools have continued to increase since last year, as table 5 shows. IRS officials believe that the increase in the use of the search tool over the years is due in part to site visitors not being able to easily locate the information they are seeking. IRS acknowledged that the existing manner in which it manages content on its website contributes to more searches because of duplicative and outdated information. Currently, content on IRS's website is developed by about 300 employees throughout IRS, with oversight from Content Area Administrators. In September 2011, IRS developed a draft business case for a new content management strategy that is expected to significantly reduce duplication and greatly improve user search results. According to IRS, centralizing certain elements of content management is a key piece of its Internet strategy, and removing old content should make the site more efficient and improve content consistency, quality control, and the user experience.
As we noted earlier, IRS has begun spending a planned $320 million on its website over a 10-year period. IRS awarded the contract for the new website in August 2011 and has begun developing the website, which it plans to introduce in 2013. IRS's investment plans include, among other things, introducing new, more secure portals for taxpayers to access information. However, IRS does not have concrete plans that define what additional online services the new website will ultimately provide and how much the services will cost. To their credit, IRS officials have begun developing a roadmap that identifies some online services they would like to provide, and IRS has periodically added new online services in the past. However, the roadmap omits several fundamental elements. For example, it does not include an assessment of the costs and benefits of the new services identified, time frames for when these online services would be created and available for taxpayer use, or specific plans to periodically revisit the strategy and make revisions based on IRS's priorities. Online tools, much like automated telephone lines, are a partial substitute for calling IRS and perhaps speaking to a CSR. To the extent that taxpayers can be diverted to the web, IRS can assist them more quickly and at much lower cost. Federal guidance suggests that a strategy to guide website development within an agency is important. For example, the guidance states that a strategic plan is an essential part of web management; that performance goals and time frames are necessary elements of the strategic plan; that cost estimates are necessary to support decisions about funding one program over another, evaluate resource requirements at key decision points, and develop performance measurement baselines; and that agencies should revisit plans periodically and update them to reflect changes in priorities and capabilities.
Recent organizational changes within IRS, including the addition of a Director of Online Services at the agencywide level and a reorganization of IRS's online management team, offer opportunities to develop a more comprehensive approach to website development. The Director of Online Services told us that he plans to further develop the strategy based on taxpayers' needs and to develop online services in an iterative manner. However, IRS has not yet developed an initial schedule for implementing online services, and it is not clear that IRS plans to develop a more comprehensive Internet strategy. Without a comprehensive Internet strategy in place that IRS revisits on a regular basis, IRS risks not getting the greatest possible benefits from the $320 million and any additional funding for online services that it proposes to spend. In addition, taxpayers and Congress do not have complete information about the online services IRS intends to provide in return for these investments. Unlike users of the online services offered by two states we identified, taxpayers who visit IRS's website cannot view and update personal tax account information online. Online services are a substantially less expensive means for IRS to conduct business with taxpayers than telephone or paper correspondence, making it important for IRS to promote interactive website services. Table 6 compares online services offered by tax agencies in New York and California with those offered by IRS. New York and California state tax officials said they expect taxpayers to increasingly transition from the phones to the web for information, driving down their operating costs. Officials from New York also reported that the anticipated savings greatly outweigh the up-front costs in their case.
According to IRS officials, they have not allowed taxpayers to view and update elements of their personal tax account online because of outdated technology and federal regulations requiring secure access to account information. However, IRS has recently taken steps that may allow it to meet federal requirements for electronically authenticating (e-authenticating) users online so that taxpayers can access information securely. By 2013, IRS plans to have online security features in place that would allow taxpayers to access more account information online. Nevertheless, IRS has not assessed the need for allowing taxpayers to view and update elements of their personal account information online, nor has it conducted an assessment of the risks associated with doing so—both of which could be completed in conjunction with the development of the more comprehensive Internet strategy discussed above. Without making these determinations, IRS is missing opportunities to reduce costs and provide the most beneficial online services to taxpayers. In April 2011, the Commissioner of Internal Revenue said that IRS should develop a long-term vision to perform more prerefund checks of returns by requiring earlier submission of information provided by third parties so that IRS can match the data with taxpayer returns. He acknowledged that implementing such a strategy would require a fundamental shift in how IRS conducts its business and would likely need to take place over a significant period of time. In more recent remarks in October 2011, the Commissioner stated that after conducting an initial review of the steps IRS would need to take to achieve his long-term vision, he believes that implementing the vision may be even more feasible than initially thought. Figure 2 illustrates IRS's current prerefund process for e-filed and paper returns. As we mentioned earlier, the key information systems that support the process are further outlined in appendix III.
As IRS develops this strategy, it is also undertaking shorter-term initiatives, including upgrading information systems, expanding data collection, and identifying areas where it could use additional MEA or make changes to existing processes to enhance prerefund compliance checks. Upgrading Information Systems: In 2012, IRS plans to implement IMF daily processing; in 2013, it plans to make MeF the sole system for accepting electronic returns; and at a future date it plans to transition to CADE 2 for returns processing. IRS also plans to replace the EFDS—which applies specific fraud criteria to filed returns to identify questionable refund claims—with RRP. RRP is a more modern database that IRS views as a critical piece of its prerefund compliance activities because it will allow IRS to more effectively identify fraudulent schemes early in the filing season. Although IRS believes that EFDS is obsolete and too risky to maintain past 2014, procurement delays and a change in vendors will likely extend the implementation of RRP beyond 2014. We plan to further assess RRP's progress as part of our annual budget and information systems reviews. Expanding Data Collection: We reported earlier that IRS does not transcribe all data from paper-filed returns due to cost constraints. Because IRS runs automated compliance checks only on data available electronically—data from e-filed returns and transcribed data from paper returns—the amount of data available for enforcement activities is limited. In October 2011, we reported that because an increasing percentage of returns are e-filed, IRS could be at the point where the benefits of digitizing additional data from paper returns are greater than the costs. Identifying Additional MEA or Process Changes: IRS works with Treasury on a case-by-case basis to identify areas where it may need MEA.
Although we have suggested that granting IRS broader MEA would help ensure compliance before refunds are issued, we recognize that MEA may not always be the most effective or appropriate prerefund compliance tool (see apps. I and II for a list of current and proposed MEA for IRS). For example, in our October 2011 report on IRS's administration of the adoption tax credit, we recommended that IRS determine whether existing processes could be used to reduce the number of costly audits conducted rather than obtain MEA. Although these short-term efforts to enhance prerefund compliance checks are important, matching information provided to IRS by third parties to tax return data during the filing season may generate long-term benefits to taxpayers and IRS that far exceed those from current prerefund compliance checks. In 2010 (the last year with data available), RACs greatly outnumbered RALs, 18 million to 2 million, as shown in figure 3. Since 2005 the total number of RACs and RALs issued has not changed much, but the distribution has. Although less is known about RACs than about the more controversial RALs, some have expressed concerns that RACs have features similar to RALs and are used by the same categories of taxpayers. For example, consumer advocacy groups have noted that the fees associated with a RAC may be high and lack transparency, and that taxpayers may not always fully understand the refund product to which they are agreeing—similar to concerns previously raised with respect to RALs. A 2010 Urban Institute report developed at the request of Treasury noted that RAC and RAL usage is common across similar population groups. For example, RAC and RAL usage is highly concentrated in America's poorest communities, and RAC and RAL users frequently do not have bank accounts to receive direct deposits.
RAC and RAL users also tend to be similar to consumers of other alternative financial services, including pawnshop loans and payday loans. The high use of RACs and RALs in low-income areas may be explained, in part, by the fact that allowing the return preparer to deduct tax preparation fees from the refund (as opposed to paying those fees out of pocket when the return is prepared) is the primary benefit of a RAC. The total average cost of a RAC is difficult to calculate because pricing varies considerably across providers and fees beyond the flat fee for setting up the RAC account are difficult to identify. Figure 4 provides an example of potential fees incurred by taxpayers when filing a return and receiving a refund through a RAC. For example, most tax preparers we observed charge a flat fee of about $30 to $35 for setting up a RAC account. Tax preparers may also charge an additional fee for issuing a paper check, and taxpayers may incur fees when using debit cards supplied by tax preparers. In addition, preparers charge standard fees for tax preparation, including for document preparation and e-filing. Taxpayers may also incur other fees not charged by the preparer, such as check-cashing fees. Although many taxpayers may legitimately need to pay for tax preparation services out of their refunds, the fees associated with RACs and the concerns noted above raise questions about whether taxpayers understand the benefits and all the fees of RACs. Federal rules require tax preparers to inform taxpayers that they are receiving a RAC or a RAL, and some states have issued regulations requiring additional disclosures to taxpayers when they sign up for a RAC. For example, Arkansas, California, Maine, and Maryland require preparers to post a RAL and RAC fee schedule. One step that could help taxpayers make more informed decisions about RAC use is improving the relevance of IRS's refund timeliness performance measure, which we previously discussed.
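To illustrate how the separate charges described above can accumulate for a single taxpayer, the sketch below tallies a hypothetical set of RAC-related fees against a hypothetical $1,500 refund. Only the $30 to $35 setup-fee range comes from our observations; every other amount is an illustrative assumption, not a figure from IRS or figure 4.

```python
# Hypothetical RAC fee tally; amounts other than the setup fee are assumed.
fees = {
    "RAC account setup (flat fee)": 32.00,      # observed range ~$30-35
    "Paper check issuance": 10.00,              # assumed
    "Tax preparation and e-filing": 150.00,     # assumed standard prep fee
    "Check cashing": 30.00,                     # assumed, charged by a third party
}
refund = 1_500.00  # assumed refund amount

total_fees = sum(fees.values())
net_refund = refund - total_fees
print(f"Total fees: ${total_fees:.2f}; refund after fees: ${net_refund:.2f}")
```

Even with modest assumed amounts, the combined charges consume a meaningful share of the refund, which is why transparent, itemized fee disclosure matters to taxpayers choosing a RAC.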
A refund timeliness measure that gave taxpayers a clearer picture of how long it normally takes to get a refund from IRS could help them decide whether it is worth paying RAC fees. In part to address concerns about costly refund delivery mechanisms, in January 2011 Treasury launched a pilot program that offers low- and medium-income taxpayers the option to receive their federal tax refund on a debit card, which could be delivered to taxpayers more quickly and at a lower cost to IRS. In part, the program was intended to measure whether providing such options to taxpayers would reduce reliance on costly refund products and offer safer and more secure refunds to taxpayers without bank accounts. Treasury designed the program to test a variety of fee structures and marketing techniques to determine whether taxpayers would likely use the cards if the program were expanded on a national basis. The preliminary results indicate that taxpayers are most sensitive to the pricing of a refund debit card, opting most frequently for the card with no fees. The Urban Institute is working on the final assessment of the Treasury program, which is scheduled to be released in December 2011. According to a Treasury official, because the program was a pilot to test taxpayers' responsiveness to different debit card offers, Treasury does not expect to continue the program in 2012. However, it is exploring other options to test out products in future filing seasons. Treasury is also determining whether refunds could be deposited on Direct Express cards—cards on which citizens already receive federal benefit payments—at a future date.
Separately, IRS is trying to encourage taxpayers who may not have an account at a bank or other financial institution to receive their refunds through direct deposit on a debit card issued by one of IRS's four national bank partners. In 2011, refunds on debit cards were available at all VITA sites, and IRS's national banking partners issued just over 6,000 prepaid cards—a small percentage of potentially eligible taxpayers. In August 2011, IRS completed a study to determine how to appropriately market debit cards and other services provided at VITA sites. IRS concluded that, among other things, low-cost options exist to increase the use of debit cards at VITA/TCE sites, such as continuing to work through VITA/TCE partners to promote the cards. IRS officials anticipate that these two efforts may result in reduced taxpayer use of RACs and RALs by providing taxpayers without bank accounts a low-cost or no-cost option to receive refunds quickly. IRS is processing tax returns in a rapidly changing environment. In 2011, IRS met a number of its filing season performance goals, and e-filing increased to nearly 80 percent—a target that IRS has worked to reach for more than a decade. Moving forward, new systems, including CADE 2 and MeF, should allow IRS to issue refunds much more quickly and provide other benefits. In this context, opportunities exist for IRS to revisit how it measures its performance in processing tax returns and refunds and to make related goals more meaningful. The continued low level of telephone service, combined with the high cost of assistor-answered calls, highlights the importance of implementing additional self-service tools for both IRS's telephones and its website. This is especially important in an era of tight budgets, when federal agencies will be expected to do more with less.
By developing an Internet strategy and implementing e-authentication, IRS is taking important steps to identify and provide additional online services, including starting to spend a planned $320 million on its website over 10 years. However, developing a more comprehensive strategy should help ensure that IRS gets the most benefit for taxpayers from this investment. We recommend that the Commissioner of Internal Revenue take the following four actions:
develop a new refund timeliness measure and goal to more appropriately reflect current capabilities;
offer an automated telephone line that gives taxpayers the status of their amended tax return, unless IRS has a convincing cost-benefit analysis to suggest that the costs exceed the benefits;
assess the costs and benefits of automating the TAC/VITA location telephone lines, and automate these lines if the benefits exceed the costs; and
complete an Internet strategy that provides a justification for the implementation of online self-service tools and includes an assessment of providing online self-service tools that allow taxpayers to access and update elements of their account online; acknowledges the costs and benefits to taxpayers of new online services; sets the time frames for when the online services would be created and available for taxpayer use; and includes a plan to update the strategy periodically.
We provided a draft of this report to the Commissioner of Internal Revenue. In written comments on a draft of this report (which are reprinted in app. VIII), the IRS Deputy Commissioner for Services & Enforcement agreed with three of our four recommendations. IRS agreed that automating the ability to locate TAC and VITA sites could enhance service and convenience, but said that resources are not currently available to support it. We acknowledge that IRS is facing tough choices in an environment of constrained resources.
However, we recommended that IRS assess the costs and benefits of automating the TAC/VITA location telephone lines, and automate these lines if the benefits exceed the costs. Since 2007, IRS telephone service has continued to suffer and we believe that a rigorous assessment of the costs and benefits of automating the TAC/VITA telephone line will give IRS better information on how to allocate scarce resources. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We will also send copies to the Commissioner of Internal Revenue, the Secretary of the Treasury, the Chairman of the IRS Oversight Board, and the Director of the Office of Management and Budget. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IX. Table 7 summarizes the Internal Revenue Service’s (IRS) 13 areas of existing math error authority (MEA). As early as the first codification of the Internal Revenue law in 1926, Congress granted IRS MEA so that IRS does not have to provide taxpayers with a statutory notice of deficiency for math errors. 
A 1976 statutory revision defined the authority to include not only mathematical errors, but other obvious errors, such as omissions of data needed to substantiate an item on a return, and provided a statutory right to file a request for abatement of the assessment within 60 days after the notice is sent. In the 1990s, Congress extended the authority multiple times, and more recently it has added other provisions to help determine eligibility for certain tax exemptions and credits, such as the First-Time Homebuyer Credit. For almost a century, Congress has been expanding the Internal Revenue Service’s (IRS) math error authority (MEA) on a case-by-case basis. In 2010, we suggested that authorizing the use of MEA on a broader basis with appropriate controls to protect taxpayer rights could help IRS address compliance problems with newly created tax credits. In the absence of broader MEA, we have also suggested that Congress expand MEA for more limited purposes as shown in table 8. The Treasury Inspector General for Tax Administration (TIGTA) also made several recommendations to IRS to request expanded MEA from Congress. Table 9 summarizes the key systems the Internal Revenue Service (IRS) uses to conduct prerefund compliance checks and process tax returns. As shown below in table 10, the Internal Revenue Service (IRS) met all of its processing performance goals in 2011. The Internal Revenue Service (IRS) receives millions of phone calls each year, some of which are answered by live assistors and some of which are handled through automated services, as figure 5 shows. In 2011, IRS received 83 million calls as of June 30 compared to about 57 million through the same date in 2007. Figures 6 through 8 show the Internal Revenue Service’s (IRS) process for conducting prerefund compliance checks for all individually filed tax returns, returns filed electronically, and returns filed on paper, respectively. 
An interactive graphic illustrating this process is shown in figure 2 of this report. In addition to the contact named above, Joanna Stamatiades, Assistant Director; Steven J. Berke; Abbie David; David Fox; Tom Gilbert; Matt Johlie; Inna Livits; Kirsten Lauber; Karen O’Conor; and Sabrina Streagle made key contributions to this report.
The tax filing season is an enormous undertaking in which the Internal Revenue Service (IRS) processes millions of tax returns, issues billions of dollars in refunds to taxpayers, corrects taxpayers' errors, and provides service to millions of taxpayers by telephone, website, and face-to-face assistance. Among other things, GAO was asked to assess (1) IRS's performance in processing returns, issuing refunds, and providing telephone assistance, and (2) IRS's plans to expand self-service options on its website. To conduct the analyses, GAO obtained and compared data from 2007 through 2011, reviewed IRS documents, interviewed IRS officials, observed IRS operations, and interviewed tax-industry experts, including from tax preparation firms. During the 2011 filing season, the following occurred: Electronic filing (e-filing) increased to nearly 80 percent of the 140 million individual returns filed. E-filing is more accurate, faster, and less expensive for IRS than processing returns filed on paper. Due to the increase in e-filing, new systems, and IRS's performance in recent years, its refund timeliness measure and goal are outdated. The measure relates only to the 22 percent of returns filed on paper. IRS's goal is to issue refunds for paper-filed returns within 40 days. In 2012, IRS expects to issue most refunds within 4 to 6 days of processing a return (paper and e-filed), meaning the current goal does not reflect current performance and capabilities. The percentage of callers seeking live assistance who received it remained much lower than in 2007, and the average wait time for callers continued to increase. Providing live telephone assistance is expensive. However, IRS can shift some assistor-answered calls to less costly tools.
Two such opportunities include creating self-service phone lines for taxpayers seeking to identify (1) the status of their amended return—a source of high call volume—and (2) the location of a Taxpayer Assistance Center (TAC) or Volunteer Income Tax Assistance (VITA) site, where IRS employees and volunteers, respectively, prepare returns. IRS officials expect the benefits of the amended return line to exceed the costs, but have not studied the costs and benefits of adding a TAC/VITA locator line. The use of IRS's website is growing, particularly the number of searches, which IRS officials attribute, in part, to taxpayers having difficulties locating information. Having an easily searchable website is important for IRS because it reduces costly phone calls. IRS has begun spending a planned $320 million on its website over 10 years. However, IRS's initial strategy for providing new self-service tools online does not include allowing taxpayers to access account information and is missing fundamental elements, including a justification for new services and time frames. Adding these elements would provide Congress and taxpayers with a better understanding of the online services IRS plans to provide with its significant website investment. GAO recommends that IRS develop a new refund timeliness performance measure to better reflect current capabilities; create an automated telephone line for taxpayers seeking information about amended returns, unless IRS has a convincing cost-benefit analysis suggesting the costs exceed the benefits; assess the costs and benefits of automating a TAC/VITA locator line; and finalize a strategy for determining which self-service tools to provide on its website. IRS agreed with three of GAO's recommendations, but said that resources are not available to automate the TAC/VITA line. GAO believes a review of the costs and benefits would better inform IRS decisions about how to allocate scarce resources.
Under tax law, married couples who file joint tax returns are treated as a single unit, which means that each spouse becomes individually responsible for paying the entire amount of the tax associated with his or her joint return. Accordingly, an “innocent spouse” can be held liable for tax deficiencies assessed after a joint return was filed, even if those liabilities were solely attributable to the actions of the other spouse. However, if certain conditions are met, the innocent spouse may be able to obtain relief from the tax liability. Prior to the Restructuring Act, relief was available to taxpayers but under rather restrictive conditions, such as that certain dollar thresholds for tax underpayments first be met. The Restructuring Act revised the conditions for obtaining innocent spouse relief to make it easier for taxpayers to qualify. The act liberalized the former conditions and added new conditions. Simply stated, the three basic provisions related to innocent spouse relief are as follows: When the innocent spouse had no knowledge that there was an understatement of tax attributable to erroneous items of the other individual filing the joint return, and considering all facts and circumstances, it would be unfair for IRS to hold the innocent spouse liable for the tax. When the innocent spouse otherwise qualifies, he or she may request that the tax deficiency from a jointly filed return be recalculated to include only items allocable to him- or herself. When the tax shown on a joint return was not paid with the return, the innocent spouse may obtain “equitable relief” if he or she did not know that the funds intended to pay the tax were not used for that purpose. Equitable relief is also available for understatements of tax for which relief under the above two conditions was not available. Each of these three conditions has different eligibility requirements and different types of relief. 
Appendix III describes in more detail the eligibility requirements for each condition and the factors that IRS is to weigh in deciding whether to grant or deny relief. Relief is generally available to taxpayers for liabilities arising after July 22, 1998, the date that the law was enacted, and for liabilities that arose before that date but remained unpaid as of that date. Limited data exist to determine the trend in innocent spouse workload immediately following passage of the Restructuring Act. IRS did not systematically track innocent spouse cases until March 1999, about 8 months after the act was passed. Prior to the Restructuring Act, IRS administered innocent spouse relief as part of its process for examining tax returns and did not keep statistics on the number of cases in which innocent spouse relief was requested or on the disposition of those requests. Nevertheless, according to IRS, because taxpayers were anticipating passage of the Restructuring Act, innocent spouse requests increased from a few cases to about 750 cases in each of the 4 months leading up to the act. During fiscal year 2000, it received, on average, 4,800 cases per month. IRS processes innocent spouse cases on the basis of proposed regulations issued in January 2001, which set forth the basic guidelines that its examiners are to use in evaluating taxpayers’ cases to determine whether to grant or deny relief. IRS’s Wage and Investment (W&I) Division is responsible for managing this program. Under procedures adopted in fiscal year 2001, virtually all innocent spouse cases are to be processed by correspondence at IRS’s Centralized Innocent Spouse Operation (Cincinnati processing site) in Covington, Kentucky. Generally, only those cases needing face-to-face contact or that arise in the field are to be handled by field staff, generally tax compliance officers and revenue agents in IRS’s Small Business/Self-Employed (SB/SE) Division. 
As discussed later in this report, IRS is phasing in new W&I field staff to work cases needing face-to-face contact with taxpayers. Staff at the Cincinnati processing site screen the incoming cases to determine whether they meet the basic eligibility requirements for further processing. These requirements include, among other things, verifying that a joint tax return was filed, that an outstanding tax liability exists, and that the request is for the appropriate tax year. Any request that does not meet the basic requirements is to be judged ineligible for further review and closed through written notification to the taxpayer of the reasons for IRS’s decision. Any case that meets the basic eligibility requirements is to be assigned to an examiner to further review the merits of the taxpayer’s request for relief. IRS is required by law to attempt to contact the other taxpayer who signed the joint tax return, to give him or her an opportunity to participate in the case. IRS generally allows 30 days for the nonrequesting spouse to respond. If a taxpayer files a claim for innocent spouse relief covering more than one tax period or year, IRS evaluates the merits of the claim for each tax year individually to determine whether relief should be granted. Therefore, the claim for each tax year is counted as a separate case. Based on the merits of an individual claim, IRS grants a taxpayer full relief, partial relief, or no relief. The examiners evaluate the facts and circumstances of each case and ultimately decide whether full, partial, or no relief should be granted. IRS is required by the Restructuring Act to notify the requesting spouse of its decision on each case and to inform him or her of the right to file an appeal with IRS’s Office of Appeals within 30 days. 
If the taxpayer does not file an appeal with IRS, or after the appeal is settled, IRS is to send a final determination letter to the requesting spouse and, as required by law, advise the individual of his or her right to appeal IRS’s decision to a federal court within 90 days. IRS’s decision on a case becomes final after the taxpayer exhausts all rights to an IRS appeal or a court review or waives these rights and accepts IRS’s decision. At that time, IRS is required to notify the nonrequesting spouse of the final result. To close a case after relief is finally approved, IRS must separate and transfer taxes from the taxpayers’ joint tax account in the amount of the approved relief. IRS procedures require that its staff establish a separate, individual tax account for the taxpayer who was judged responsible for the tax liability and transfer the tax liability to that account. Any joint tax liability that is not part of the relief granted remains a liability of both taxpayers that IRS may collect from either. To assess IRS’s efforts to ensure that innocent spouse cases were being processed in a timely, accurate, and consistent manner, we reviewed W&I and SB/SE planning documents, ISTS data on the program’s performance, and data from the innocent spouse quality review program, which analyzes samples of closed cases for adherence to procedures and accuracy of decisions. We interviewed IRS’s innocent spouse project manager and staff to obtain information on how the program was managed, and we obtained relevant documentation on the program, including its procedures, policies, and guidance. To further assess the management of the program, we relied on our past reports on managing organizational performance, IRS’s guidance regarding performance management, and other management literature, including the Government Performance and Results Act. We reviewed reports by TIGTA and IRS’s taxpayer advocate that addressed innocent spouse management issues. 
We also analyzed the assumptions that IRS used in projecting inventory and staffing levels. To determine whether IRS’s efforts to process cases timely, accurately, and consistently were resulting in changes in program performance, we also obtained and analyzed ISTS data from March 1999 to December 2001 regarding the number of innocent spouse cases that IRS received and resolved and their average case-processing times. We performed limited accuracy checks on the ISTS database. In September 2001, TIGTA recommended that IRS strengthen its controls over data in the ISTS. IRS subsequently implemented corrective actions to help ensure the accuracy and validity of the ISTS data, including correcting data from prior years. The database that we used reflected these corrections. We did not test the reliability of the other IRS databases used in our analysis—the Examination Case Reporting System and the Work Planning and Control System—that maintain data on staff hours applied to examination programs, including innocent spouse case processing. We analyzed the direct staff hours that IRS staff charged to conduct case evaluations for fiscal years 2000 and 2001. IRS did not have complete information for earlier periods. To assess the adequacy of IRS’s procedures to transfer liabilities between taxpayers when relief was granted, we reviewed the procedures and related guidance such as training materials and policy memoranda. We discussed the procedures and related guidance with the innocent spouse project manager, managers at the Cincinnati processing site, and Customer Account Services staff who are responsible for overseeing the process of transferring tax liabilities between tax accounts. We also compared our published guidance on internal control management for federal agencies with IRS’s procedures. We observed the process that IRS had in place at the Cincinnati processing site for transferring tax liabilities from taxpayers’ joint accounts to their individual accounts. 
We discussed IRS’s procedures with taxpayer advocate service staff to determine whether they received any complaints from taxpayers that IRS had incorrectly transferred liabilities in their innocent spouse cases. To assess IRS’s efforts to evaluate the usefulness of its Innocent Spouse Program Web site to taxpayers, we reviewed guidance from selected academic and industry experts on assessing Web sites and we interviewed IRS’s Electronic Tax Administration and Innocent Spouse Program officials. We also examined IRS’s Innocent Spouse Web site to determine the type and content of information available to taxpayers. To determine the number and disposition of innocent spouse cases filed in U.S. federal courts, we obtained information from staff at Treasury’s Office of Chief Counsel that identified court decisions on innocent spouse cases compiled from the LexisNexis database for the period June 1996 to June 2001. The compilation excluded those cases that addressed only procedural issues such as whether a court had jurisdiction to hear the case. We also obtained information that was compiled from IRS’s Office of Appeals database on innocent spouse cases scheduled for trial that were settled by the Office of Appeals or Treasury’s Office of Chief Counsel from fiscal year 1999, when IRS began tracking these data, through May 2001. We performed our work with IRS’s W&I Division staff and the National Taxpayer Advocate Service’s office at IRS’s national headquarters in Washington, D.C. We also met with W&I Division staff at the innocent spouse Cincinnati processing site in Covington, Kentucky; IRS’s SB/SE Division office in Atlanta, Georgia; and Treasury’s Office of Chief Counsel in Washington, D.C. We did our work between May 2001 and February 2002 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the commissioner of internal revenue. 
We received written comments from the commissioner in a letter dated April 18, 2002. The comments are reprinted in appendix VII and are discussed at the end of this letter. To better ensure timely, accurate, and consistent processing of innocent spouse cases, IRS officials developed and implemented several initiatives. Although the specific contribution of each initiative to improved results is not clear, IRS’s initiatives, in total, have increased its ability to handle innocent spouse cases more quickly and at lower cost while maintaining or improving the accuracy of relief decisions as measured by IRS’s quality review program. The initiatives contributed, along with a decrease in cases received, to IRS’s reaching decisions on more cases than it received in fiscal year 2001. The average times to reach decisions and close cases continued to increase in fiscal year 2001 but should stabilize and then decline in future years. The principal initiatives that IRS undertook to improve its management of the innocent spouse case workload included centralizing case processing within one W&I location—its Cincinnati site in Covington, Kentucky—and bringing more of the program staff under the project manager’s direction; developing an automated decision-making and case-building tool; developing a model to estimate future workload and staffing needs and monitor program performance; and measuring the quality of case decision making, including adherence to procedures and the accuracy of decisions. Although these initiatives, taken as a group, have improved IRS’s ability to process cases and promote quality decision making, IRS has not established a set of balanced performance measures and performance targets for the program. Without such measures, IRS cannot be sure that these changes are having the desired results and are not creating unintended consequences. 
Balanced measures and performance targets are integral parts of IRS’s performance management system that are intended, in part, to better ensure that program performance does not overly focus on one area of program performance to the detriment of others. In anticipation of the Restructuring Act’s passage, IRS decided in April 1998 that innocent spouse cases should be handled in a central processing site. IRS officials believed that centralization would facilitate more rapid and consistent processing of cases because staff in the centralized processing site would specialize in innocent spouse cases and follow consistent procedures and processes in resolving them. Although IRS was unable to fully implement the decision to centralize processing in the years immediately following the act’s passage, over time IRS made considerable progress in doing so. IRS does not have good data on staff usage before fiscal year 2000, but in fiscal year 2000, staffing totaled 887 full-time equivalents (FTE) with 768 FTEs (about 86 percent) coming from SB/SE field staff and 119 FTEs coming from W&I staff in the centralized processing site. As table 1 shows, however, in fiscal year 2001, IRS increased the FTEs for W&I’s centralized processing and decreased the field staffing. IRS projects that by fiscal year 2003, about 70 percent of the FTEs used in processing innocent spouse cases will be in W&I’s centralized processing site. In addition to increasing its centralized staffing levels, IRS has improved the capability of the staff. IRS upgraded some examiner positions from grade level GS-7 to GS-8, owing to concerns about attrition, and trained some examiners to specialize in working complex cases that would previously have been sent to field offices. For instance, some of the Cincinnati examiners have been trained to handle cases involving bankruptcies. 
By the end of fiscal year 2001, IRS had 157 employees at the Cincinnati processing site, with 56 employees examining cases and 30 employees screening cases to determine whether they met the basic eligibility requirements for relief. The remaining employees were supervisors, case quality reviewers, and clerical staff. In general, the staff assigned to process cases in the centralized processing site are at lower grades—predominantly GS-8—than are the SB/SE staff processing innocent spouse cases in the field. SB/SE staff have generally been graded as GS-9 through GS-13, with most field staff tending to be graded as GS-12. Thus, as IRS has shifted the processing of more cases to Cincinnati, it has also lowered the salary structure of the staff processing the cases. Beginning in fiscal year 2002, IRS is using W&I Division taxpayer resolution representatives (TRRs) to process field cases requiring face-to-face contact with the taxpayers. TRRs are to perform a variety of services at IRS field locations throughout the country. By using the W&I Division’s TRRs, the innocent spouse project manager hopes to further reduce reliance on the SB/SE Division’s field staff. Program officials said that their lack of control over SB/SE field staff was one reason why field cases take longer to resolve than cases processed centrally. IRS expects that TRR involvement in the program will be minimal, as the agency expects to process 95 percent of the cases centrally during fiscal year 2003. As of February 2002, IRS projected that 11 TRR FTEs will be used in the Innocent Spouse Program in fiscal year 2002 and only 3 FTEs in fiscal year 2003. To increase the accuracy and consistency of examiners’ decisions about granting relief to innocent spouses, to better ensure that an adequate case file supports each decision, and to speed case processing, IRS developed an “integrated case processing” system (ICP) for innocent spouse cases. 
The ICP, which was implemented in January 2001 at the Cincinnati processing site, uses a computer program to direct examiners through a series of questions leading to a decision about what, if any, relief is due to the taxpayer. The algorithm was designed to capture all of the factors that must be considered in making these determinations. The ICP also automatically prompts the examiner to create a documented case file so that IRS can be better assured that examiners’ decisions are properly supported. The ICP is intended to increase the accuracy and consistency of determinations, since it is designed to help ensure that examiners consider all pertinent aspects of a taxpayer’s case in accordance with the law. The ICP was expected to increase the speed of case processing, because, among other reasons, examiners would have all of the criteria for decision making available on-line and the structured process for guiding decisions should result in fewer examiner errors. IRS is planning future enhancements to the ICP that would make it easier for examiners to access and update taxpayer data. IRS plans to make the ICP available to the field office TRRs to better ensure the accuracy and consistency of their determinations. IRS developed an inventory model in April 2000 to enhance its ability to manage staff resources and the inventory of innocent spouse cases. IRS uses the model to estimate the numbers of staff that it will need in the field and at the Cincinnati processing site to process enough cases to reach a targeted inventory level at the end of the fiscal year. The model helped IRS to gauge the amount of progress that it could make in reducing its inventory of cases, assuming differing mixes in the numbers of staff available in Cincinnati and the field. According to the innocent spouse project manager, the model provides a reasonable basis for planning and leads to improved staff allocations, but it is not expected to be precise. 
The project manager said that the projected case closures that are derived from the model estimates become part of the W&I Division’s business and operating plans for the given fiscal year. The model begins with the existing inventory, adds projections of new cases expected to be received during the period in question, and estimates the number of cases that will be in inventory at the end of the period, given assumptions about the number of staff who will be available and their productivity in handling cases. IRS’s estimates of new cases likely to be received are based largely on prior experience and professional judgment. Staff requirements are projected on the basis of the percentage of cases that IRS estimates will be processed centrally versus in the field and on the types, numbers, and productivity of staff at these locations. For example, in fiscal year 2001, IRS estimated that the examiners at the centralized site could process about 14 cases per week, spending about 2.9 hours per case; this estimation was based in part on data from IRS’s Work Planning and Control System and assumed productivity increases. For cases processed in the field, IRS used actual time from IRS’s Examination Case Reporting System that showed that tax compliance officers were processing a case in about 5.6 hours and revenue agents were processing a case in about 13 hours. IRS estimated that the new TRRs would require about the same amount of time to process a case as do tax compliance officers. To test the functioning of the model, we analyzed the assumptions in IRS’s model as of November 2001 against recent performance data to confirm whether IRS would be likely to reduce its inventory of cases and reach the inventory level that it had projected for the end of fiscal year 2003. Our analysis showed that examiners at the centralized site had not been as productive as IRS believed. 
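The model's basic arithmetic can be sketched as follows. The accounting identity (ending inventory equals beginning inventory plus projected receipts minus projected closures, with closures driven by staff levels and productivity) is described above; the function, parameter names, and example numbers below are ours and are illustrative only, not IRS's actual model or projections.

```python
# A minimal sketch of the inventory projection described above.
# The identity (ending = beginning + receipts - closures) comes from
# the report; everything else here is a hypothetical illustration.

def project_ending_inventory(beginning_inventory, projected_receipts,
                             staff_plan):
    """staff_plan: list of (ftes, cases_closed_per_fte) pairs,
    one pair per staff type or location."""
    closures = sum(ftes * rate for ftes, rate in staff_plan)
    return beginning_inventory + projected_receipts - closures

# Hypothetical example: two staff groups with different productivity,
# e.g., centralized examiners vs. field staff.
ending = project_ending_inventory(
    beginning_inventory=50_000,
    projected_receipts=48_000,
    staff_plan=[(100, 600), (20, 150)],
)
print(ending)  # 35000
```

As the report notes, IRS revises such projections routinely as new data arrive on case receipts and staff productivity; in this sketch that corresponds simply to rerunning the function with updated inputs.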
We determined that IRS would have to add the equivalent of 16 FTEs or become about 17 percent more efficient to achieve its projected ending inventory level for fiscal year 2003. Our analysis did not consider the productivity levels of field staff who are projected to close about 5 percent of cases. We recognize that other factors may also affect IRS’s ability to meet its targeted inventory levels, such as unexpected changes in the volume of new cases, the proportion of cases processed centrally versus in the field, and the productivity of field staff. IRS revises the projections from its model routinely as new data show changes in the volume of cases being received and in the productivity of staff. Subsequent to our test of the model, IRS revised its inventory projection using lower assumptions about examiners’ productivity. Table 2 shows IRS’s inventory projections for fiscal years 2002 and 2003 as of February 2002. As shown in the table, the inventory is expected to drop 43 percent during fiscal year 2002—from 52,093 cases to 29,810 cases. In June 1999, IRS established a process for reviewing closed innocent spouse cases from all locations to help ensure that high-quality case decisions were being made. Beginning in July 2000, IRS’s research staff developed statistically valid sampling plans for each location on the basis of projected annual case receipts. Staff assigned to the quality review function assess the sampled cases using quality standards developed especially for the review. The standards are structured to evaluate whether examiners have followed all of the required processes for an innocent spouse case as well as whether the decision made by the examiner was correct. 
Adherence to process requirements is reviewed both to ensure compliance with legally required procedures, such as notifying the nonrequesting spouse and giving the individual an opportunity to participate in the case, and because adherence to required process steps is expected to lead to better IRS decisions about the requested relief. The accuracy of decisions is reviewed to provide IRS data on the accuracy and consistency of decisions made in the diverse offices handling innocent spouse cases. Originally, the quality review staff included SB/SE Division revenue agents at grade levels GS-11 through GS-13. Starting in fiscal year 2002, IRS relieved SB/SE of the quality review function and now staffs the review with experienced W&I Division grade-level GS-9 examiners, who are to periodically rotate to the quality review function from the Cincinnati Innocent Spouse Program staff. Overall, IRS’s innocent spouse case quality was maintained between fiscal years 2000 and 2001, as shown in table 3. In the first quarter of fiscal year 2002, however, the quality review results reflected better performance. During that quarter, the quality review staff agreed with 100 percent of the decisions to grant and deny relief made for the sample of cases from the centralized processing site. For all field locations combined, the reviewers agreed with 93 percent of the decisions made. However, owing to the small sample size, the quarterly results may not be indicative of results over a longer period. The results from the first quarter of fiscal year 2002 are the first quality review data that accounted for IRS’s use of automation in case processing and for the staffing enhancements at the centralized processing site. The effect of each of the individual IRS initiatives to process innocent spouse cases in a more timely, accurate, and consistent manner is difficult to separate and quantify. 
However, taken as a whole, these initiatives have enabled IRS to reduce its inventory of undecided innocent spouse cases while maintaining or improving the quality of its decisions as measured by the quality review program. The decline in cases received during fiscal year 2001 also contributed to IRS’s ability to reduce its inventory of cases. Table 4 shows that for the first year since the Restructuring Act was passed, IRS reached a decision on more cases than it received. IRS decided 61,423 cases in fiscal year 2001, or about 21 percent more than the 50,840 cases it received, reducing some of the backlog from previous years. IRS received about 12 percent fewer cases in fiscal year 2001 than in fiscal year 2000, which contributed to its ability to reach decisions on more cases than it received. Appendix VI shows the disposition of resolved cases for fiscal years 1999 through 2001. However, the decline in cases received does not fully account for IRS’s progress in reducing its inventory of undecided cases; IRS also decided more cases per staff year. This increase in productivity appears to be largely attributable to IRS’s strategy of centralizing case processing in Cincinnati, but it may be partly due to the use of the ICP and to general improvements in how IRS handles innocent spouse cases. As table 5 shows, IRS has shifted an increasing portion of cases to the Cincinnati processing site. Because Cincinnati staff reach decisions on cases about four times faster than field office staff, IRS has realized an overall gain in productivity, which rose, on average, from 60 cases per FTE to 82 cases per FTE, or by 37 percent, between fiscal years 2000 and 2001. In fiscal year 2000, field office examiners took, on average, about 11.7 hours to make a determination on a case, compared with about 3.1 hours per case for examiners at the Cincinnati processing site. 
Similarly, in fiscal year 2001, field office examiners took, on average, about 10.5 hours per case, compared with 2.5 hours per case at the Cincinnati processing site. Because, by the end of fiscal year 2003, IRS expects to shift 95 percent of innocent spouse cases to the Cincinnati processing site, where cases are processed faster, additional gains in overall productivity are expected. These projected productivity gains, coupled with IRS’s expectation that new case receipts will remain fairly close to the volume received in 2001, result in the significant estimated reduction in total staffing for the Innocent Spouse Program shown in table 1. If this reduction is achieved, IRS will have reduced overall staffing for the Innocent Spouse Program by 75 percent between fiscal year 2000 and fiscal year 2003 and will have redirected hundreds of tax compliance officers and revenue agents to their traditional duties. To some extent, IRS also was able to reach decisions on more innocent spouse cases than it received in fiscal year 2001 because an increasing portion of the innocent spouse cases it receives are not eligible for relief. As a percentage of cases received, those determined to be ineligible rose from about 13 percent in fiscal year 1999 to 45 percent in fiscal year 2000 and to 56.5 percent in fiscal year 2001. Most requests for innocent spouse relief that are not eligible—for instance, because the taxpayers did not file a joint return in the year for which relief is requested—are identified during screenings at the Cincinnati processing site. Officials estimated that, on average, staff who screen cases at the Cincinnati site need about 30 minutes per case to determine whether a taxpayer’s request meets the basic eligibility requirements. 
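The expected gain from shifting the case mix toward the centralized site can be roughly illustrated with the fiscal year 2001 hours-per-case figures cited above. The weighted-average formula is our simplification, not IRS's model; the 2.5- and 10.5-hour figures and the 95 percent share are from the report, while any other mix used with the function is hypothetical.

```python
# Illustrative only: blended average hours per decided case as a
# function of the share of cases processed at the centralized site.
# Hours-per-case values are the fiscal year 2001 figures cited in the
# report; the weighting is our simplification.

CENTRAL_HOURS_PER_CASE = 2.5   # Cincinnati processing site, FY 2001
FIELD_HOURS_PER_CASE = 10.5    # field offices, FY 2001

def blended_hours_per_case(central_share):
    """Weighted average of central and field processing times."""
    return (central_share * CENTRAL_HOURS_PER_CASE
            + (1 - central_share) * FIELD_HOURS_PER_CASE)

# At IRS's fiscal year 2003 target of 95 percent central processing:
print(round(blended_hours_per_case(0.95), 2))  # 2.9
```

At the 95 percent target, the blended time falls to about 2.9 hours per case, close to the centralized site's own rate, which is consistent with the report's expectation of additional overall productivity gains.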
In general, 80 percent or more of the cases found to be ineligible for relief are identified during case screening; the remainder are found to be ineligible during case processing either at the Cincinnati site or by field staff. Although, in general, determining whether a case is ineligible does not require a significant amount of IRS time, agency officials are concerned about the portion of cases received that do not meet basic eligibility requirements. As a result, they have made revisions to forms and attempted to better inform tax practitioners of the innocent spouse eligibility requirements. In addition, IRS’s Web site, which is discussed later in this report, includes information on the Innocent Spouse Program that is intended to help taxpayers determine their eligibility. Appendix IV provides more information on ineligible cases. The improvements that IRS has realized in handling innocent spouse cases and in reducing its inventory of undecided cases have occurred while the agency has maintained or increased the accuracy of relief decisions. As the data in table 3 illustrate, IRS’s reviewers have concurred with the case decision in a growing proportion of cases over time. Moreover, although the results cover only the first quarter of fiscal year 2002, the quality review staff agreed with 100 percent of the sampled case decisions from the centralized processing site. This was the first quality measurement that reflected use of the ICP system and staffing enhancements at the centralized site. IRS was not successful through fiscal year 2001 in reducing the average time to reach a decision on whether relief would be granted or to reach closure on cases, including all required notices, appeals, and transfers of taxpayers’ liabilities when full or partial relief was granted. 
Table 6 shows that average times to decide and close cases continued to increase in the field offices and at the Cincinnati processing site through fiscal year 2001. If IRS is successful in reducing the inventory of cases by 43 percent during fiscal year 2002 as shown in table 2, the average case-processing time likely will stabilize or begin to decline. As a general rule, IRS processes the older cases in its inventory before it processes the newer cases. Thus, a significant reduction in inventory would disproportionately draw from the oldest cases. As these older cases are cleared out, and if IRS succeeds in processing as many cases as it receives in a year—that is, the maintenance level of inventory that the project manager would like to achieve—the average time for each case processed should decline. Further, because IRS estimates that 95 percent of the cases received will be processed at the Cincinnati processing site by the end of fiscal year 2003, and because Cincinnati’s times for deciding cases are less than half the times for cases decided in the field, as shown in table 6, case-processing times should begin to fall toward the shorter times achieved at the centralized site. Appendix V provides information on the average number of days for IRS to come to a decision on a case. As part of its strategic planning process, IRS has instructed divisions, operating units, and lower levels of the organization to implement management practices that will help IRS support its strategic goals of top-quality service to each taxpayer in every interaction, top-quality service to all taxpayers through fairness, and productivity through a quality work environment. IRS views balanced measures as its primary means for assessing organizational performance. The three balanced measures—customer service, employee satisfaction, and business results—are to be considered when setting objectives, establishing goals, assessing progress and results, and evaluating performance. 
Business results measures are to reflect both quantity and quality. W&I stated in its October 2001 business performance review guidance that one of the keys to meeting future division objectives is the use of balanced measures to achieve target levels of performance at lower levels within the division. The performance measures are to be aligned with the strategic goals that the programs support. Other IRS management guidance such as IRS’s Strategic Planning, Budgeting, and Performance Management Process Manager’s Guide instructs programs that report to operating units, such as divisions, to develop measures, with designated performance targets, for use in evaluating progress toward achieving IRS’s mission and long-term goals. Performance measures and associated target performance levels that are explicitly stated and that conform to IRS’s goals form the basis for communicating desired outcomes to program staff. Further, such measures and targets form the basis for assessing progress, identifying and addressing performance shortfalls, and holding managers and staff accountable for achieving results. The W&I strategic plan has only one explicit performance measure for the Innocent Spouse Program, the number of cases closed in a fiscal year, which reflects only the business results quantity component of IRS’s balanced approach to measuring performance. Data on the number of cases closed provide some useful information on the program’s performance. However, this one measure fails to address other dimensions of performance. For example, it does not address timeliness and quality, which relate both to business results and to the customer satisfaction component of IRS’s balanced approach. Striving to achieve a specific number of case closings in a year could come at the expense of higher-quality decisions, which is why IRS’s performance management system stresses the need for balanced performance measures. 
Although timeliness and quality measures and targets have not been adopted for the Innocent Spouse Program, the W&I strategic plan for fiscal years 2001 to 2003 recognizes that a measure of timeliness is needed and states that IRS is to establish such a measure. In the meantime, the agency is gathering data that can be used in developing such measures and targets. Within its ISTS, IRS already flags innocent spouse cases that have remained unusually long in any processing stage. Beginning in fiscal year 2002, for all cases that are processed centrally, IRS will begin recording in ISTS the actual staff time used to close each case. In addition, IRS has developed estimates of the time to process a case under optimal circumstances. Program officials told us that the estimated times are used for benchmarking actual processing times but are not goals. The records of staff time to close each case and initial efforts to define benchmarks for case timeliness should therefore provide data that IRS can analyze in developing an appropriate performance measure, as well as a desired target level, for case-processing timeliness. IRS had a performance measure for case quality as well as target levels of acceptable quality, but the agency recently dropped its performance target. IRS collects information on the quality of innocent spouse case determinations that derives from its quality review process, which samples closed cases. In fiscal years 2000 and 2001, IRS’s quality goals for the Innocent Spouse Program were that the reviewers would concur with the examiners’ decisions in 85 and 90 percent, respectively, of the sampled cases. In January 2002, the project manager told us that the program’s goal should be the attainment of each quality standard for all cases and that IRS would no longer specify a quality goal as a percentage of cases meeting the quality standards. 
Accordingly, although IRS will continue to measure case quality, it no longer plans to include a specific performance measure and performance target for the quality of innocent spouse cases in its strategic or operating plans. In discussing the program’s performance measures and targets, the project manager said that the Innocent Spouse Program is small in comparison with other IRS programs and that consideration needs to be given to how much effort should be expended in developing performance measures and targets. Because data are being collected that could be used in developing timeliness and case quality performance measures and targets, the required effort should not be too great. In March 2002, program officials told us that by the end of fiscal year 2003, IRS plans to have collected survey data relating to innocent spouse customer satisfaction that will enable it to develop related performance measures and targets. The officials said that these data on customer satisfaction, along with existing data on business results—including case quality measures—and employee satisfaction, would position IRS to develop a set of balanced performance measures and targets for the Innocent Spouse Program. IRS procedures for transferring tax liabilities in innocent spouse cases conform to federal guidance for ensuring accurate and complete information processing. The procedures, if adhered to, should preclude erroneous transfers of liabilities between spouses as well as the sending of collection notices to the innocent spouse for liabilities that have been transferred to the other spouse. To ensure accurate and complete processing, federal guidance on internal controls for managing information processing advises agencies to include in their procedures a variety of controls tailored to their systems. The guidance advises agencies to employ a combination of actions such as edit checks for controlling data entry and reconciliation of account totals. 
For authorization control, the guidance advises agencies to use key source documents with authorizing signatures, batch control sheets, and independent reviews of data before the data are entered into the system. The guidance instructs agencies to design their data entry processes to provide for editing and validating data and for output reports. IRS’s procedures for transferring tax liabilities in innocent spouse cases employ a variety of processes to control for accuracy and completeness. For example, IRS requires that the document with the final authorizing signature approving the taxpayer’s request for relief be the key document used to start the transfer process. The procedures require that an employee who was not involved in deciding the case perform edit checks and verify the information on the approval document by comparing it with the taxpayer’s account information. Further, IRS procedures require that a worksheet be prepared to document, verify, review, and reconcile the accuracy of account adjustments before any changes are made to the tax accounts. The worksheet is required to include specific instructions for the data entry personnel to use when making the tax account adjustments, such as the exact dollar amount of tax liability to be transferred from the joint account to a separate, individual account and the specific transaction codes required by IRS’s information systems to enter the changes. Staff are required to include documentation in the case files as evidence that the required tasks were completed. The procedures require that Accounting staff from a separate unit verify, reconcile, and record tax account adjustments in a journal before they are made. After the adjustments are made, Customer Account Service staff are to certify that the adjustments were made in accordance with IRS’s guidelines. 
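The kinds of edit checks and reconciliations described above can be illustrated with a minimal sketch. The field names, transaction code, and dollar amounts below are hypothetical placeholders chosen for illustration; they do not reflect IRS's actual systems or codes.

```python
# Hypothetical illustration of the controls described above: before an
# adjustment is posted, it is compared with the approved worksheet (an edit
# check), and the debit to the joint account must equal the credit to the
# separate individual account (a reconciliation). All names are placeholders.

def validate_transfer(worksheet, adjustment):
    """Return a list of discrepancies; an empty list means the adjustment
    reconciles with the approved worksheet and is ready for posting."""
    problems = []
    if adjustment["amount"] != worksheet["approved_amount"]:
        problems.append("amount does not match approved worksheet")
    if adjustment["transaction_code"] != worksheet["transaction_code"]:
        problems.append("transaction code does not match worksheet")
    if adjustment["debit_joint"] != adjustment["credit_individual"]:
        problems.append("joint debit and individual credit do not reconcile")
    return problems

worksheet = {"approved_amount": 4200.00, "transaction_code": "TC-XXX"}
adjustment = {"amount": 4200.00, "transaction_code": "TC-XXX",
              "debit_joint": 4200.00, "credit_individual": 4200.00}
print(validate_transfer(worksheet, adjustment))  # [] -> ready for posting
```

A check like this mirrors the requirement that an employee not involved in deciding the case verify the approval document against the account information before any changes are made.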
On the basis of our review of IRS’s procedures and our inspection of the process used at the Cincinnati processing site for transferring tax liabilities when innocent spouse relief was approved, we concluded that the procedures conform to the federal guidance for managing information processing. If IRS staff reconcile, edit, and verify the account information as they are required to do, the adjustments to transfer the tax liabilities should be made correctly. However, we did not independently test a sample of closed cases, and therefore we cannot determine the extent to which IRS staff actually performed the required control activities. The taxpayer advocate concurs that IRS’s procedures to transfer tax liabilities in innocent spouse cases appear to be adequate and should resolve the past problems of monitoring taxpayers’ accounts after the liabilities were transferred. IRS has established a Web site for its Innocent Spouse Program but has not evaluated the site. As a result, IRS does not know how useful the site is to taxpayers or what effect, if any, it has had on lessening taxpayer confusion about innocent spouse eligibility. Program officials said that recent enhancements to the Web site would enable them to collect data that could help them assess whether the site is useful to taxpayers. IRS officials in the Electronic Tax Administration and Innocent Spouse Program offices told us that in developing the Web site, they did not give any consideration to the benefits that taxpayers’ evaluations of the site might offer. Their immediate goal in developing the Web site was to give taxpayers an easy tool to help them determine their eligibility for the program. The Innocent Spouse Program Web site, which went on-line in December 1999, was developed to help taxpayers determine their eligibility for innocent spouse relief. It is one of IRS’s initiatives to better educate taxpayers and tax practitioners about the Innocent Spouse Program’s eligibility requirements. 
By answering a series of yes-or-no questions, a taxpayer can generally determine whether he or she is eligible for innocent spouse relief. Called the Innocent Spouse Tax Relief Eligibility Explorer, the Web site takes the user through each of the innocent spouse eligibility factors via a series of questions. The Web site also allows the user to download the form used to apply to IRS for relief. IRS has taken steps to increase awareness of the Web site among members of the public and within the tax professional community. IRS officials say that they have advertised the Web site in several ways, such as by including the Web site’s address in IRS publications regarding the Innocent Spouse Program and linking the Web site to the Tax Professionals Web site within IRS’s agencywide Web site. Since officials had not considered evaluating the Innocent Spouse Program Web site when it was first created, they did not include features that would allow the collection of data for a meaningful evaluation. For example, IRS officials did not develop a capability for collecting information through Web-site-based customer surveys or customer feedback links. As part of the upgrading of its agencywide Web site, IRS is enhancing the data collection capabilities of the Innocent Spouse Program Web site. According to an official in IRS’s Electronic Administration Office, the upgrading is being done in phases and the implementation of the initial phase began in January 2002. With more powerful software applications and tools, IRS will have the capability to collect data on use of the Web site and gather comments from taxpayers. For example, IRS will be able to collect data on the number of times the Innocent Spouse Web site was accessed, the number of times users accessed the Web site’s eligibility questionnaire, and the number of times users completed the questionnaire and were found eligible or ineligible for consideration. 
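The Explorer's question-and-answer flow can be sketched in a few lines. The questions below are illustrative placeholders, not IRS's actual questionnaire, and the sketch assumes the simplest design, in which any "no" answer ends the session.

```python
# Sketch of a yes/no eligibility walkthrough in the style of the Eligibility
# Explorer. The questions are hypothetical placeholders; the real site takes
# the user through each statutory eligibility factor in turn.

QUESTIONS = [
    "Did you file a joint return for the year at issue?",
    "Is there an understated or unpaid tax on that return?",
    "Was your request filed within the applicable time limit?",
]

def explore(answers):
    """Walk the questions in order; any 'no' ends the session as ineligible."""
    for _question, answer in zip(QUESTIONS, answers):
        if not answer:
            return "ineligible"
    return "may be eligible; download and file the request form"

print(explore([True, True, True]))  # may be eligible; download and file ...
print(explore([True, False]))       # ineligible
```

Instrumenting a function like this (counting sessions started, sessions completed, and the eligible/ineligible split) is the kind of usage data the enhanced Web site is described as collecting.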
As the new Innocent Spouse Web site evolves, IRS expects to develop standard performance reports and refine data collection by involving the stakeholders responsible for the contents of the various links on the agencywide Web site. IRS anticipates that project managers will have the opportunity to add customized enhancements and applications later. Although the Web site’s enhancements may provide IRS with more useful information than it currently has, the agency’s plans did not call for obtaining directly from taxpayers any information on the Web site’s usefulness. However, when we discussed the lack of such plans with Innocent Spouse Program officials in March 2002, they said that they would ask IRS’s Electronic Tax Administration officials to include a survey of taxpayers. The officials said that the survey would gather information on the Web site’s usefulness that could then be used in determining how to reduce the number of ineligible cases; they planned to run the survey for six months and then evaluate its costs and benefits. According to some information management experts, customer comments, surveys, and focus groups can provide valuable information for managers to assess the usefulness of a Web site. Over the past three years, IRS designed and implemented a number of initiatives to improve its ability to process innocent spouse cases timely, accurately, and consistently. In fiscal year 2001, these initiatives and the reduction of new cases collectively contributed to IRS’s progress, for the first time since passage of the Restructuring Act, in reducing its inventory of innocent spouse cases while maintaining or increasing the quality of its case decisions. Absent unforeseen significant increases in the innocent spouse caseload, IRS appears to be positioned to make material additional improvements in reducing the inventory of cases at the same time that it redirects hundreds of employees to perform other work. 
To date, these improvements have not resulted in reduced average times for making case decisions and completing cases. However, improvement in the timeliness of case processing should be realized as inventory levels decrease and work is shifted to IRS’s centralized processing center. Although program improvements have been, and should continue to be, realized, IRS lacks a balanced set of measures for the Innocent Spouse Program that would help ensure that future performance does not inappropriately concentrate on one aspect of performance at the expense of others. IRS’s only performance measure for the Innocent Spouse Program focuses on business results—that is, the number of cases closed. Because IRS is collecting information relevant to other dimensions of its performance, such as timeliness and case quality, developing performance measures and target levels for performance should not be burdensome for this relatively small IRS program. IRS established its Innocent Spouse Program Web site to help educate taxpayers about eligibility requirements for the program. However, more than half of taxpayers’ requests for innocent spouse relief are judged to be ineligible by IRS. Although IRS officials are beginning to formulate plans to evaluate the Web site, unless and until those plans are implemented, the agency would continue to lack a basis for determining whether the Web site could be improved to lessen taxpayers’ confusion about eligibility requirements. If IRS had information on taxpayers’ opinions about its Web site, it would be in a better position to adopt cost-effective strategies for further educating taxpayers and tax practitioners about the program. Better informed taxpayers will likely mean fewer ineligible cases and even further performance enhancements. We recommend that the commissioner of internal revenue (1) establish balanced performance measures and targets for the Innocent Spouse Program and (2) evaluate the Innocent Spouse Program Web site’s usefulness to taxpayers. 
On April 18, 2002, we received written comments on a draft of this report from the commissioner of internal revenue (see app. VII). The commissioner concurred with our recommendations and stated that our report acknowledges the significant efforts taken by IRS to control the Innocent Spouse Program’s workload. The commissioner also said that our methodology for determining the average time from receipt of a case to a notification decision on cases closed over a yearlong period is disproportionately drawn from older cases in IRS’s inventory. He enclosed a table showing, alternatively, the average time that cases received in a fiscal year have taken IRS to process. Although the commissioner’s alternative calculation provides a useful perspective, we chose to reflect the average time that the taxpayers whom IRS notified of a decision during a fiscal year had to wait for that decision. Our methodology accurately reflects that average wait time. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from its issue date. We will then send copies of the report to the commissioner of internal revenue and other interested parties. We will make copies available to others who request them. If you have any questions or would like additional information, please call me or Charlie Daniel at (202) 512-9110. Key contributors to this report are Michael S. Arbogast, Helen Branch, and John Gates. Data obtained from IRS show that 73 innocent spouse cases were litigated in court from June 1996 through June 2001. These tried cases were from a large universe of innocent spouse cases; more than 130,000 cases were decided between July 1998 and September 2001. Most cases were tried in the U.S. Tax Court, and federal courts generally upheld IRS’s determinations. According to officials at the U.S. 
Department of the Treasury’s (Treasury) Office of Chief Counsel, cases are sent to the courts for three primary reasons—disagreements between the parties over the actual facts of the case, over the interpretation of the law, and over the application of the law to the facts of the case—or some combination of these reasons. The officials also stated that courts weigh their decisions about cases by considering these reasons and other factors. Given this fact and the small number of innocent spouse cases that are tried, the officials do not regard the outcome of tried cases as a reliable measure of the quality of IRS’s innocent spouse decisions. According to IRS data, from June 1996 through June 2001, 73 innocent spouse cases pertaining to issues of relief were litigated in federal courts. Of these, 54, or about 75 percent, were litigated in the U.S. Tax Court, which has primary jurisdiction to review IRS tax cases. In 63 percent of the cases litigated in the Tax Court, the court concurred with IRS’s decision to deny taxpayers relief; in 24 percent of the cases, the court disagreed with IRS’s decision to deny relief. In the remaining 13 percent of the cases, the court granted taxpayers partial relief. The table below shows the number and disposition of cases litigated in the Tax Court from June 1996 through June 2001. Although the U.S. Tax Court has primary jurisdiction over innocent spouse cases, in some instances an innocent spouse case may be litigated in another federal court. Innocent spouse cases can be contested in federal district courts and the U.S. Court of Federal Claims if the taxpayer has already paid the tax liability and seeks a refund. Innocent spouse issues may also become part of any U.S. bankruptcy proceedings that the taxpayer initiates. In addition, all innocent spouse cases can be appealed to the U.S. Court of Appeals. According to IRS data, no cases were tried in either the U.S. 
District Courts or the Court of Federal Claims pertaining to issues of innocent spouse relief. However, 8 cases were decided in U.S. Bankruptcy Courts and 11 cases were decided in the U.S. Court of Appeals. As with cases tried in the Tax Court, for the 19 cases concluded in the other courts between July 1996 and June 2001, the courts usually sustained IRS’s determinations. IRS data show that the courts agreed with IRS’s decision to deny taxpayers’ relief in 11 cases and disagreed with IRS’s denial of relief in 6 cases. In 2 cases, the courts granted the taxpayers partial relief. The following table shows the number and disposition of cases litigated in the federal courts other than Tax Court. According to IRS, most docketed innocent spouse cases are settled before going to trial. IRS’s Office of Appeals (Appeals) and Treasury’s Office of Chief Counsel (Counsel) work with taxpayers to resolve the cases out of court. IRS officials stated that the primary mission of Appeals is to resolve tax controversies, without litigation, on a basis that is fair and impartial to both the government and the taxpayer and in a manner that will enhance voluntary compliance and public confidence. When Appeals or Counsel settles a case, the outcome may reduce the amount of the taxpayer’s liability, absolve the taxpayer of any liability, or leave unchanged the liability as originally determined by IRS. Of the docketed cases that were settled by Appeals or Counsel, 55 percent resulted in Appeals’ absolving the taxpayer of the liability, and 33 percent resulted in Appeals’ reducing the taxpayer’s liability; in 12 percent of the cases, the liability remained unchanged. However, when the liability is reduced or absolved on behalf of the requesting spouse, the spouse not requesting relief would still be liable for the tax liability related to the jointly filed tax return. 
The fact that Appeals or Counsel staff changed an examiner’s determination does not necessarily mean that the examiner was incorrect in the application of law or the analysis of the facts in the case. An Appeals officer has the authority to settle a case on the basis of the hazards of litigation. Revenue agents and tax examiners do not have this authority. Even though an examiner may be correct in the application of law and interpretation of the facts, Appeals officers may settle the case because of a concern that IRS might not prevail in court owing to the relative weaknesses and strengths of IRS’s case and the taxpayer’s positions. The table below shows the number and disposition of docketed innocent spouse cases from fiscal year 1999 through May 2001 that were settled before going to trial. 
[Appendix table fragment on types of relief: allocation of liability (section 6015(c)); equitable relief in community property states (section 66(c)); filing deadline of two years from the first collection activity after July 22, 1998. Relief is available only for amounts unpaid as of July 22, 1998, and amounts arising after July 22, 1998. Section 6013(e) criteria that are similar to but more restrictive than section 6015(b) criteria apply for amounts paid prior to July 22, 1998.] 
Appendix V: Average Days for Innocent Spouse Determinations, Fiscal Years 1999–2001 
[Table: number of cases and average days for fiscal years 1999 through 2001, shown for all determinations and separately for cases not meeting eligibility requirements and for cases granted full relief, granted partial relief, and denied relief. Legend: FY = fiscal year.] 
As shown in the table, the average days for IRS to reach a decision on a case differed based on the outcome of the case (ineligible, full relief, partial relief, and denied relief), but regardless of the outcome, the average days have increased yearly from fiscal year 1999 through fiscal year 2001.
By law, married persons who file joint tax returns are each fully responsible for the accuracy of the tax return and for the full tax liability. This is true even though only one taxpayer may have earned the wages or income shown on the tax return. Under the Internal Revenue Service's (IRS) Innocent Spouse Program, IRS can relieve taxpayers of tax debts on the basis of equity considerations, such as not knowing that their spouse failed to pay taxes due. Since passage of the IRS Restructuring and Reform Act of 1998, IRS has received thousands of requests from taxpayers for innocent spouse relief. IRS's inability to provide timely responses to such requests has generated concerns among taxpayers, Congress, and other stakeholders. IRS reached decisions on 21 percent more cases than it received in fiscal year 2001, reducing some of its backlog from previous years. The agency accomplished this through a variety of initiatives, including a substantial staffing commitment, centralization and specialization, automated tools, and routine estimating of future workload and staffing needs. IRS's procedures conform to applicable guidance for transferring tax liabilities from joint tax accounts to individual tax accounts when innocent spouse relief has been granted. The procedures follow federal internal control guidelines by requiring a mix of checks, verifications, reconciliations, and documentation to support steps throughout the process. The Web site for IRS's Innocent Spouse Program--part of IRS's agencywide Web site--went on-line in December 1999 to help taxpayers determine their eligibility for innocent spouse relief. Because IRS has not evaluated the Web site, the agency does not know how useful the Web site has been to taxpayers in determining their eligibility for innocent spouse relief.
To assess IRS’ methodology, we (1) discussed the methodology with officials in the Financial Analysis Division of the Office of the Chief Financial Officer, which was the division responsible for preparing the Compliance Initiatives Report; (2) determined the extent to which IRS’ methodology addressed the concerns we had raised in reports on prior years’ compliance initiatives; (3) discussed specific assumptions in IRS’ methodology with cognizant staff in IRS’ enforcement functions; (4) assessed the sensitivity of IRS’ results to changes in certain key assumptions; and (5) verified the accuracy of IRS’ computations. IRS computed the results of the fiscal year 1995 initiatives using data on planned and actual full-time-equivalent (FTE) staff years and planned and actual revenues for various enforcement programs. To verify the accuracy of IRS’ computations, we (1) traced actual FTEs back to reports generated by IRS’ Automated Financial System, (2) traced actual revenues back to reports generated by ERIS, (3) interviewed staff from IRS’ enforcement functions about the methods used to develop planned FTEs and revenues, (4) recomputed calculations, and (5) resolved any inconsistencies with cognizant IRS staff. We did not assess the reliability of the reports generated by ERIS or the Automated Financial System. We did our work from June 1996 to May 1997 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Acting Commissioner of Internal Revenue or his designated representative. The Acting Commissioner provided comments in a letter dated July 24, 1997. Those comments are reprinted in appendix II and are summarized and evaluated at the end of this letter. In reviews of past compliance initiatives, we identified several weaknesses in IRS’ methodology for computing and tracking initiative results. 
Our past reviews disclosed, for example, that IRS had overstated initiative results by failing to recognize the opportunity costs associated with moving experienced staff off-line to train new staff and by failing to adequately account for underrealizations of planned base staffing; was unable to track actual enforcement results; and was not providing Congress with enough meaningful information on initiative results because it was reporting positive results from initiatives without recognizing negative results from reductions in base activities. Over the years, IRS revised its methodology to address those, and other, concerns. In preparing its fiscal year 1995 Compliance Initiatives Report, for example, IRS recognized the impact of opportunity costs, obtained revenue data from an automated system (ERIS) that was designed to track actual enforcement results, and improved the report’s usefulness to Congress by including not only the estimated results of the initiatives but also the estimated results of the base enforcement programs and explanations for variances between the results anticipated when the initiatives were approved and the estimated final results. Also, in computing the results of the fiscal year 1995 compliance initiatives, IRS adopted a rule that no FTEs, and thus no revenue, would be allocated to the initiatives until planned base staffing had been achieved. Although the methodology used for the fiscal year 1995 initiatives is an improvement over previous methodologies, the results of that methodology are estimates that are sensitive to various productivity assumptions. Those productivity assumptions were not based on empirical data and could, if they were erroneous, cause IRS’ reported results to be overstated or understated. Also, in verifying IRS’ calculations, we found four errors that had a relatively minor effect on IRS’ reported results. IRS developed and uses ERIS to track the results of enforcement activities. 
Assuming ERIS does what it is designed to do, it should provide the total amount of dollars actually collected as a result of enforcement activities in fiscal year 1995 (i.e., $31.4 billion). However, as IRS acknowledged in the Compliance Initiatives Report, ERIS does not distinguish between the dollars collected from base enforcement activities and the dollars collected as a result of the initiatives. Similarly, the Automated Financial System, from which IRS extracted the total number of FTEs spent on enforcement activities in fiscal year 1995, does not distinguish between staffing for base activities and for the initiatives. Because its systems do not distinguish between base and initiative activities, IRS, as part of its methodology, developed the formula we describe in appendix I to allocate the $31.4 billion in enforcement revenue between base and initiative activities. Before implementing its new methodology, IRS briefed us on the allocation formula. We said then that because IRS had no initiative-specific data, its formula was a reasonable approach for identifying initiative results. We continue to believe that. However, because planned (i.e., estimated) revenue and staffing levels are an integral part of that formula, the end results are estimates. Thus the “actual” initiative results cited in the Compliance Initiatives Report are not actual results but are estimates of results. IRS did not clearly disclose that fact in the Compliance Initiatives Report. In tables throughout the report, for example, IRS refers to “actual,” without clearly explaining the term. One important set of assumptions embedded in IRS’ methodology relates to the comparative productivity of new staff versus experienced employees. IRS used these assumptions in computing planned revenue, which was an integral part of the methodology. The two primary IRS enforcement functions, Collection and Examination, approached the issue of relative productivity differently. 
Collection assumed that new staff were less productive than experienced staff, even after they were trained, while Examination believed that new staff, once trained, were as productive as experienced employees. Neither of these assumptions was based on empirical data. According to IRS officials, to estimate the relative productivity of new Collection staff, Collection Division officials met in a brainstorming session and decided, based primarily on their institutional knowledge, that new staff were generally 50 percent as productive as experienced employees during their first year on the job. Part of that reduced productivity assumed by Collection is attributable to the amount of time new staff spend in training and part to the belief that it takes time for a new employee to become as productive as an experienced one. (Collection assumed that new employees do not reach full productivity until their second or third year, depending on their position). We have no basis to determine whether Collection’s productivity assumptions are correct. We do know, however, that changes to the assumptions could significantly alter the reported results of the compliance initiatives. To demonstrate the sensitivity of the reported results to changes in Collection’s productivity assumptions, we arbitrarily adjusted Collection’s 50-percent assumption by 5 percentage points in either direction and recalculated the initiatives’ results. Our recalculation showed that a 5 percentage point change would either increase or decrease IRS’ reported initiative results of $545.2 million by $42 million (about 8 percent), depending on the direction of the change. Unlike Collection, the Examination function assumes that new staff are as productive as experienced staff after they have completed classroom training. 
That assumption applies to all of Examination’s enforcement staff—tax examiners, who audit simple issues by corresponding with taxpayers; tax auditors, who do more complex audits generally by meeting with taxpayers at an IRS office; and revenue agents, who do the most complex audits generally by meeting taxpayers or their representatives at the taxpayer’s home or place of business. Examination officials told us that they did not have any empirical data to support their assumption that new tax examiners are as productive as experienced tax examiners because Examination’s information systems do not track individual tax examiners’ accomplishments. Thus, historical data cannot be separated between experienced and inexperienced tax examiners. Instead, Examination officials justified their assumption by noting that tax examiners work on relatively noncomplex issues and returns and are able to get up to speed fairly quickly. However, according to an IRS official, after 2 weeks of formal classroom training, tax examiners are assigned to an on-the-job instructor for 10 weeks. The need for on-the-job instructors suggests that IRS does not expect new employees to be able to handle issues and returns as effectively or efficiently, and thus be as productive, as experienced employees. Examination officials also said that they did not believe that it was necessary to assume different productivity levels for experienced revenue agents and tax auditors and new agents and auditors. The officials provided two reasons for their position. First, Examination adjusts its revenue estimates to consider the amount of time that new staff must spend in classroom training. Examination assumes, based on past experience, that the amount of time available to do audit work is reduced between 19 and 25 percent (it varies between revenue agents and tax auditors) because of classroom training requirements. 
However, that adjustment only affects the amount of time new staff have to do audits; it does not get at the issue of how productive new staff are when they are doing audits. Second, Examination’s resource allocation model imputes a lower marginal yield for each additional return audited by initiative revenue agents and tax auditors. That means, in effect, that Examination assumes that each additional return audited will generate less revenue than the return audited before it. However, although IRS assumes a lower yield with each additional audited return, it does not consider any differences in the efficiency with which experienced staff and new staff audit tax returns. In that regard, the officials said that, after completion of classroom training, new staff are expected to close the same number of cases in any year as experienced employees who work the same types of cases. But the productivity of Examination staff is determined not only by the number of cases they close in a year but also by the revenue generated from those cases. Thus, even if new staff were to close as many cases as experienced staff (Examination officials could provide no evidence to support that contention), they may or may not achieve comparable dollar results. According to an Examination official, revenue agents and tax auditors, during their first year on the job, have 10 weeks and 13 weeks, respectively, of on-the-job training after their classroom training. The need for on-the-job training suggests, in our opinion, that new staff may not be prepared to be as productive as experienced staff, after classroom training. Although we did not have data to test the appropriateness of Examination’s productivity assumption, we tested the sensitivity of IRS’ reported initiative results to changes in that assumption by assuming the following: New tax auditors would be 75 percent as productive as experienced tax auditors. 
New revenue agents would be 69 percent as productive as experienced revenue agents. New tax examiners would be 95 percent as productive as experienced tax examiners. We arrived at the 75-percent and 69-percent figures for tax auditors and revenue agents, respectively, by using Collection’s productivity assumption of 50 percent and adjusting it to recognize the fact that Examination’s assumption already includes a factor for lost productivity due to training. We assumed only a slight fall-off in productivity for tax examiners because the correspondence audits they do are straightforward, and thus, the skills needed to do them can be more quickly learned than the skills needed by tax auditors and revenue agents. Use of the different productivity assumptions reduced the initiative results by $12.9 million—a reduction of about 5 percent from the reported amount of $237.1 million that IRS estimated was generated by Examination through the initiatives. We traced the data in the Compliance Initiatives Report back to supporting documentation and verified the various calculations involved in using IRS’ methodology. We found four relatively minor errors. The first two errors involved the categorization of FTEs between revenue- and nonrevenue-producing FTEs. A portion of the staffing increase associated with compliance initiatives is for support staff, such as clerks and secretaries. Those staff, unlike revenue officers, revenue agents, and other frontline staff, do not directly generate revenue. Thus IRS, in its calculations, segregated nonrevenue-producing FTEs from revenue-producing FTEs. In verifying that part of IRS’ calculations, we found two errors. IRS mistakenly (1) categorized about 140 nonrevenue-producing FTEs in the Collection function as revenue-producing and (2) included 240 nonrevenue-producing Examination FTEs in the formula it used to allocate enforcement revenue between base activities and initiatives. 
The third error involved the use of an incorrect average yield figure in computing initiative results for the Compliance Research program. The fourth error involved a failure to include about 180 tax examiners in computing the initiative results for the Collection function. By our calculations, these four errors—which IRS officials acknowledged—caused IRS’ reported yield from the initiatives to be understated by $2.6 million. Absent other changes, correction of the four errors would increase IRS’ reported yield to $805.9 million. In considering IRS’ estimates of the results of the fiscal year 1995 compliance initiatives, there are two other caveats that are relevant: (1) the fact that IRS collected a certain amount in fiscal year 1995 as a result of the initiatives does not necessarily mean that IRS collected more enforcement revenue in fiscal year 1995 than in fiscal year 1994 but only that IRS collected more in fiscal year 1995 than it estimated it would have without the initiatives; and (2) the first year’s results from the fiscal year 1995 initiatives are not necessarily indicative of what other compliance initiatives would generate in their first year. Although IRS reported that the compliance initiatives resulted in additional collections in fiscal year 1995 of $803.3 million, that does not mean that IRS brought in $803.3 million more than it did in fiscal year 1994. What it means is that IRS estimated that it generated $803.3 million more in enforcement revenue in fiscal year 1995 than it had estimated it would generate without the compliance initiatives. According to the Compliance Initiatives Report, IRS collected a total of $31.4 billion in fiscal year 1995—$30.6 billion from its base programs and $0.8 billion from the initiatives. Despite the estimated additional revenue from the initiatives, however, the amount of enforcement revenue collected in fiscal year 1995 was less than the amount collected in fiscal year 1994.
That is, IRS data indicated that IRS collected about $33.1 billion in fiscal year 1994, or $1.7 billion more than in fiscal year 1995. According to IRS officials, a number of factors could make enforcement revenue decrease even with a staffing increase in the same year. Revenue collected in one fiscal year is a function not only of that year’s staffing but also of prior years’ staffing. Much of the revenue impact of a staffing increase or decrease occurs in subsequent years because of the possibility of appeals, litigation, and collection activity. In that regard, IRS officials said that enforcement revenue in fiscal year 1996 increased to $38.0 billion, even with a staffing decrease, partially as a result of the fiscal year 1995 compliance initiatives. A second caveat to keep in mind is that IRS’ results in fiscal year 1995 are not necessarily indicative of the results that would be achieved in the first year of future compliance initiatives. The results achieved for any compliance initiative depend on many factors that can, and most likely will, vary from one initiative to another. Two of those factors, both of which had a significant impact on the results achieved in fiscal year 1995, are (1) the extent to which new staff must be hired to fill the positions authorized by the initiatives and (2) how IRS decides to allocate the initiative positions among its various enforcement programs. Most of the positions funded with the $405 million provided for the compliance initiatives in fiscal year 1995 were filled not by new hires but by staff who were already on board. This happened, at least in part, because IRS’ fiscal year 1995 appropriation, except for the compliance initiatives, actually resulted in reductions in IRS’ enforcement staffing. Thus, many of the positions funded by the $405 million were used to offset that reduction. According to data provided by IRS, of the 5,470 initiative positions filled as of September 30, 1995, 1,145 were filled by new staff. 
The other 4,325 were filled by existing employees either through lateral reassignments or promotions (such as promoting tax auditors to fill revenue agent positions). If future initiatives require a greater proportion of new staff, the results could be different from those in fiscal year 1995 because new staff (1) require more training, which, under IRS’ current procedures for training new staff, increases the opportunity cost associated with moving experienced staff off-line to do the training and (2) generally can be expected to generate less revenue, at first, than experienced staff. Decisions on how to allocate staff among enforcement programs also affect initiative results. The change in IRS’ estimate of how much revenue would be generated by the fiscal year 1995 initiatives, which we noted at the beginning of this report, is an example of how staffing decisions can affect initiative results. IRS’ first estimate was $9.2 billion over 5 years and $331 million in fiscal year 1995. Then, in an effort to maximize revenues, IRS decided to allocate more of the $405 million to areas, such as Automated Collection System sites, that are staffed by lower-graded personnel and to allocate fewer dollars to more costly areas, such as the Collection Field Function, which is staffed by higher-graded revenue officers. This reallocation enabled IRS to fund many more FTEs than originally expected and resulted in revised estimates of $9.6 billion over 5 years and $728 million in fiscal year 1995. IRS’ methodology for computing the results of the fiscal year 1995 compliance initiatives is a significant improvement over past methodologies. However, there are productivity assumptions embedded in the methodology that are not based on empirical data and could cause the results of an initiative to be overstated or understated.
We do not have a basis for determining what the correct assumptions should be, but our sensitivity analyses showed that a change in the assumptions used could have a significant effect on the reported initiative results of $803.3 million. For example, as discussed earlier, changing Collection’s productivity assumption by 5 percentage points would either increase or decrease the reported results by $42 million and changing Examination’s assumptions to be comparable to Collection’s would decrease the reported results by $12.9 million. After adjusting IRS’ reported results for the effect of these different productivity assumptions and after increasing the results to account for the $2.6 million in calculation errors, the estimated yield from the fiscal year 1995 compliance initiatives would fall somewhere between $751.0 million and $847.9 million. We requested comments on a draft of this report from the Acting Commissioner of Internal Revenue. IRS provided comments in a letter dated July 24, 1997 (see app. II). Overall, IRS agreed with the findings in the report, which it said confirmed that IRS accurately tracked and reported on the results of the compliance initiatives. However, IRS expressed concerns about two aspects of the observations in the report. First, IRS took issue with our statement that it did not clearly disclose the fact that the initiative results cited in the Compliance Initiatives Report were not “actual” results but rather estimates. To show that it had clearly disclosed that fact, IRS quoted two passages from the Compliance Initiatives Report which refer to the allocation of the revenue to base and initiative activities. 
In our opinion, the statements IRS quoted, while informative to a careful reader, do not provide sufficient information to make clear that the results are estimates, especially when every table in the final report referred to the results as “actuals.” Second, IRS observed that, although we state that we did not assess the reliability of the data in ERIS, (1) after reviewing the Compliance Initiatives Report and supporting ERIS data, we have found no indication that ERIS does not do what it purports to do—accurately accumulate and summarize enforcement revenue; (2) work done to date on ERIS as part of our financial audit of IRS had not disclosed any problems; and (3) a sample of ERIS data we took as part of an audit of large corporations indicated that ERIS does what it purports to do. IRS also observed that we have asked for ERIS data in conjunction with two recently initiated audits. We appreciate IRS’ viewpoint; however, (1) the scope of our review of the Compliance Initiatives Report, as discussed earlier, did not include a specific assessment of ERIS reliability; (2) we have not yet done sufficient work on ERIS as part of the financial audit to reach any overall conclusion about data reliability; and (3) the scope of our audit of large corporations was also not broad enough to reach any overall conclusion about ERIS reliability. Until such time as ERIS’ reliability can be determined, we necessarily continue to rely on ERIS data for some of our work because they are the best data available. However, the standards applicable to our work require that we disclose that data reliability has not been confirmed.
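To recap the dollar effects discussed in this report, the reported results, the calculation errors, and the two sensitivity adjustments combine to produce the $751.0 million to $847.9 million range cited in our conclusions. The sketch below simply restates the report's figures; it is not IRS' allocation methodology.

```python
# All figures are in millions of dollars and are taken from this report.
reported_yield = 803.3        # IRS' reported fiscal year 1995 initiative results
calculation_errors = 2.6      # net understatement from the four errors GAO found
corrected_yield = reported_yield + calculation_errors  # 805.9

collection_swing = 42.0       # effect of moving Collection's 50-percent assumption by 5 points
examination_reduction = 12.9  # effect of Collection-comparable Examination assumptions

# Examination's alternative assumptions only reduce the estimate, so they
# enter the lower bound but not the upper bound.
low_estimate = corrected_yield - collection_swing - examination_reduction
high_estimate = corrected_yield + collection_swing

print(f"${low_estimate:.1f} million to ${high_estimate:.1f} million")
```

The printed range matches the report: $751.0 million to $847.9 million.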
We are sending copies of this report to the Committee Chairman; the Chairmen and Ranking Minority Members of the Senate Committee on Finance, the House Committee on Ways and Means, and the House Committee on Government Reform and Oversight; various other congressional committees; the Secretary of the Treasury; the Commissioner of Internal Revenue; the Director of the Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. Major contributors to this report are listed in appendix III. If you have any questions, please contact me on (202) 512-9110. According to the Compliance Initiatives Report, IRS estimated the amount of revenue attributable to the fiscal year 1995 compliance initiatives by means of a general formula that consists of two components. The first component is the ratio of total actual yield per FTE (base and initiative combined) to total planned yield per FTE (base and initiative combined). This component indicates by how much the average tax yield was over or under what IRS expected. If IRS accurately predicted the average yield, the ratio value of the first component would equal 1. If the actual yield was more or less than what was predicted, the ratio value would be greater or less than 1. The second component multiplies the number of “actual” FTEs allocated to the initiatives by the planned yield per initiative FTE. The result of the second component is then multiplied by the result of the first component. If the first component equaled 1, the yield attributable to the initiatives would be equal to the number of initiative FTEs times the yield they were predicted, on average, to realize. However, if the first component was less than 1, implying that the average yield of both base and initiative staff was less than expected by some percentage, the expected tax yield per initiative FTE would be reduced by the same percentage.
Conversely, if the ratio value of the first component was greater than 1, the second component would be increased by the percentage by which the actual average yield exceeded what was expected. To illustrate, assume the following information:

- Total actual yield: $70 million
- Actual number of FTEs (base and initiative): 1,800
- Actual yield per FTE ($70 million divided by 1,800 FTEs): $38,889
- Planned total yield: $85 million
- Planned number of FTEs (base and initiative): 1,500
- Planned yield per FTE ($85 million divided by 1,500 FTEs): $56,667

First component: $38,889 divided by $56,667 = 0.686, or 68.6 percent.

- Planned initiative yield: $5 million
- Planned initiative FTEs: 400
- Planned yield per initiative FTE: $12,500
- “Actual” initiative FTEs: 700

Second component: 700 times $12,500 = $8,750,000.

“Actual” initiative yield: 0.686 times $8,750,000 = $6,002,500.

The above information is an illustration. The numbers and thus the results would vary among initiatives and even within an initiative. For example, Examination assigned its initiative staff to various audit classes (“individual taxpayers who file a Form 1040C showing total gross receipts of less than $25,000” is an example of an audit class). For each audit class, IRS would apply the above formula; and for each audit class, the numbers and the results would be different.

David J. Attianese, Assistant Director
James A. Wozny, Assistant Director
John Lesser, Evaluator-in-Charge
Charles C. Tuck, Economist
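The two-component formula described in appendix I can be sketched in code. The figures below are the illustrative numbers from the appendix, and, as in the appendix's example, the first component is rounded to three decimal places before it scales the second component.

```python
def initiative_yield(total_actual_yield, total_actual_ftes,
                     total_planned_yield, total_planned_ftes,
                     planned_yield_per_initiative_fte, actual_initiative_ftes):
    """Allocate enforcement revenue to initiatives with IRS' two-component formula."""
    # First component: actual average yield per FTE relative to planned
    # average yield per FTE (base and initiative combined).
    actual_per_fte = total_actual_yield / total_actual_ftes    # $38,889 in the illustration
    planned_per_fte = total_planned_yield / total_planned_ftes  # $56,667 in the illustration
    ratio = round(actual_per_fte / planned_per_fte, 3)          # 0.686, or 68.6 percent

    # Second component: "actual" initiative FTEs times planned yield per initiative FTE.
    expected_initiative_yield = actual_initiative_ftes * planned_yield_per_initiative_fte

    # "Actual" initiative yield: the second component scaled by the first.
    return ratio * expected_initiative_yield

# Illustration from appendix I.
result = initiative_yield(
    total_actual_yield=70_000_000, total_actual_ftes=1_800,
    total_planned_yield=85_000_000, total_planned_ftes=1_500,
    planned_yield_per_initiative_fte=12_500, actual_initiative_ftes=700,
)
print(f"${result:,.0f}")  # $6,002,500
```

In practice, IRS applied this formula separately for each initiative and, within Examination, for each audit class, so the inputs and results would differ from one application to the next.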
Pursuant to a congressional request, GAO reviewed the Internal Revenue Service's (IRS) Fiscal Year (FY) 1995 Compliance Initiatives Report, focusing on: (1) the methodology IRS used to allocate staff years and revenues between the base enforcement programs and the compliance initiatives; and (2) certain caveats to consider in interpreting IRS' reported results. GAO noted that: (1) IRS could not compile actual revenue results from the FY 1995 compliance initiatives because the Enforcement Revenue Information System (ERIS) only provides information on the total amount of revenue collected as a result of enforcement activities and because other systems, such as those that track IRS staffing, also do not account separately for base enforcement activities and initiative activities; (2) therefore, IRS developed a new methodology to allocate FY 1995 enforcement revenues between base programs and the initiatives; (3) IRS' new methodology: (a) accounted for the opportunity costs associated with moving experienced staff off-line to train new staff; (b) provided that no staff or revenue would be allocated to the initiatives until planned staffing for base programs had been achieved; and (c) improved the Compliance Initiatives Report's usefulness to Congress by including total staffing and total revenue for the various enforcement programs, allocated between base and initiatives, along with explanations for variances between the results anticipated when the initiatives were approved and the estimated final results; (4) although the methodology used for the FY 1995 initiatives is an improvement over previous methodologies, the results of that methodology are estimates that are sensitive to assumptions embedded in the methodology about the productivity of new staff and more experienced staff; (5) those assumptions were based on the judgments of IRS managers rather than empirical data; (6) GAO does not know what the correct assumptions are, but GAO's sensitivity analyses showed 
that a change in productivity rates could have a significant effect on the reported results; (7) in considering IRS' estimates of the FY 1995 compliance initiatives, there are two other caveats that are relevant; (8) the fact that the initiatives generated a certain amount of revenue in FY 1995 does not necessarily mean that IRS collected more enforcement revenue in FY 1995 than it did in FY 1994 but only that IRS collected more enforcement revenue in FY 1995 than it had estimated it would collect without the initiatives; (9) in fact, the amount of enforcement revenue IRS reported collecting in FY 1995 was less than that reported for FY 1994; and (10) because the estimates of revenue attributable to the compliance initiatives depended on various assumptions, including how IRS decided to allocate staff, the results in FY 1995 are not necessarily indicative of what other compliance initiatives would generate in their first year.
Pursuant to Homeland Security Presidential Directive 6, the Attorney General established TSC in September 2003 to consolidate the government’s approach to terrorism screening and provide for the appropriate and lawful use of terrorist information in screening processes. TSC’s consolidated watch list is the U.S. government’s master repository for all records of known or appropriately suspected international and domestic terrorists used for watch list-related screening. When an individual makes an airline reservation, arrives at a U.S. port of entry, applies for a U.S. visa, or is stopped by state or local police within the United States, the frontline screening agency or airline conducts a name-based search of the individual against applicable terrorist watch list records. In general, when the computerized name-matching system of an airline or screening agency generates a “hit” (a potential name match) against a watch list record, the airline or agency is to review each potential match. Any obvious mismatches (negative matches) are to be resolved by the airline or agency, if possible, as discussed in our September 2006 report on terrorist watch list screening. However, clearly positive or exact matches and matches that are inconclusive (difficult to verify) generally are to be referred to TSC to confirm whether the individual is a match to the watch list record. TSC is to refer positive and inconclusive matches to the FBI to provide an opportunity for a counterterrorism response. Deciding what action to take, if any, can involve collaboration among the frontline screening agency, the National Counterterrorism Center or other intelligence community members, and the FBI or other investigative agencies. If necessary, a member of an FBI Joint Terrorism Task Force can respond in person to interview and obtain additional information about the person encountered. In other cases, the FBI will rely on the screening agency and other law enforcement agencies—such as U.S.
Immigration and Customs Enforcement—to respond and collect information. Figure 1 presents a general overview of the process used to resolve encounters with individuals on the terrorist watch list. To build upon and provide additional guidance related to Homeland Security Presidential Directive 6, in August 2004, the President signed Homeland Security Presidential Directive 11. Among other things, this directive required the Secretary of Homeland Security—in coordination with the heads of appropriate federal departments and agencies—to submit two reports to the President (through the Assistant to the President for Homeland Security) related to the government’s approach to terrorist-related screening. The first report was to outline a strategy to enhance the effectiveness of terrorist-related screening activities by developing comprehensive and coordinated procedures and capabilities. The second report was to provide a prioritized investment and implementation plan for detecting and interdicting suspected terrorists and terrorist activities. Specifically, the plan was to describe the “scope, governance, principles, outcomes, milestones, training objectives, metrics, costs, and schedule of activities” to implement the U.S. government’s terrorism-related screening policies. The National Counterterrorism Center and the FBI rely upon standards of reasonableness in determining which individuals are appropriate for inclusion on TSC’s consolidated watch list.
In accordance with Homeland Security Presidential Directive 6, TSC’s watch list is to contain information about individuals “known or appropriately suspected to be or have been engaged in conduct constituting, in preparation for, in aid of, or related to terrorism.” In implementing this directive, the National Counterterrorism Center and the FBI strive to ensure that individuals who are reasonably suspected of having possible links to terrorism—in addition to individuals with known links—are nominated for inclusion on the watch list. To determine if the suspicions are reasonable, the National Counterterrorism Center and the FBI are to assess all available information on the individual. According to the National Counterterrorism Center, determining whether to nominate an individual can involve some level of subjectivity. Nonetheless, any individual reasonably suspected of having links to terrorist activities is to be nominated to the list and remain on it until the FBI or the agency that supplied the information supporting the nomination, such as one of the intelligence agencies, determines the person is not a threat and should be removed from the list. Moreover, according to the FBI, individuals who are subjects of ongoing FBI counterterrorism investigations are generally nominated to TSC for inclusion on the watch list, including persons who are being preliminarily investigated to determine if they have links to terrorism. In determining whether to open an investigation, the FBI uses guidelines established by the Attorney General. These guidelines contain specific standards for opening investigations, including formal review and approval processes. According to FBI officials, there must be a “reasonable indication” of involvement in terrorism before opening an investigation. The FBI noted, for example, that it is not sufficient to open an investigation based solely on a neighbor’s complaint or an anonymous tip or phone call. 
If an investigation does not establish a terrorism link, the FBI generally is to close the investigation and request that TSC remove the person from the watch list. Based on these standards, the number of records in TSC’s consolidated watch list has increased from about 158,000 records in June 2004 to about 755,000 records as of May 2007 (see fig. 2). It is important to note that the total number of records in TSC’s watch list does not represent the total number of individuals on the watch list. Rather, if an individual has one or more known aliases, the watch list will contain multiple records for the same individual. TSC’s watch list database is updated daily with new nominations, modifications to existing records, and deletions. Because individuals can be added to the list based on reasonable suspicion, inclusion on the list does not automatically prohibit an individual from, for example, obtaining a visa or entering the United States when the person is identified by a screening agency. Rather, when an individual on the list is encountered, agency officials are to assess the threat the person poses to determine what action to take, if any. From December 2003 (when TSC began operations) through May 2007, screening and law enforcement agencies encountered individuals who were positively matched to watch list records approximately 53,000 times, according to TSC data. A breakdown of these encounters shows that the number of matches has increased each year—from 4,876 during the first 10-month period of TSC’s operations to 14,938 during fiscal year 2005, to 19,887 during fiscal year 2006. This increase can be attributed partly to the growth in the number of records in the consolidated terrorist watch list and partly to the increase in the number of agencies that use the list for screening purposes. Our analysis of TSC data also indicates that many individuals were encountered multiple times.
For example, a truck driver who regularly crossed the U.S.-Canada border or an individual who frequently took international flights could each account for multiple encounters. Further, TSC data show that the highest percentage of encounters involved screening within the United States by a state or local law enforcement agency, U.S. government investigative agency, or other governmental entity. The next highest percentage involved border-related encounters, such as passengers on airline flights inbound from outside the United States or individuals screened at land ports of entry. The lowest percentage of encounters occurred outside of the United States. The watch list has enhanced the U.S. government’s counterterrorism efforts by allowing federal, state, and local screening and law enforcement officials to obtain information to help them make better-informed decisions during encounters regarding the level of threat a person poses and the appropriate response to take, if any. The specific outcomes of encounters with individuals on the watch list are based on the government’s overall assessment of the intelligence and investigative information that supports the watch list record and any additional information that may be obtained during the encounter. Our analysis of data on the outcomes of encounters revealed that agencies took a range of actions, such as arresting individuals, denying others entry into the United States, and most commonly, releasing the individuals following questioning and information gathering. TSC data show that agencies reported arresting many subjects of watch list records for various reasons, such as the individual having an outstanding arrest warrant or the individual’s behavior or actions during the encounter. TSC data also indicated that some of the arrests were based on terrorism grounds. 
TSC data show that when visa applicants were positively matched to terrorist watch list records, the outcomes included visas denied, visas issued (because the consular officer did not find any statutory basis for inadmissibility), and visa ineligibility waived. Transportation Security Administration data show that when airline passengers were positively matched to the No Fly or Selectee lists, the vast majority of matches were to the Selectee list. Other outcomes included individuals matched to the No Fly list and denied boarding (did not fly) and individuals matched to the No Fly list after the aircraft was in flight. Additional information on individuals on the watch list passing undetected through agency screening is presented later in this statement. U.S. Customs and Border Protection data show that a number of nonimmigrant aliens encountered at U.S. ports of entry were positively matched to terrorist watch list records. For many of the encounters, the agency determined there was sufficient information related to watch list records to preclude admission under terrorism grounds. However, for most of the encounters, the agency determined that there was not sufficient information related to the records to preclude admission. TSC data show that state or local law enforcement officials have encountered individuals who were positively matched to terrorist watch list records thousands of times. Although data on the actual outcomes of these encounters were not available, the vast majority involved watch list records that indicated that the individuals were released, unless there were reasons other than terrorism-related grounds for arresting or detaining the individuals, such as the individual having an outstanding arrest warrant. 
Also, according to federal officials, encounters with individuals who were positively matched to the watch list assisted government efforts in tracking the respective person’s movements or activities and provided the opportunity to collect additional information about the individual. The information collected was shared with agents conducting counterterrorism investigations and with the intelligence community for use in analyzing threats. Such coordinated collection of information for use in investigations and threat analyses is one of the stated policy objectives for the watch list. The principal screening agencies whose missions most frequently and directly involve interactions with travelers do not check against all records in TSC’s consolidated watch list because screening against certain records (1) may not be needed to support the respective agency’s mission, (2) may not be possible due to the requirements of computer programs used to check individuals against watch list records, or (3) may not be operationally feasible. Rather, each day, TSC exports applicable records from the consolidated watch list to federal government databases that agencies use to screen individuals for mission-related concerns. For example, the database that U.S. Customs and Border Protection uses to check incoming travelers for immigration violations, criminal histories, and other matters contained the highest percentage of watch list records as of May 2007. This is because its mission is to screen all travelers, including U.S. citizens, entering the United States at ports of entry. The database that the Department of State uses to screen applicants for visas contained the second highest percentage of all watch list records. This database does not include records on U.S. citizens and lawful permanent residents because these individuals would not apply for U.S. visas. 
The FBI database that state and local law enforcement agencies use for screening contained the third highest percentage of watch list records. According to the FBI, the remaining records were not included in this database primarily because they did not contain sufficient identifying information on the individual, which is required to minimize instances of individuals being misidentified as being subjects of watch list records. Further, the No Fly and Selectee lists disseminated by the Transportation Security Administration to airlines for use in prescreening passengers contained the lowest percentage of watch list records. The lists did not contain the remaining records either because they (1) did not meet the nomination criteria for the No Fly or Selectee list or (2) did not contain sufficient identifying information on the individual. According to the Department of Homeland Security, increasing the number of records used to prescreen passengers would expand the number of misidentifications to unjustifiable proportions without a measurable increase in security. While we understand the FBI’s and the Department of Homeland Security’s concerns about misidentifications, we still believe it is important that federal officials assess the extent to which not screening against certain watch list records creates security risks and what actions, if any, should be taken in response. Also, Department of Homeland Security component agencies are taking steps to address instances of individuals on the watch list passing undetected through agency screening. For example, U.S. Customs and Border Protection has encountered situations where it identified the subject of a watch list record after the individual had been processed at a port of entry and admitted into the United States. U.S. Customs and Border Protection has created a working group within the agency to study the causes of this vulnerability and has begun to implement corrective actions. U.S.
Citizenship and Immigration Services—the agency responsible for screening persons who apply for U.S. citizenship or immigration benefits—has also acknowledged areas that need improvement in the processes used to detect subjects of watch list records. According to agency representatives, each instance of an individual on the watch list getting through agency screening is reviewed to determine the cause, with appropriate follow-up and corrective action taken, if needed. The agency is also working with TSC to enhance screening effectiveness. Further, Transportation Security Administration data show that in the past, a number of individuals who were on the government’s No Fly list passed undetected through airlines’ prescreening of passengers and flew on international flights bound to or from the United States. The individuals were subsequently identified in-flight by U.S. Customs and Border Protection, which checks passenger names against watch list records to help the agency prepare for the passengers’ arrival in the United States. However, the potential onboard security threats posed by the undetected individuals required an immediate counterterrorism response, which in some instances resulted in diverting the aircraft to a new location. According to the Transportation Security Administration, such incidents were subsequently investigated and, if needed, corrective action was taken with the respective air carrier. In addition, U.S. Customs and Border Protection has issued a final rule that should better position the government to identify individuals on the No Fly list before an international flight is airborne. For domestic flights within the United States, there is no second screening opportunity—like the one U.S. Customs and Border Protection conducts for international flights. The government plans to take over from air carriers the function of prescreening passengers prior to departure against watch list records for both international and domestic flights. 
Also, TSC has ongoing initiatives to help reduce instances of individuals on the watch list passing undetected through agency screening, including efforts to improve computerized name-matching programs. Although the federal government has made progress in using the consolidated watch list for screening purposes, additional opportunities exist for using the list. Internationally, the Department of State has made progress in making bilateral arrangements to share terrorist screening information with certain foreign governments. The department had two such arrangements in place before September 11, 2001. More recently, the department has made four new arrangements and is in negotiations with several other countries. Also, the Department of Homeland Security has made progress in using watch list records to screen employees in some critical infrastructure components of the private sector, including certain individuals who have access to vital areas of nuclear power plants, work in airports, or transport hazardous materials. However, many critical infrastructure components are not using watch list records. The Department of Homeland Security has not, consistent with Homeland Security Presidential Directive 6, finalized guidelines to support private sector screening processes that have a substantial bearing on homeland security. Finalizing such guidelines would help both the private sector and the Department of Homeland Security ensure that private sector entities are using watch list records consistently, appropriately, and effectively to protect their workers, visitors, and key critical assets. Further, federal departments and agencies have not identified all appropriate opportunities for which terrorist-related screening will be applied, in accordance with presidential directives. 
A primary reason screening opportunities remain untapped is that the government lacks an up-to-date strategy and implementation plan—supported by a clearly defined leadership or governance structure—for enhancing the effectiveness of terrorist-related screening, consistent with presidential directives. Without an up-to-date strategy and plan, agencies and organizations that conduct terrorist-related screening activities do not have a foundation for a coordinated approach that is driven by an articulated set of core principles. Furthermore, lacking clearly articulated principles, milestones, and outcome measures, the federal government is not easily able to provide accountability and a basis for monitoring to ensure that (1) the intended goals for, and expected results of, terrorist screening are being achieved and (2) use of the list is consistent with privacy and civil liberties. These plan elements, which were prescribed by presidential directives, are crucial for coordinated and comprehensive use of terrorist-related screening data, as they provide a platform to establish governmentwide priorities for screening, assess progress toward policy goals and intended outcomes, ensure that any needed changes are implemented, and respond to issues that hinder effectiveness. Although all elements of a strategy and implementation plan cited in presidential directives are important to guide realization of the most effective use of watch list data, addressing governance is particularly vital, as achievement of a coordinated and comprehensive approach to terrorist-related screening involves numerous entities within and outside the federal government. However, no clear lines of responsibility and authority have been established to monitor governmentwide screening activities for shared problems and solutions or best practices. 
Neither does any existing entity clearly have the requisite authority for addressing various governmentwide issues—such as assessing common gaps or vulnerabilities in screening processes and identifying, prioritizing, and implementing new screening opportunities. Thus, it is important that the Assistant to the President for Homeland Security and Counterterrorism address these deficiencies by ensuring that an appropriate governance structure has clear and adequate responsibility and authority to (a) provide monitoring and analysis of watch list screening efforts governmentwide, (b) respond to issues that hinder effectiveness, and (c) assess progress toward intended outcomes. Managed by TSC, the consolidated terrorist watch list represents a major step forward from the pre-September 11 environment of multiple, disconnected, and incomplete watch lists throughout the government. Today, the watch list is an integral component of the U.S. government’s counterterrorism efforts. However, our work indicates that there are additional opportunities for reducing potential screening vulnerabilities, expanding use of the watch list, and enhancing management oversight. Thus, we have made several recommendations to the heads of relevant departments and agencies. Our recommendations are intended to help (1) mitigate security vulnerabilities in terrorist watch list screening processes that arise when screening agencies do not use certain watch list records and (2) optimize the use and effectiveness of the watch list as a counterterrorism tool. Such optimization should include development of guidelines to support private sector screening processes that have a substantial bearing on homeland security, as well as development of an up-to-date strategy and implementation plan for using terrorist-related information. 
Further, to help ensure that governmentwide terrorist-related screening efforts are effectively coordinated, we have also recommended that the Assistant to the President for Homeland Security and Counterterrorism ensure that an appropriate leadership or governance structure has clear lines of responsibility and authority. In commenting on a draft of our report, which provides the basis for my statement at today’s hearing, the Department of Homeland Security noted that it agreed with and supported our work and stated that it had already begun to address issues identified in our report’s findings. The FBI noted that the database state and local law enforcement agencies use for screening does not contain certain watch list records primarily to minimize instances of individuals being misidentified as subjects of watch list records. Because of this operational concern, the FBI noted that our recommendation to assess the extent of vulnerabilities in current screening processes has been completed and the vulnerability has been determined to be low or nonexistent. In our view, however, recognizing operational concerns does not constitute assessing vulnerabilities. Thus, while we understand the FBI’s operational concerns, we maintain it is still important that the FBI assess to what extent security risks are raised by not screening against certain watch list records and what actions, if any, should be taken in response. Also, the FBI noted that TSC’s governance board is the appropriate forum for obtaining a commitment from all of the entities involved in the watch-listing process. However, as discussed in our report, TSC’s governance board is responsible for providing guidance concerning issues within TSC’s mission and authority and would need additional authority to provide effective coordination of terrorist-related screening activities and interagency issues governmentwide. The Homeland Security Council was provided a draft of the report but did not provide comments. Mr. 
Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members have at this time. For questions regarding this testimony, please contact me at (202) 512- 8777 or [email protected]. Other key contributors to this statement were Danny R. Burton, Virginia A. Chanley, R. Eric Erdman, Michele C. Fejfar, Jonathon C. Fremont, Kathryn E. Godfrey, Richard B. Hung, Thomas F. Lombardi, Donna L. Miller, and Ronald J. Salo. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Federal Bureau of Investigation's (FBI) Terrorist Screening Center (TSC) maintains a consolidated watch list of known or appropriately suspected terrorists and sends records from the list to agencies to support terrorism-related screening. This testimony discusses (1) standards for including individuals on the list, (2) the outcomes of encounters with individuals on the list, (3) potential vulnerabilities in screening processes and efforts to address them, and (4) actions taken to promote effective terrorism-related screening. This statement is based on GAO's report (GAO-08-110). To accomplish the objectives, GAO reviewed documentation obtained from and interviewed officials at TSC, the FBI, the National Counterterrorism Center, the Department of Homeland Security, and other agencies that perform terrorism-related screening. The FBI and the intelligence community use standards of reasonableness to evaluate individuals for nomination to the consolidated terrorist watch list. In general, individuals who are reasonably suspected of having possible links to terrorism--in addition to individuals with known links--are to be nominated. As such, being on the list does not automatically prohibit, for example, the issuance of a visa or entry into the United States. Rather, when an individual on the list is encountered, agency officials are to assess the threat the person poses to determine what action to take, if any. As of May 2007, the consolidated watch list contained approximately 755,000 records. From December 2003 through May 2007, screening and law enforcement agencies encountered individuals who were positively matched to watch list records approximately 53,000 times. Many individuals were matched multiple times. The outcomes of these encounters reflect an array of actions, such as arrests; denials of entry into the United States; and, most often, questioning and release. 
Within the federal community, there is general agreement that the watch list has helped to combat terrorism by (1) providing screening and law enforcement agencies with information to help them respond appropriately during encounters and (2) helping law enforcement and intelligence agencies track individuals on the watch list and collect information about them for use in conducting investigations and in assessing threats. Regarding potential vulnerabilities, TSC sends records daily from the watch list to screening agencies. However, some records are not sent, partly because screening against them may not be needed to support the respective agency's mission or may not be possible due to the requirements of computer programs used to check individuals against watch list records. Also, some subjects of watch list records have passed undetected through agency screening processes and were not identified, for example, until after they had boarded and flew on an aircraft or were processed at a port of entry and admitted into the United States. TSC and other federal agencies have ongoing initiatives to help reduce these potential vulnerabilities, including efforts to improve computerized name-matching programs and the quality of watch list data. Although the federal government has made progress in promoting effective terrorism-related screening, additional screening opportunities remain untapped--within the federal sector, as well as within critical infrastructure components of the private sector. This situation exists partly because the government lacks an up-to-date strategy and implementation plan for optimizing use of the terrorist watch list. Also lacking are clear lines of authority and responsibility. 
An up-to-date strategy and implementation plan, supported by a clearly defined leadership or governance structure, would provide a platform to establish governmentwide screening priorities, assess progress toward policy goals and intended outcomes, consider factors related to privacy and civil liberties, ensure that any needed changes are implemented, and respond to issues that hinder effectiveness.
Payments to MA organizations are based on the MA organization’s bid and benchmark and are adjusted for differences in projected and actual enrollment, beneficiary residence, and health status. PPACA changed how the benchmark and rebate are calculated. Payments to MA organizations and the additional benefits that MA organizations offer are based in part on the relationship between the MA organizations’ bids—their projection of the revenue required to provide beneficiaries with services that are covered under Medicare FFS—and a benchmark. If an MA organization’s bid is higher than the benchmark, the organization must charge beneficiaries a premium to collect the amount by which the bid exceeds the benchmark. If an MA organization’s bid is lower than the benchmark, the organization receives the amount of the bid plus additional payments, known as rebates, equal to a percentage of the difference between the benchmark and the bid. MA organizations are required to use rebates to provide additional benefits, such as dental or vision services; reduce cost-sharing; reduce premiums; or some combination of the three. CMS adjusts payments to MA organizations to account for differences in projected and actual enrollment, beneficiary residence, and health status. CMS adjusts for differences in projected and actual enrollment through its method for paying MA organizations. Specifically, MA organizations are paid a per member per month (PMPM) amount and thus are paid only for actual enrollees. CMS also adjusts PMPM payments to MA organizations on the basis of the ratio of the benchmark rate in the beneficiary’s county to the plan benchmark. Thus, if a beneficiary comes from a county that has a benchmark rate that is lower than the plan’s benchmark, the plan will receive a lower PMPM payment for that beneficiary. 
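The bid-versus-benchmark mechanics described above can be sketched as follows. This is an illustrative sketch only: the function name and dollar figures are hypothetical, and the 75 percent rebate share is the pre-2012 figure cited later in this report.

```python
# Illustrative sketch of MA bid-versus-benchmark payment mechanics.
# All dollar amounts are hypothetical examples, not program data.
def monthly_payment_and_premium(bid, benchmark, rebate_share=0.75):
    """Return (Medicare payment to the MA organization, beneficiary premium)."""
    if bid > benchmark:
        # Bid above benchmark: the excess is collected from beneficiaries
        # as a premium; Medicare pays the benchmark amount.
        return benchmark, bid - benchmark
    # Bid below benchmark: the organization receives its bid plus a rebate
    # equal to a share of the difference, which it must pass on as extra
    # benefits, reduced cost-sharing, or reduced premiums.
    rebate = rebate_share * (benchmark - bid)
    return bid + rebate, 0.0

# A hypothetical $800 bid against an $850 benchmark yields a $37.50 rebate:
payment, premium = monthly_payment_and_premium(800.0, 850.0)
print(payment, premium)  # 837.5 0.0
```

A bid above the benchmark, say $900 against $850, would instead produce an $850 payment and a $50 beneficiary premium.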
Finally, to help ensure that health plans have the same financial incentive to enroll and care for beneficiaries regardless of their health status, payments to MA organizations are adjusted for beneficiary health status—a process known as risk adjustment. Final payments are adjusted to account for differences between the projected average risk score—a relative measure of expected health care use for each beneficiary—submitted in plans’ bids and the actual risk scores for enrolled beneficiaries. Bidding rules for employer health plans differ from those for MA plans available to all beneficiaries and SNPs. Specifically, MA organizations are able to negotiate specific benefit packages and cost-sharing amounts with employers after the MA organizations submit their bid for an employer group plan. In contrast, MA organizations’ bids for all other MA plans must reflect their actual benefit package—including additional benefits, reduced cost-sharing, and reduced premiums—and MA organizations cannot change the benefits after the bid is approved by CMS. PPACA changed how the benchmark is calculated beginning in 2011. These changes have resulted in a decrease, on average, in county benchmarks relative to average Medicare FFS expenditures. In 2011, benchmark rates were held constant at 2010 benchmark rates. From 2012 through 2016, the benchmark will be a blend of the traditional benchmark formula and a new quartile-based formula. Counties will be stratified into quartiles based on their Medicare FFS expenditures, with the first quartile of counties (the 25 percent of counties that have the highest Medicare FFS expenditures) having a benchmark equal to 95 percent of FFS expenditures. Counties in the second, third, and fourth quartiles will have benchmarks of 100 percent, 107.5 percent, and 115 percent, respectively, of FFS expenditures. 
In addition, any MA organization that receives 3 or more stars on CMS’s 5-star quality rating system will receive a bonus to the PPACA portion of its blended benchmark. In 2017 and future years, the quartile-based formula will determine 100 percent of the benchmark value. PPACA also changed how the rebate is calculated. This change resulted in decreased rebate amounts starting in 2012. By 2014, the rebate amounts will be equal to 50, 65, or 70 percent of the difference between the benchmark and the bid, depending on the number of stars a plan receives on CMS’s 5-star quality scale. Prior to 2012, MA organizations received a rebate equal to 75 percent of the difference between the benchmark and the bid. SNPs and employer group plans have specific eligibility requirements. SNPs serve specific populations, including beneficiaries who are dually eligible for Medicare and Medicaid, are institutionalized, or have certain chronic conditions. Employer group plans are MA plans offered by employers or unions to their Medicare-eligible retirees and Medicare-eligible active employees, as well as to Medicare-eligible spouses and dependents of participants in such a plan. In those cases where an active employee is enrolled in an employer’s non-Medicare health plan, the Medicare employer group plan would serve as a secondary payer, while the employer’s non-MA plan for active employees would serve as the primary payer. Among plans available to all beneficiaries, 2011 expenses and profits represented similar percentages of total revenue compared to projections. Among plans with specific eligibility requirements—that is, SNPs and employer group plans—2011 expenses were lower and profits were higher as a percentage of revenue compared to projections. As a percentage of 2011 total revenue, MA organizations’ actual medical expenses, nonmedical expenses, and profits were, on average, similar to projected values for plans available to all beneficiaries. 
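The PPACA quartile benchmark and star-tiered rebate formulas described above can be sketched as follows. The quartile multipliers come directly from the report; the exact star-rating cutoffs in `rebate_share` are an illustrative assumption, and the county dollar figure is hypothetical.

```python
# Sketch of the PPACA quartile-based benchmark and the 2014 star-tiered
# rebate share. Star cutoffs below are assumed for illustration only.

# Quartile 1 = the 25 percent of counties with the highest FFS spending,
# which receive the lowest multiplier.
QUARTILE_MULTIPLIER = {1: 0.95, 2: 1.00, 3: 1.075, 4: 1.15}

def quartile_benchmark(county_ffs, quartile):
    """County benchmark as a percentage of local Medicare FFS spending."""
    return QUARTILE_MULTIPLIER[quartile] * county_ffs

def rebate_share(stars):
    """Rebate as a share of (benchmark - bid) by 2014, tiered by star rating."""
    if stars >= 4.5:   # cutoff assumed for illustration
        return 0.70
    if stars >= 3.5:   # cutoff assumed for illustration
        return 0.65
    return 0.50

# A hypothetical lowest-spending-quartile county with $700 in FFS spending:
print(round(quartile_benchmark(700.0, 4), 2))  # 805.0
print(rebate_share(3.0))  # 0.5
```

Under this sketch, a highest-spending-quartile county with $1,000 in FFS spending would have a $950 benchmark, illustrating how PPACA lowered benchmarks relative to FFS in high-cost areas.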
As a percentage of revenue, medical expenses and profits were slightly lower than projected, while nonmedical expenses were slightly higher. Also, as a percentage of revenue, all three categories were within 0.3 percentage points of what MA organizations had projected (see table 1). MA plans that were available to all beneficiaries received slightly higher total revenue per beneficiary than projected, which could be a result of differences between actual and projected health status and geographic location of beneficiaries who enrolled. For instance, MA plans could have received additional Medicare payments if they enrolled beneficiaries who were expected to need more health care, who were disproportionately from counties with higher benchmarks, or a combination of these two reasons. Because of the higher total revenue, medical expenses as a percentage of revenue were 0.2 percentage points lower than projected, despite MA organizations’ spending more dollars on medical expenses than projected. The percentage of revenue spent on medical expenses and profits varied substantially between MA contracts. For example, while MA organizations spent an average of 86.3 percent of revenue on medical expenses, approximately 39 percent of beneficiaries were covered by contracts where less than 85 percent of revenue was spent on medical expenses, and 13 percent of beneficiaries were covered by contracts where less than 80 percent of revenue was spent on medical expenses (see table 2). Further, while the average profit margin was 4.5 percent among plans available to all beneficiaries, 26 percent of beneficiaries in our analysis were covered by contracts where profit margins were negative. In contrast, 15 percent of beneficiaries in our analysis were covered by contracts where profit margins were 10 percent or higher. 
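The point above, that medical expenses fell as a share of revenue even though dollar spending rose, can be made concrete with per-beneficiary arithmetic. The revenue figures below appear elsewhere in this report; the medical-expense dollar amounts are back-of-the-envelope approximations for illustration only.

```python
# Illustrative arithmetic: when revenue grows faster than medical spending,
# the medical-expense share of revenue falls even as dollars spent rise.
# Revenue figures are the report's; expense dollars are approximations.
projected_revenue, projected_medical = 9635.0, 8334.0   # per beneficiary
actual_revenue, actual_medical = 9893.0, 8538.0

projected_share = projected_medical / projected_revenue   # about 86.5%
actual_share = actual_medical / actual_revenue            # about 86.3%

assert actual_medical > projected_medical   # more dollars spent...
assert actual_share < projected_share       # ...yet a smaller share of revenue
print(f"{projected_share:.1%} projected vs {actual_share:.1%} actual")
```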
For MA organizations with either high or low benchmarks, profit margins and the percentage of total revenue devoted to expenses were, on average, similar to projections. As a percentage of revenue, MA organizations with high benchmarks had slightly lower-than-projected medical expenses but slightly higher-than-projected nonmedical expenses and profits (see table 3). As a percentage of revenue, MA organizations with low benchmarks had slightly higher-than-projected medical expenses and nonmedical expenses but slightly lower profits. In addition, MA organizations with high benchmarks had higher profit margins compared to those with low benchmarks. Specifically, organizations with high benchmarks had an average profit margin of 5.9 percent and made $668 per beneficiary compared to 3.5 percent and $313 per beneficiary for organizations with low benchmarks. The accuracy of MA organizations’ projections varied on the basis of the type of plan they offered under each contract. Among the three plan types studied (HMO, PPO, and PFFS), PFFS contracts had the largest differences, in percentage point terms, between their actual and projected expenses and profits. For example, as a result of spending, on average, a higher-than-projected percentage of total revenue on medical and nonmedical expenses, PFFS contracts reported an actual profit margin of only 0.3 percent after projecting a 4.3 percent profit margin (see table 4). HMO contracts had the highest profit margins and were the only type of contract, among the three studied, that averaged higher profits than projected. Specifically, HMO contracts had a 5.3 percent profit margin, which was slightly higher than projected—5.0 percent—and substantially higher than the profit margins of PPO and PFFS contracts—2.5 percent and 0.3 percent, respectively. SNPs’ profits were higher than projected both in terms of a percentage of total revenue and in dollars. 
SNPs received somewhat higher revenue than projected and spent a lower percentage of total revenue on medical and nonmedical expenses than projected (see table 5). As a result of the higher-than-projected revenue and spending a lower percentage of revenue on expenses, SNPs reported an average profit per beneficiary of $1,115, which was 44 percent higher than projected ($777) and 149 percent higher than the profit per beneficiary for plans available to all Medicare beneficiaries ($447). Compared to plans available to all Medicare beneficiaries, SNPs spent more in terms of amount per beneficiary, but less in percentage terms, on medical and nonmedical expenses. CMS officials said SNPs might have higher profit margins because of the potential additional risk of providing a plan that targets a specific population. For instance, the officials noted that it may be more difficult to predict revenue and spending for a SNP’s targeted population. SNPs may face higher medical expenses because beneficiaries enrolled in such plans may have increased health care needs. According to CMS officials, SNPs may also face higher administrative expenses for several reasons, such as potentially higher marketing expenses associated with targeting SNPs’ designated population. Employer group plans had higher revenue, had higher profit margins, and spent a lower percentage of total revenue on expenses than projected. Specifically, total revenue per beneficiary was about 14 percent higher than projected—$11,364 compared to $9,957 (see table 6). In addition, employer group plans spent 86.3 and 6.1 percent of total revenue on medical expenses and nonmedical expenses, respectively, compared to a projected 89.5 and 6.3 percent, and these plans also had an actual profit margin of 7.6 percent compared to a 4.2 percent projected profit margin. 
The combined effects of higher revenue and a higher profit margin translated into average profits per beneficiary of $861, which was 108 percent higher than projected ($413) and 93 percent higher than the profit per beneficiary for plans available to all Medicare beneficiaries ($447). Unlike other MA plans, projections for employer group plans may vary from their actual profits and expenses because MA organizations that offer such plans are able to negotiate specific benefit packages and cost-sharing amounts with employers after they submit their bids to CMS. We requested comments from CMS, but none were provided. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, interested congressional committees, and others. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff has any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix I. In addition to the contact named above, Christine Brudevold, Assistant Director; Sandra George; Gregory Giusto; Brian O’Donnell; and Elizabeth T. Morrison made key contributions to this report.
MA organizations are entities that contract with the Centers for Medicare & Medicaid Services (CMS) to offer one or more private plans as an alternative to the Medicare fee-for-service (FFS) program. These MA plans are generally available to all Medicare beneficiaries, although certain types of plans, such as employer group plans, have specific eligibility requirements. Payments to MA organizations are based, in part, on the projected expenses and profits that MA organizations submit to CMS. These projections also affect (1) the extent to which MA beneficiaries receive additional benefits not provided under FFS and (2) beneficiary cost-sharing and premium amounts. The Patient Protection and Affordable Care Act (PPACA) required that, starting in 2014, MA organizations have a minimum medical loss ratio of 85 percent--that is, they must spend 85 percent of revenue on medical expenses, quality-improving activities, and reduced premiums. This report examines how MA organizations' actual expenses and profits for 2011 as a percentage of revenue and in dollars compared to projections for the same year, both for plans available to all Medicare beneficiaries and for plans with specific eligibility requirements. GAO analyzed data on MA organizations' projected and actual allocation of revenue to expenses and profits. The percentage of revenue spent on medical expenses reported in GAO's study is not directly comparable to the PPACA medical loss ratio calculation, as the final rule defining the calculation was issued after actual 2011 data were submitted. Medicare Advantage (MA) organizations' actual medical expenses, nonmedical expenses (such as marketing, sales, and administration) and profits as a percentage of total revenue were, on average, similar to projected values for plans available to all beneficiaries in 2011, the most recent year for which data were available at the time of the request for this work. 
MA organizations' actual medical expenses, nonmedical expenses, and profits were 86.3 percent, 9.1 percent, and 4.5 percent of total revenue, respectively. As a percentage of revenue, all three categories were within 0.3 percentage points of what MA organizations had projected. In addition, MA organizations received, on average, $9,893 in total revenue per beneficiary, slightly higher than the projected amount of $9,635. The percentage of revenue spent on medical expenses and profits varied between MA contracts. For example, while MA organizations spent an average of 86.3 percent of revenue on medical expenses, 39 percent of beneficiaries were covered under contracts where less than 85 percent of revenue was spent on medical expenses. In addition, the accuracy of MA organizations' projections varied on the basis of the type of plans offered under the contract. For example, contracts for private fee-for-service plans--a plan type with new provider network requirements in 2011--had average profit margins that were 4 percentage points lower than projected. In 2011, plans offered by MA organizations with specific eligibility requirements had higher-than-projected profits. Special needs plans (SNP), which serve specific populations, such as those with specific chronic conditions, had an 8.6 percent profit margin, compared to a projected 6.2 percent. This higher margin, combined with higher-than-projected revenue, resulted in SNPs reporting an average profit per beneficiary of $1,115, or 44 percent higher than projected ($777). Employer group plans, which are offered by employers or unions to their employees or retirees, as well as to Medicare-eligible spouses and dependents of participants in such plans, had a 7.6 percent profit margin, compared to a projected 4.2 percent. The higher profit margin, combined with higher-than-projected revenue, resulted in employer plans receiving an average profit per beneficiary of $861, or 108 percent higher than projected ($413). 
GAO requested comments from CMS, but none were provided.
To access the Internet, most residential users dial in to an ISP over a telephone line, although other physical means of access to the Internet— such as through a cable television line—are becoming increasingly common. For a residential customer, the ISP sends the user’s Internet traffic on to the backbone network. To perform this function, ISPs obtain direct connections to one or more Internet backbone providers. Small business users may also connect to a backbone network through an ISP, however, large businesses often purchase dedicated lines that connect directly to Internet backbone networks. An ISP’s traffic connects to a backbone provider’s network at a facility known as a “point of presence.” Backbone providers have points of presence in varied locations, although they concentrate these facilities in more densely-populated areas where Internet end users’ demands for access are greatest. If an ISP or end user is far from a point of presence, it is able to reach distant points of presence over telecommunications lines. Figure 1 depicts two hypothetical Internet backbone networks that link at interconnection points and take traffic to and from residential users through ISPs and directly from large business users. Once on an Internet backbone network, digital data signals that were split into separate pieces or “packets” at the transmission point are separately routed over the most efficient available pathway and reassembled at their destination point. The standards that specify most data transmissions are known as the Internet Protocol (IP) Suite. Under part of this protocol, streams of packets are routed to their destination over the most efficient pathway. Other aspects of the protocol facilitate the routing of packets to their appropriate destination by examining the 32-bit numeric identifier— or IP address—attached to every packet. Currently, IP addresses for North America are allocated by the American Registry for Internet Numbers (ARIN). 
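The 32-bit numeric identifier described above can be seen directly: a dotted-quad IPv4 address is simply four bytes packed into one integer. A minimal sketch using Python's standard library follows; the address shown is from the reserved documentation range, not an actual host.

```python
# An IPv4 address is a 32-bit integer, conventionally written as four
# dotted decimal octets. 192.0.2.1 is a reserved documentation address.
import ipaddress

addr = ipaddress.IPv4Address("192.0.2.1")
as_int = int(addr)        # 192*2**24 + 0*2**16 + 2*2**8 + 1
print(as_int)             # 3221225985
print(as_int < 2**32)     # True: fits in 32 bits
```

Routers use this integer form to look up the most efficient available pathway for each packet, as the protocol description above notes.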
There are many Internet backbone providers offering service in the United States. Boardwatch—an industry trade magazine—reports 41 backbone providers with a national network and many other regional backbones. Approximately five to eight of these national providers are considered to be “Tier 1” backbone providers. A Tier 1 provider is defined by Boardwatch as having a network of wide geographic scope, having a network with many IP addresses, having extensive information for traffic routing determinations, and handling a large percentage of transmissions. Unlike telecommunications services, the provision of Internet backbone service is not regulated by governmental communications agencies. In decisions dating back to the 1960s, when data signals began to flow over public telephone networks, FCC determined that “basic services”—the physical transport of data over telephone networks—would be regulated, but that “enhanced services”—the data-processing or computer-enhanced functions of data transmissions—constituted a vibrant and competitive market that should remain free of regulation. Congress maintained this distinction when it enacted the Telecommunications Act of 1996, terming these services “telecommunications” and “information,” respectively. No provisions were contained in the 1996 act pertaining to Internet backbone services; rather, the act sought to increase competition in other communications sectors, primarily the local telephone market. However, the treatment of these more established communications services and infrastructures under the Communications Act of 1934—as amended by the 1996 act—has indirectly affected the burgeoning Internet medium. Additionally, the act provided FCC and states the authority to take actions to encourage the deployment of advanced telecommunications capability. Two types of facilities are used for the exchange of data traffic by interconnected Internet backbone providers. 
The first type of facility, known as a “network access point” (NAP), enables numerous backbone providers to interconnect with each other at a common facility for the exchange of data traffic. Internet data traffic is also exchanged by backbone providers at “private” interconnections. Independent of the type of facility at which backbone providers exchange traffic, two different types of financial arrangements exist among backbone providers for traffic exchanges. In a “peering” relationship, backbone providers exchange data destined only for each other’s network generally without the imposition of a fee. Transit payments, which involve the payment by one backbone provider to another for the mutual exchange of traffic and for the delivery of traffic to other providers, have become more common with time. A NAP facilitates the interconnection of multiple backbone providers. In the early to mid-1990s, the National Science Foundation designed and partially funded four NAPs, each of which was managed by a different company. Since that time, other interconnection points have been constructed, and for purposes of this report, the term NAPs refers to approximately 10 major traffic exchange points that host backbone providers. Managed by different companies, NAPs are not uniform facilities; differences exist in terms of equipment, software, and data transmission rates. Although most backbone providers we interviewed use the NAPs, a few providers voiced concerns about them. In the first years of their existence, NAPs became congested with the rapid rate of growth in Internet traffic. Two of the providers with whom we spoke said that some NAPs were not well managed. Also, originally some NAP technology was not “scalable”— that is, beyond some level, it was very costly to increase the amount of traffic that could be exchanged at a NAP. If traffic exchange at a NAP became congested, service quality could be compromised. 
Two typical problems caused by congestion are latency (delay in the transmission of traffic) and packet loss (transmitted data that are lost and never reach their destination). For example, one backbone provider told us that the loss of packets at some NAPs had sometimes reached 50 percent. The congestion and poor quality of connections at the NAPs led backbone providers to engage in another type of traffic exchange known as "private interconnection." Private interconnection refers to the exchange of traffic at a place other than a NAP. Usually, these private interconnections involve two companies entering into a bilateral agreement to exchange traffic; no third party manages the traffic exchange. The parties interconnect their networks at any feasible location, such as a facility of one of the providers. Because of the private nature of these agreements, the number of private interconnections that currently exist across the United States, according to one company representative, is not known. Despite a variety of technological developments that have improved traffic flow at NAPs, we found that for the providers we interviewed, the majority of Internet traffic exchange occurs at private interconnection points. Of 17 backbone providers with whom we spoke, 15 used both NAPs and private interconnections; the remaining 2 used only private interconnections, avoiding the NAPs entirely. Slightly more than half of the 15 providers using both NAPs and private interconnection said they exchanged more than 80 percent of their traffic at private exchange points. Of the 17 companies that we met with, 10 provided estimates of how their mix of private interconnection and NAP use would likely change in the future. Nine of the 10 stated that they either plan less use of NAPs in the next few years or do not see their mix of NAPs and private interconnection changing; only one company said that it was likely to make greater use of NAPs in the future.
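The two congestion symptoms just described can be quantified with simple arithmetic. The sketch below uses made-up measurement numbers, except for the 50 percent loss figure one provider cited:

```python
# Illustrative arithmetic for the two congestion symptoms; all counts
# and delay samples here are invented for demonstration.
def packet_loss_rate(sent: int, received: int) -> float:
    """Fraction of transmitted packets that never arrived."""
    return (sent - received) / sent

def average_latency_ms(samples_ms: list[float]) -> float:
    """Mean transit delay over a set of sample packets."""
    return sum(samples_ms) / len(samples_ms)

# The worst case one provider reported: half of all packets lost.
print(packet_loss_rate(sent=1000, received=500))   # 0.5
print(average_latency_ms([40.0, 55.0, 70.0]))      # 55.0
```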
We found that some Internet backbone providers value several features of NAPs. For example, when a company interconnects at a NAP, it saves on equipment costs and administrative overhead. Representatives of two companies with whom we spoke noted that the NAPs play an important role in helping to keep the market for backbone service open for entry, and thus more competitive, because NAPs provide new backbone firms an efficient, low-cost method for exchanging traffic with numerous other providers. When the commercial Internet began, only a few major backbone providers of relatively similar size existed, each of which sent and received roughly equal amounts of traffic. The similarities among these backbone firms led them to view each other as "peers." These providers elected to exchange traffic for free, rather than trying to measure the actual traffic exchanged and developing a payment method. In a peering arrangement, two backbone providers agree to exchange traffic destined only for each other's networks. As depicted in figure 2, the peering agreement between backbone provider A and backbone provider B only covers traffic going from A's network to B's network and vice versa. For backbone A to move traffic to backbone C's network under peering, it must have a peering agreement directly with backbone C. By the mid to late 1990s, another financial arrangement known as "transit" emerged. Transit and peering are distinctive in two key respects. First, while peering generally entails traffic exchange between two providers without payment, transit entails payment by one provider to another for carrying traffic. Transit agreements thus constitute a supplier-customer relationship between some backbone providers, much like the relationship between a backbone provider and a nonbackbone customer (such as an ISP).
Second, when a backbone provider buys transit from another provider, it obtains not only access to the "supplier's" backbone network, but also access to any other backbone network with which its supplier peers. Regarding physical locations, however, both transit and peering take place at NAPs as well as at private interconnection points. Currently, there is a segregation of backbone providers into "tiers." The top tier or "Tier 1" providers generally peer with each other and sell transit to smaller backbone providers. However, we found that smaller providers often peered with each other and were able, in some cases, to peer with larger providers. The illustration in figure 3 shows backbone provider C as a transit customer of backbone provider B and backbone providers B and A as peers. In this case, traffic originating on backbone C can get to backbone B's network as well as to that of backbone A (with which backbone C does not have an independent relationship) because B will pass C's traffic off to A as part of its delivery of transit service to C. Thus, a smaller backbone provider generally need only buy transit from one or two large providers to achieve universal connectivity. We found that it is generally not viewed as economical for a backbone provider to peer with a less geographically dispersed backbone provider. Thus, even if there were equal traffic flows, the larger provider would tend to carry traffic a greater distance—which, according to a larger backbone provider we spoke with, ultimately means more costs are imposed on its infrastructure—when it peers with a provider with a smaller or less widely dispersed network. Figures 4 and 5 illustrate this dynamic. In figure 4, backbone providers A and B are of similar size, and traffic between the two could be carried mostly by one backbone provider in one direction, but mostly by the other in the opposite direction.
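The reachability difference between peering and transit can be sketched in a few lines. The topology below mirrors the figure 3 arrangement (C buys transit from B; A and B peer), but the provider names and data structures are, of course, hypothetical:

```python
# Under peering, a provider can hand off traffic only for destinations on
# the peer's own network; under transit, a customer also reaches every
# network its supplier peers with. Names and topology are illustrative.
peers = {("A", "B")}     # A and B peer with each other
transit = {"C": "B"}     # C buys transit from B

def reachable(provider: str) -> set[str]:
    """Networks whose destinations this provider can deliver traffic to."""
    nets = {provider}
    for x, y in peers:                  # direct peers are reachable
        if provider == x:
            nets.add(y)
        if provider == y:
            nets.add(x)
    supplier = transit.get(provider)    # transit adds the supplier's reach
    if supplier:
        nets |= reachable(supplier)
    return nets

print(sorted(reachable("C")))   # ['A', 'B', 'C'] -- C reaches A through B
print(sorted(reachable("A")))   # ['A', 'B'] -- peering alone stops at B
```

This captures why a smaller provider generally needs transit from only one or two large providers to achieve universal connectivity.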
In figure 5, backbone provider D is smaller than backbone provider C, with more limited points at which traffic can be brought onto the network. When backbones C and D exchange traffic, C must carry the traffic much farther on the return path before it can hand off the data packets to D. Therefore, C might consider D to be benefiting from C’s network investment and thus, C would be more likely to view D as a customer purchasing access to its network than as a peer in traffic exchange. The “tiering” of Internet backbone providers and the dual system of peering and transit agreements have caused controversies. Several of the non-Tier 1 backbone providers with whom we spoke expressed concerns about their inability to peer with the largest providers. In particular, we were told that the inability of non-Tier 1 providers to peer with Tier 1 providers puts smaller companies—which must therefore purchase transit service—at a competitive disadvantage. We were also told that peering policies should be made public. To some extent, market forces may be relieving some of these problems. First, despite the view that smaller providers have no choice but to buy transit, some backbone providers with whom we spoke stated that the market is competitive, and transit rates have been decreasing. Second, eight of the backbone providers with whom we spoke (some of which were Tier 1 providers and some of which were not) said they already had posted or soon would be posting their peering policies on their Web sites or otherwise making them publicly available. Perhaps most interesting, we found that some non-Tier 1 backbone providers do not want to peer with the largest backbone providers. For example, one provider spoke critically of the quality of peering connections and the quality of service provided between peers. Some stated that it is difficult to guarantee their own clients a certain level of service if they receive few guarantees themselves—a common occurrence under peering. 
Transit customers, however, do contract for a specified level of service for such items as "uptime"—the functioning of a network without impairment or failure. No official data sources were identified that would provide information on the structure and competitiveness of the Internet backbone market. Market participants we interviewed—Internet backbone providers, ISPs, and other end users—described the Internet backbone market as competitive. Market participants cited several characteristics, such as increasing choice of providers and lower prices, as evidence of the competitiveness of the market. However, officials also described to us factors that may reduce competition in this market or cause other problems, such as the limited number of Tier 1 providers, the limited choice of providers in rural areas, the manner in which Internet addresses are assigned, and the lack of control or knowledge about the movement of traffic across backbone networks. We were also told that the choice of local telephone companies providing access to Internet backbone networks may be limited, creating problems for providers of Internet services. We found no official data source that could provide information to allow an empirical investigation of the nature of competition in the Internet backbone market. In particular, we found little in the way of official or complete information on the relative size of companies—even the largest companies—operating in the market. Neither FCC nor NTIA collects data on the provision of Internet backbone services. However, FCC does solicit public comments on the deployment of the underlying telecommunications infrastructure that supports backbone services for its report on advanced telecommunications capabilities under section 706 of the Telecommunications Act of 1996. DOJ often collects data for merger-specific analyses—as it did in two cases that involved an assessment of backbone assets—but such data are not publicly available.
We also found that neither the Bureau of Labor Statistics nor the U.S. Census Bureau currently collects data directly on Internet backbone providers. Both agencies collect only aggregate data on services provided by telecommunications providers. To investigate the degree of competition, we spoke with an array of buyers and sellers of backbone connectivity and asked questions that were designed to provide information about the competitiveness of the market. For example, we asked questions about the availability of choice among providers in the market, the viability of purchasing transport to a distant location to connect to a backbone provider, the length of contracts for backbone connectivity, the types of service guarantees buyers receive from sellers, the ability of buyers to negotiate favorable contract terms, and the factors that were important to buyers when choosing a backbone provider. Representatives of ISPs and end users we interviewed throughout the country described the Internet backbone market as competitive. Most of these providers stated that they have several choices of backbone providers from which to obtain services. Although a few ISP representatives noted a relatively limited number of companies among the Tier 1 providers, they nonetheless considered the market to be competitive with greater choices across the entire range of backbone providers. Similarly, most non-Tier 1 backbone providers stated that they can purchase transit from a number of Tier 1 backbone providers. A few ISPs and other purchasers of backbone services also noted that the extensive choice of backbone providers enables them to engage in "multihoming"— purchasing backbone services from more than one provider—to provide redundant access that enhances ISPs' assurances to customers of uninterrupted Internet connectivity.
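The redundancy benefit of multihoming can be illustrated with a small failover sketch. The provider names and link states below are invented for illustration, not drawn from any company we interviewed:

```python
# Hypothetical failover sketch for multihoming: an ISP buying backbone
# connectivity from two providers can route around an outage on one.
providers = {"backbone_1": True, "backbone_2": True}   # True = link is up

def pick_route() -> str:
    """Return the first backbone provider whose link is up."""
    for name, up in providers.items():
        if up:
            return name
    raise RuntimeError("all backbone links are down")

print(pick_route())                 # backbone_1
providers["backbone_1"] = False     # simulate an outage on the primary
print(pick_route())                 # backbone_2 -- redundancy preserved
```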
We found, based on our discussions with ISPs and other purchasers of backbone connectivity, that several characteristics of the market show evidence of its competitiveness. In particular:

- Many ISPs noted that, coincident with increased choice of backbone providers throughout the country, the price of backbone connectivity had declined significantly in recent years.
- Representatives of several companies told us that although they were presented with standard contracts by backbone providers, they were able to negotiate terms and conditions in their contracts that were important to them.
- A few ISP representatives with whom we met said they receive frequent sales calls from multiple backbone providers.
- An ISP representative noted that many backbone providers are working to increase the speed and decrease the latency of transmissions of their networks to improve their competitiveness in the market.
- Even though there have been bankruptcies and consolidation in this market, a few new backbone providers have entered the market in the recent past.
- Some backbone providers are filling market niches by offering customers additional or unique services to complement their backbone services.

The majority of market participants with whom we spoke expressed the view that the Internet backbone market is competitive, if not highly competitive. At the same time, many of these respondents noted factors that might be reducing the level of competition or creating other problems in this market. In particular, we were told that (1) a small number of large backbone providers stand out as the premier providers, (2) choice among backbone providers may be more limited in rural areas, (3) ISPs are concerned about the way Internet addresses are assigned to users, and (4) ISPs and other end users are frustrated by their minimal control and understanding about how their traffic moves across Internet backbone networks.
ISPs and other end users indicated to us a general perception that Tier 1 companies are “different” or superior when compared with other backbone providers. For example, 17 of the 24 ISPs and all 8 of the end users we interviewed purchase backbone connectivity from at least 1 of the 5 Tier 1 backbone providers identified in a recent FCC Working Paper. Similarly, 11 ISPs and 3 end users we interviewed explicitly stated that it was important to them to purchase service from a Tier 1 provider. Finally, many ISPs and end users stated that it was important to them to purchase backbone connectivity from a provider possessing certain network characteristics. Commonly cited characteristics of importance were a network with a broad geographic scope, many customers, significant capacity, and good peering arrangements with other providers. These are all common characteristics of Tier 1 backbone providers. Because Tier 1 providers are viewed as a special class of backbone providers, the existence of approximately 40 national backbone providers may not fully reveal the competitiveness of this market. Instead, it appears that only the 5 to 8 Tier 1 backbone providers are viewed as competitors for primary backbone connectivity. However, most of the ISPs and end users with whom we spoke nonetheless stated that the market is competitive and they have significant choice of provider. It appears that even if the “relevant” market for primary backbone connectivity is the Tier 1 providers, that market segment may be viewed as competitive. A remaining concern regarding the “tiered” segmentation of the market is the potential for the number of Tier 1 providers to decline or for one of these providers to become dominant. For example, the recent economic downturn in the communications sector may portend a further shakeout of backbone providers. 
Several of the company officials we interviewed expressed concern that there would be consolidation among the Tier 1 providers and thus noted the importance of antitrust oversight of this industry. Moreover, both an FCC Working Paper and the Antitrust Division of DOJ have noted that in industries such as the Internet backbone market, interconnection among carriers is critical to the quality of service consumers receive. As such, a much larger provider may have less incentive to maintain good interconnection quality with other providers because, without quality interconnection, customers may have an incentive to buy service from the largest provider with the best-connected network. This would give the larger provider a competitive advantage, which in turn could cause the market to "tip"—that is, more and more users would choose connectivity from the larger network—risking a monopolization of the industry. Because of this concern, both agencies have noted that if one of the Tier 1 providers were to grow considerably larger than the rest, there could be competitive concerns. Members of Congress are often concerned about whether telecommunications services reach rural areas. Several representatives of companies we interviewed noted that there are fewer Internet backbone facilities running through rural areas and fewer points of presence in those areas. As such, purchasers of backbone connectivity in rural areas may have fewer choices among providers than their counterparts in more urban locations. One point made by two rural providers is that rural areas sometimes have subsidized networks (e.g., state networks or networks funded, in part, by governmental subsidy) that may actually discourage private backbone companies from entering and thriving in such markets. Despite the view that rural areas have fewer choices among backbone providers, most companies we interviewed in rural areas purchased "transport" services to connect to an Internet backbone network.
That is, they were able to transmit their traffic over fiber lines, most often owned by one or more local telephone carriers, to a backbone provider’s point of presence that was perhaps hundreds of miles away. Eighteen of the 24 ISPs and 3 of the 8 end users we interviewed used transport from their location to another location for at least some of their Internet traffic. Sometimes transport was used to move data traffic to a nearby city that was not very far away—perhaps 30 to 50 miles. But in some cases—particularly for ISPs in rural areas—traffic was transported a few hundred miles to a point of presence of a backbone provider. The majority of officials from these companies told us that the quality of Internet service is not diminished by transporting traffic across such distances. Because many ISPs and end users told us that distant transport was a viable option for obtaining Internet backbone connectivity, even ISPs and users in more rural areas told us that they generally had choice among backbone providers that could receive traffic at varied distant locations. The one disadvantage of distant transport noted by several providers, however, was cost. Some company officials noted that it generally costs more to purchase transport to a distant location than it does to connect to a backbone at a local point of presence. Two companies specifically mentioned that they had or were planning to move their facilities to more urban locations because of the cost of distant transport. Several ISPs and end users with whom we spoke expressed concern about the manner in which Internet addresses are allocated. Most ISPs and other end users—except for fairly large organizations—do not directly obtain their own IP addresses, but they instead receive a block of IP addresses from a backbone provider. 
In particular, when an ISP obtains an Internet connection from a backbone provider, it also generally receives a block of IP addresses from among the addresses that are assigned by ARIN to that backbone provider. This method of IP address allocation was adopted for technical efficiency reasons—that is, allocations made in this manner reduce the number of addresses that need to be maintained for traffic routing purposes. (See app. II for detailed information on IP address allocations). While the method of allocating IP addresses in large blocks enables backbone routers to operate efficiently, some of the ISPs and end users with whom we spoke also told us that it makes it difficult for smaller entities to switch backbone providers. In particular, if an ISP were to change its backbone provider, it would generally have to relinquish its block of IP addresses and get a new block of addresses from the new backbone provider. Several ISPs and end users with whom we spoke told us that changing address space can be time consuming and costly. We found that the degree of difficulty in changing address space depends on how an individual company’s computer network is configured. Two respondents expressed concern about the loss of customers due to a change of IP addresses. A few also told us that it is not uncommon for an ISP to retain a relationship with its original backbone provider—paying for a minimal level of connectivity to that provider—in order to avoid having to go through a disruptive readdressing process. It appears, therefore, that customers’ feelings of being tied to a provider may lessen the effective level of competitiveness in this market. A concern among several market participants we interviewed was the difficulty of guaranteeing customers a given level of quality for Internet services. We were told that this difficulty is related to the way that the Internet is engineered. 
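The block-based allocation just described can be illustrated with Python's ipaddress module. The address blocks below are reserved documentation ranges, not actual provider allocations:

```python
import ipaddress

# An ISP's sub-block is carved out of its backbone provider's larger
# allocation, so the ISP's addresses fall inside the provider's block.
# Both blocks here are reserved documentation ranges, used as examples.
backbone_block = ipaddress.ip_network("198.51.100.0/24")   # provider's block
isp_block = ipaddress.ip_network("198.51.100.64/26")       # ISP's sub-block

print(isp_block.subnet_of(backbone_block))      # True

# Switching providers means renumbering into a block drawn from the new
# provider's allocation, since the old block belongs to the old provider:
new_backbone_block = ipaddress.ip_network("203.0.113.0/24")
print(isp_block.subnet_of(new_backbone_block))  # False
```

The second check illustrates why changing backbone providers generally forces an ISP through the readdressing process described above.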
In particular, several of those with whom we spoke noted that Internet traffic is exchanged among providers on a “best efforts” basis—that is, Internet traffic is routed according to a set of protocols aimed at providing the best routing possible at a given time. However, the Internet was not engineered to enable extremely high quality service at all times—as are telephone networks—and the quality of Internet services can be compromised when high levels of traffic flow lead to congestion. Several of the market participants we interviewed were particularly concerned about their ability to understand where and why problems have occurred. These company representatives told us that when they contact their backbone provider to report service degradation they are sometimes told that the problem is with another interconnected backbone network. Because the Internet is a network of interconnected networks with little data available or reported on service disruptions or outages, finding the source, cause, or reason for a problem may be difficult. Thus, ISPs and end users expressed frustration that accountability for traffic transmission problems is lacking. Several ISPs noted, for example, that they receive service level guarantees from their backbone provider but that collecting remuneration for “downtime”—the time that a network has failed or otherwise is nonfunctional—is difficult because they are unable to prove that the problem occurred on their backbone provider’s network. One backbone provider with whom we spoke also noted that the quality problems inherent in the Internet lead some customers—particularly business clients—to purchase expensive private network services. One of the initiatives of the current and fifth Network Reliability and Interoperability Council (NRIC V) is a trial program for voluntary reporting of outages by providers not currently required to make such reports to FCC, such as Internet backbone providers. 
A focus group of the Council will evaluate the effectiveness of the program upon its completion and analysis of trial data, and it will make a recommendation on outage reporting of these networks. We were told that, due to concerns by some Internet providers about reporting network outages to a governmental agency, there was little participation in the program by Internet providers through the first half of 2001. Although the Internet backbone market appears to be competitive, another market that is essential to the functioning of the Internet may be less so. Most ISPs and other end users connect to a backbone provider’s point of presence through the local telecommunications infrastructure. These systems are typically owned and operated by incumbent telephone companies—those providing local telephone service prior to enactment of the 1996 act. Many of the market participants with whom we spoke noted that local telephone markets are, in their view, close to monopolistic; and some noted that several companies attempting to compete against incumbent local telephone carriers have recently gone out of business. Based on our interviews with market participants, it appears that a limited choice of local carriers may affect the providers of Internet services. In particular, interviewees stated that incumbent telephone carriers take a long time to provision or provide maintenance on special access services and other high speed access lines—which are often used to link businesses (such as an ISP) to an Internet backbone point of presence. Additionally, some companies we spoke with expressed concern about slow or limited deployment of high-speed Digital Subscriber Line (DSL) service in residential areas. Some backbone providers and ISPs said that these problems were more severe or more limiting in rural areas. 
For instance, we were told that rural areas are least likely to have competitors to the local carrier, and the incumbents were less likely to roll out DSL in their more rural markets. Incumbent local carriers, on the other hand, have stated that there is considerable competition in the provision of special access service. One such carrier with which we spoke noted that any delay in its own provisioning of these lines is due to the high expense of deploying the necessary infrastructure and to technical difficulties in rolling out DSL, especially in more rural areas. This carrier also noted that FCC found the percentage of all local lines served by competitors had doubled to approximately 8 percent in 2000. New Internet services, such as video streaming and voice telephone calls over the Internet, are expected to become increasingly common in the coming years. Both Internet backbone networks and local communications infrastructure must have sufficient bandwidth and technical capabilities to support such services. In response to problems of latency and packet loss associated with Internet transmissions, various initiatives and efforts are under way to make improvements in the functioning of the Internet and to build alternative networks that are more robust and reliable. We found that most of those with whom we spoke were optimistic that backbone capacity and technical features would adapt to new needs, but concern was expressed that limited broadband capabilities in local telephone markets could stall certain new applications. Incumbent local telephone companies have stated that the rollout of DSL service is hampered by the cost of reengineering parts of the network and by existing regulations that require them to sell parts of their networks to competitors at cost-based rates.
A variety of the company representatives with whom we spoke told us that new services and some services that were traditionally regulated (such as telephone calls) are expected to become more commonly provided over the Internet in the coming years. Many companies are developing technologies to enable voice services to be provided over IP networks. At present, however, many backbone networks are not well designed to provision such "time-sensitive" services. Specifically, real-time services such as IP telephony and interactive video require "bounded delays"—that is, these services require very low and uniform delays between sender and receiver in order for the service to be of adequate quality. Also, more broadband content is expected to be transmitted over the Internet. Before such broadband content can be provided, both the backbone and the local communications infrastructure must have sufficient bandwidth. Many industry representatives with whom we met told us that latency and the loss of data packets due to traffic congestion are a consequence of the current protocols for transmitting Internet traffic. As transmissions of time-sensitive applications over the Internet become increasingly common in the future, these problems may become particularly acute. A few of those we interviewed noted that these applications can run well across one backbone network, but when traffic must traverse more than one network, quality cannot be assured given current routing protocols. We found that participants in Internet markets have begun to address latency and reliability problems in Internet backbone networks. For example:

- In addition to its experimental outage reporting initiative, NRIC V is in the process of evaluating and will report on the reliability of "packet-switched" networks. The council is also examining issues related to interconnection and peering of Internet backbone providers and the sufficiency of the best efforts standard for Internet transmissions as more time-sensitive services are provided over the Internet.
- Companies have emerged to build and provide services over networks that do not rely as much on traffic exchange across networks. For example, we found that a few providers are building and relying on private data networks—rather than the Internet—for the transmission of voice services. Similarly, some companies are building "virtual private networks"—networks configured within a public network for data transmissions that are secured via access control and encryption.
- Companies reduce reliance on backbone service—and thus increase transmission speed—by caching frequently used content on their servers. In addition, companies have emerged that specialize in caching frequently accessed content and storing it in varied geographic locations, thus making it more quickly accessible to customers.
- Because the Internet is not viewed as conducive to supporting research capabilities of high-speed technologies and other advanced functions, alternative methods for such research have emerged. For example, "Internet2" is a partnership of universities, industry, and government formed to support research and the development of new technologies and capabilities for future deployment within the Internet.

According to many of the company officials we interviewed, there appears to be ample deployment of fiber optic cable in Internet backbone networks to support high bandwidth services. Similarly, we were told that capacity continues to be built by backbone providers and others and that backbone networks' capacity will not be a bottleneck for the deployment of broadband applications.
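The caching approach mentioned above can be sketched minimally. The URL and fetch function below are stand-ins for illustration, not a real network implementation:

```python
# Minimal sketch of the caching idea: repeat requests are served from a
# local store, so only the first request for an item crosses the backbone.
cache: dict[str, str] = {}
backbone_fetches = 0

def fetch_over_backbone(url: str) -> str:
    """Stand-in for retrieving content across a backbone network."""
    global backbone_fetches
    backbone_fetches += 1
    return f"content of {url}"          # placeholder payload

def get(url: str) -> str:
    if url not in cache:                # only cache misses cross the backbone
        cache[url] = fetch_over_backbone(url)
    return cache[url]

get("http://www.example.com/page")      # miss: crosses the backbone
get("http://www.example.com/page")      # hit: served locally
print(backbone_fetches)                 # 1
```

Distributing such caches across varied geographic locations, as the specialist companies described above do, moves popular content closer to end users and reduces backbone traffic further.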
However, concerns were expressed to us that shortcomings in the local telephone market were likely to intensify in the future due, in part, to the increase in demand for broadband applications and content. We found that some companies are offering services to address this problem by attempting to bypass incumbent telephone companies’ facilities and bring services directly to customers. However, the majority of these efforts are focused on business customers in urban areas. For example, we found: Metropolitan fiber rings—fiber optic cables encircling central business districts of urban areas—are being constructed as an alternative to using incumbent carrier services. Business customers purchase a direct connection to the fiber ring, which is connected directly to the backbone point of presence. Wireless direct access is also becoming available that will enable a company’s data traffic to bypass local telecommunications infrastructures. While solutions such as these hold promise for greater choice for business customers in urban areas, market forces may not naturally address constraints in capacity of local telecommunications infrastructure in certain areas, particularly in rural, residential locations. Instead, representatives expressed concern that the deployment of broadband telephone facilities in residential and rural areas may not keep up with demand. Some of those we spoke with gave the example of limited DSL deployment in many areas. An incumbent local telephone provider we spoke with stated that it is aggressively rolling out DSL service, but that the service is costly to roll out and often requires significant reengineering of its network. Incumbent providers also have noted publicly that DSL rollout is hampered by certain regulations that require incumbents to sell parts of their network (including DSL lines) to entrants at cost-based rates. 
Legislation is pending in the 107th Congress that would address these concerns, and proponents of this legislation have stated that this will advance the deployment of broadband in residential and rural areas. Opponents of the legislation believe the bill will not foster increased deployment of broadband services and may stifle competition in the local telephone market. Other bills have been introduced in Congress proposing various other approaches and strategies to accelerate the deployment of high-speed data services. In the 6 years since the federal government ended its sponsorship of a key backbone network, the Internet has changed the way people of the world live, work, and play. Its rapid growth is seen in the substantial investments made by private sector firms in backbone networks and interconnection facilities, by the proliferation of interactive applications and content, and by the exponential increase in the connectivity of end users. These developments are particularly noteworthy in light of the dynamic nature of the Internet backbone marketplace—Internet backbone providers not only compete with each other for customers but also cooperate for the exchange of traffic. The success of the Internet, as evidenced by its growth, evolution, diversity, and cooperative structure, has occurred with minimal government involvement or oversight. Despite the Internet’s success and the competitiveness of the Internet backbone market, several issues of concern regarding this market were raised to us during the course of our study. Market participants noted the importance of Tier 1 backbone providers and the potential for reduced competition if consolidation were to occur at the Tier 1-provider level. The inability of backbone customers to ascertain the causes of service degradation or traffic disruptions was also expressed to us, along with concerns about the adaptability of the Internet to new services. 
These and other concerns underscore the need for adequate information on such items as, for example, the geographic scope of backbone networks, the number of backbone providers’ customers, the number of IP addresses assigned to providers, traffic flows, and outages. In the absence of adequate information, it is difficult to fully ascertain the quality of service, the reasons for problems when they occur, and the extent of market concentration and competition in the Internet backbone market. The adaptability of backbone networks for new services, such as Internet-based voice and video services, foretells a trend commonly identified as “convergence” in the broader communications sector and the increasing importance of the Internet to the U.S. economy. This expectation of greater convergence was widely shared by the market participants we interviewed for this study and for other studies we have conducted at your request over the past 3 years. Traditionally regulated services—such as voice telephone and video services—are already migrating to the Internet, and there is a strong expectation that they will soon become common applications used by residential and business Internet users. Moreover, advances in technology are changing the very nature of the Internet. In the last half decade, the Internet has evolved from a nascent but promising information tool to a 21st century medium central to commerce and communications for Americans and citizens the world over. The implications of convergence and greater future reliance on the Internet are at present largely unknown. No evidence came to light in the course of this study to suggest that the long-standing hands-off regulatory approach for the Internet has not worked or should be modified. Further, FCC said it believes that the appropriate means to collect information on Internet backbone networks at the present time is through informal and experimental efforts, which are currently under way. 
Because of the trend towards convergence in the communications marketplace and the nation’s increasing reliance on the Internet, however, FCC may need to periodically reassess its data collection efforts to evaluate whether they are providing sufficient information about key developments in this industry. FCC should develop a strategy for periodically evaluating whether existing informal and experimental methods of data collection are providing the information needed to monitor the essential characteristics and trends of the Internet backbone market and the potential effects of the convergence of communications services. If a more formal data collection program is deemed appropriate, FCC should exercise its authority to establish such a program. We provided a draft of this report to the FCC, NTIA of the Department of Commerce, and DOJ for their review and comment. FCC and NTIA officials stated that they were in general agreement with the facts presented in the report. Technical comments provided by FCC, NTIA, and DOJ officials were incorporated in this report as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report for 14 days after the date of this letter. At that time, we will send copies to interested congressional committees; the Chairman, FCC; the Assistant Secretary of Commerce for Communications and Information, Department of Commerce; the Assistant Attorney General, Antitrust, DOJ; and other interested parties. We will also make copies available to others upon request. If you have any questions about this report, please call me at 202-512-2834. Key contacts and major contributors to this report are listed in appendix IV. 
To obtain information about the characteristics and competitiveness of the Internet backbone market, the Chairman and the Ranking Member of the Subcommittee on Antitrust, Business Rights and Competition, Senate Committee on the Judiciary, asked us to report on (1) the physical structure and financial arrangements among Internet backbone providers, (2) the nature of competition in the Internet backbone market, and (3) how this market is likely to develop in the future. To respond to these objectives, we gathered information from a variety of sources, including government officials, industry participants, and academics familiar with the functioning of this market. We interviewed officials and obtained documents from the Federal Communications Commission, the Department of Justice, the National Telecommunications and Information Administration of the Department of Commerce, the National Science Foundation, the Bureau of Labor Statistics, and the Census Bureau. We also interviewed two national Internet industry trade associations and three academics with expertise in this area. To obtain information from a wide variety of participants within the Internet backbone market, we visited locations in 12 states with varying characteristics. We included large and small cities and rural areas from various regions of the country. Other criteria used for selection of areas were proximity to Internet points of presence, which are access points to the Internet, and proximity to network access points (NAP), which are points where Internet backbones interconnect. Also considered were the presence of other features, including regional backbone networks, statewide educational or government networks, state Internet Service Provider (ISP) associations, or Native American reservations. In the selected localities, we conducted 55 semistructured interviews with participants in the Internet backbone market between January and June 2001. 
For these interviews, we used interview guides containing questions concerning background information about the company, connectivity to backbone networks, business relationships in the backbone market, service quality issues, and views on competition in this market and on other public policy issues. We interviewed eighteen Internet backbone providers of varying size; two miscellaneous Internet companies that provide backbone-like services; twenty-four Internet service providers of varying size; eight end users of backbone services, including a college, a state government, corporations, and providers of content and Web hosting; two state-level ISP associations; one Internet equipment manufacturer; and one incumbent local telephone company. Responses from interviewees were evaluated and general themes were drawn from the aggregated responses and from the aggregated responses of relevant subsets of respondents. These themes are presented in this report. We contacted an additional 32 market participants and industry representatives for purposes of conducting interviews to support this study. In these instances, we were not able to schedule an interview. In some cases, our request for an interview was declined, our telephone contacts were not returned, or we were unable to schedule an interview after repeated discussions with company officials. In addition to the information collected through interviews, we also conducted technical, legal, and regulatory research on the characteristics and competitiveness of the Internet backbone market. Each individual network or node that is connected to the Internet is identified by an Internet Protocol (IP) address—a number that is typically written as four numbers separated by periods, such as 10.20.30.40 or 192.168.1.0. When information is sent from one network or node to another, the packet of information includes the destination IP address. 
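The dotted-quad notation described above is simply a convenient way of writing a single 32-bit number. As a minimal illustration using Python's standard ipaddress module (the example address is arbitrary, not drawn from this study):

```python
import ipaddress

# An IPv4 address is one 32-bit number; the familiar dotted-quad form
# writes each of its four bytes in decimal, separated by periods.
addr = ipaddress.IPv4Address("192.168.1.0")
as_int = int(addr)                    # the address as a single integer
back = ipaddress.IPv4Address(as_int)  # and converted back to dotted-quad form
print(as_int, back)                   # 3232235776 192.168.1.0
```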
Because the IP deals with inter-networking—the exchange of information between networks—the IP address is based on the concept of a network address and a host address that uniquely identifies a computer connected to the Internet. The network address indicates the network to which a computer is connected, and the host address identifies the specific computer on that network. Devices known as “routers” send data packets from one network to another by examining the destination IP address of each packet. In its memory, the router contains a “routing table,” which contains information specifying all of the IP addresses of other networks. The router compares a packet’s destination IP address with the information contained in the routing table to determine the network to which the packet should be sent. In order to ensure that packets from one network can reach any other network, the router must include an entry for each possible network. As more and more network addresses come into use, there is concern about the growth in the number of routing table entries. Historically, IP addresses were organized into three commonly used classes—Classes A, B, and C. For Class A, there are 126 possible network addresses, each with nearly 17 million hosts. Slightly more than 16,000 networks may have a Class B address, each with over 65,000 hosts. Finally, there can be approximately 2 million networks with a Class C address, each with a maximum of 254 host addresses. As the Internet grew, engineers quickly identified the problems associated with exhaustion of Class B addresses and the increasing number of Class C address entries in routing tables and developed a solution known as Classless Inter-Domain Routing (CIDR). CIDR treats multiple contiguous Class C addresses as a single block that requires only one entry in a routing table. 
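The host counts and the CIDR aggregation described above can be checked directly. The sketch below uses Python's standard ipaddress module; the address blocks and interface names are hypothetical, chosen only for illustration:

```python
import ipaddress

# Host counts follow from the size of the host field: a Class C network has
# an 8-bit host field, so 2**8 - 2 = 254 usable addresses (the all-zeros and
# all-ones host values are reserved); a Class B has a 16-bit host field.
assert 2**8 - 2 == 254        # Class C hosts
assert 2**16 - 2 == 65534     # Class B hosts

# CIDR lets contiguous Class C (/24) blocks be announced as a single block.
# Here four adjacent /24s collapse into one /22 routing-table entry.
blocks = [ipaddress.ip_network(f"198.51.{i}.0/24") for i in range(100, 104)]
merged = list(ipaddress.collapse_addresses(blocks))
print(merged)  # [IPv4Network('198.51.100.0/22')]

# A router forwards a packet by finding the table entry whose network
# contains the destination address (the most specific match wins).
table = {
    ipaddress.ip_network("198.51.100.0/22"): "interface-A",  # hypothetical
    ipaddress.ip_network("0.0.0.0/0"): "default-route",
}
dest = ipaddress.ip_address("198.51.102.7")
match = max((net for net in table if dest in net), key=lambda n: n.prefixlen)
print(table[match])  # interface-A
```

The aggregation is what reduces routing-table growth: a router that once needed four entries for the four Class C networks needs only one for the combined /22.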
This method of IP address allocation was adopted for technical efficiency reasons—the number of IP addresses that must be maintained in each router for traffic routing purposes is substantially reduced. However, this method of IP address allocation presents unique problems for smaller ISPs and other entities. If an entity seeking IP addresses cannot utilize a large block of addresses issued by the American Registry for Internet Numbers (ARIN), the entity must obtain its addresses from among the allocations made by ARIN to its Internet backbone provider. ISPs and end users with whom we spoke expressed concern about this method of IP address allocation.

In addition to those named above, Naba Barkakati, John Karikari, Faye Morrison, Lynn Musser, Madhav Panwar, Ilga Semeiks, and Mindi Weisenbloom made key contributions to this report.
Although most Americans are familiar with Internet service providers that give consumers a pathway, or "on-ramp," to the Internet, few are familiar with Internet backbone providers and backbone networks. At the Internet's core are many high-capacity, long-haul "backbone" networks that route data traffic over long distances using high-speed fiber lines. Internet backbone providers compete in the marketplace and cooperate in the exchange of data traffic. The cooperative exchange of traffic among backbone providers is essential if the Internet is to remain a seamless and widely accessible public medium. Interconnection among Internet backbone providers varies both in terms of the physical structure and financial agreements of data traffic exchange. The physical structure of interconnection takes two forms: (1) the exchange of traffic among many backbone providers at a "network access point"--a common facility--and (2) the exchange of traffic between two or more backbone providers at "private" interconnection points. No publicly available data exist with which to evaluate competitiveness in the Internet backbone market. Evolution of this market is likely to be largely affected by two types of emerging services. First, demand is likely to rise for time-sensitive applications, such as Internet voice systems. Second, more "broadband"--bandwidth-sensitive--content, such as video, will likely flow over the Internet in the coming years.
St. Elizabeths Hospital is located approximately 2 miles south of the Capitol building and is divided by Martin Luther King, Jr., Ave. into what are known as the west and east campuses. St. Elizabeths began operations in 1855 as the “Government Hospital for the Insane.” During the Civil War, St. Elizabeths was used to house soldiers recuperating from amputations, and the west campus includes a Civil War cemetery. In 1916, Congress formally changed its name to St. Elizabeths Hospital, which refers to the historic name of the old royal land grant of which the hospital was a part. According to an HHS official, during its peak use period in the early to mid-1960s, St. Elizabeths housed approximately 7,000 inpatients and employed nearly 4,000 people. As discussed in more detail below, a 1984 law authorized the transfer of part of St. Elizabeths to the District. Today, the District primarily uses the east campus for its mental health services program. The west campus—61 buildings containing about 1.2 million square feet of space on 182 acres—is owned by the federal government, with the exception of 5 buildings that are owned by the District. HHS is the holding agency that is responsible for the federal portion of the west campus. The District owns the east campus, which comprises some 42 buildings containing 1.75 million square feet on 118 acres. Figure 1 is a map of the St. Elizabeths hospital complex. Buildings on the west side of the map shaded in grey indicate ownership by the District of Columbia. Under a use and occupancy agreement between HHS and the District, the District was, at the time of our review, using about 15 of the west campus buildings at no charge other than accepting responsibility for maintenance. The District also provides security for the west campus. West campus buildings not being used by the District are vacant. In addition, the District plans to completely vacate the west campus in 2001. 
On January 17, 2001, HHS officially notified GSA of its intention to declare the federal portion of the west campus excess. Under the Federal Property and Administrative Services Act of 1949 (Property Act), as amended, GSA has responsibility for federal real property utilization and disposal of excess and surplus property. In 1984, the St. Elizabeths Hospital and District of Columbia Mental Health Services Act authorized HHS to transfer to the District all property at St. Elizabeths Hospital needed by the District to provide mental health and other services under the District’s comprehensive mental health system plan. In accordance with that provision, on September 30, 1987, the Secretary transferred to the District title to almost all of the portion of St. Elizabeths that is known as the east campus and several buildings on the west campus. The act contemplated that the remaining portion of St. Elizabeths that was not so transferred, which would include most of the west campus, would be subsequently transferred to the District after Congress approved a master plan prepared by the District for the use of such property. The act, as amended, also stated that Congress would approve and enact a law transferring the remainder of the property at St. Elizabeths within 2 years after the plan was submitted to Congress. Although the master plan was submitted to Congress in 1993, it was never approved. The 1993 master plan recommended renovation of the west campus, after which approximately 52 percent of the west campus would continue to be used for the District’s mental health program. The remainder of the west campus would be adapted for other institutional-type uses, retail facilities, and support buildings. The plan also included estimated costs ranging from about $116 million to $128 million for implementation. A District official informed us that the District does not intend to use the west campus at St. 
Elizabeths for mental health programs because it currently intends for mental health services to be community-based. Because Congress did not approve the 1993 master plan and did not enact legislation transferring the west campus of St. Elizabeths from HHS to the District, and because the District no longer needs or intends to use the west campus to provide mental health services, HHS is preparing the property to be excessed to GSA under the normal process for excessing federal real property. St. Elizabeths was designated a national historic landmark (NHL) in December 1990. This is the same designation given to the White House and the U.S. Capitol Building and, according to an Advisory Council official, is granted to a very small percentage of historic properties. Agencies that hold NHL properties are responsible for preserving their historic character, and the National Historic Preservation Act (NHPA) provides that the agency responsible for the property shall, to the maximum extent possible, undertake such planning and actions as may be necessary to minimize harm to such a landmark. The NHL designation for the west campus recognized the exceptional national significance of the property and mandated the highest level of national preservation priority. The west campus comprises the oldest portion of the St. Elizabeths complex and consists of the buildings with the most historic importance. In addition to the buildings on the west campus, the NHL designation covers the landscaping and grounds, the vistas of the rivers and city, and a Civil War cemetery that is located on the property. Figure 2 shows the Center Building, which opened in 1855 and served as the main hospital building on the west campus; the Civil War cemetery, which houses the remains of about 300 Confederate and Union dead; and the vistas. Prior to the NHL designation, St. Elizabeths was listed on the National Register of Historic Places. 
In 1989, HHS, the District, and the Advisory Council entered into a Memorandum of Agreement (MOA) on St. Elizabeths. The MOA detailed specific measures for preserving the historic character of St. Elizabeths in accordance with the Department of the Interior’s standards for maintaining historic properties. Despite the NHL designation and the MOA, officials from HHS, the District, and GSA said that lack of funding and the absence of a clear direction for the future of the west campus over the years have left it in a badly deteriorated condition. Figure 3 shows examples of the deterioration that has occurred on the west campus. During fiscal year 2001 congressional budget deliberations, GSA and HHS jointly developed a cost estimate of $8.5 million for stabilizing and mothballing the federal portion of the west campus and performing various studies to start preparing it to be excessed and eventually disposed of by GSA. Table 1 shows the cost estimates that were developed and identifies who would normally be responsible for funding each item under existing laws and implementing regulations. According to GSA officials, the $5.3 million for stabilization and mothballing would have been used to prevent further deterioration of the buildings and prepare them for approximately 5 years of nonuse while the process of preparing the property to be excessed and disposed of takes place and a plan for reuse is developed. As the holding agency for the property, HHS is responsible for funding this item. The regulations implementing the Property Act require the holding agency to be responsible for the protection and maintenance of property pending its transfer to another federal agency or its disposal. NHPA adds an additional requirement in that the heads of all federal agencies are responsible for the preservation of historic properties that are owned or controlled by the agency. 
Prior to the approval of any federal undertaking that may affect any NHL, the head of the responsible federal agency shall, to the maximum extent possible, undertake planning and action to minimize harm to the landmark. Mothballing and stabilization of the property are actions HHS intends to take that will prevent further deterioration and minimize harm to the historic St. Elizabeths property. According to GSA and HHS officials, they developed this estimate by using a previous estimate HHS had developed for stabilizing and mothballing the west campus. The previous estimate was based on a detailed study of five west campus buildings that was prepared by a team of experts HHS contracted with in 1998. GSA and HHS applied the per-square-foot cost of stabilizing and mothballing these five buildings to the entire square footage of the west campus to arrive at a total cost. A GSA official said they used the total square footage for the west campus, including the District-owned buildings, to add some flexibility to the estimate to compensate for inflation and additional deterioration that would have occurred since 1998. However, the official said that the $5.3 million would have been used to mothball and stabilize only the federal portion of the west campus. According to GSA officials, there were other attempts to develop a stabilization and mothballing estimate. In March 2000, GSA estimated the cost at about $11 million. However, this estimate was not based on an analysis of actual conditions, and GSA and HHS decided to contract for an estimate in the summer of 2000. The contractor estimated a cost of $2.6 million, but GSA and HHS officials were not satisfied with the estimate because of their knowledge of the conditions of the buildings and past experience. 
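The extrapolation method described above is simple unit-cost arithmetic. The sketch below illustrates it with hypothetical inputs—the per-building figures are invented for illustration and are not from the 1998 HHS study; only the campus square footage (about 1.2 million square feet) comes from the report:

```python
# Hypothetical inputs: the cost and square footage of the five studied
# buildings below are invented for illustration, not taken from the 1998 study.
studied_cost = 1_100_000   # hypothetical mothballing cost of the 5 buildings ($)
studied_sqft = 250_000     # hypothetical square footage of those 5 buildings
campus_sqft = 1_200_000    # west campus total square footage (from the report)

per_sqft = studied_cost / studied_sqft   # unit cost derived from the sample
estimate = per_sqft * campus_sqft        # applied to the whole campus
print(f"${estimate:,.0f}")               # $5,280,000 -- on the order of $5.3 million
```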
GSA and HHS then decided to rely on the $5.3 million estimate that was derived from the 1998 HHS estimate discussed above because there was pressure to provide the estimate for the fiscal year 2001 budget deliberations. GSA officials acknowledged that the $5.3 million estimate was not as refined as it could have been because of the time constraints but said they believed it was reasonable given the circumstances under which it was prepared. GSA officials recognized that a better estimate would be based on a more thorough assessment of the conditions in all the buildings. They also said that because the $5.3 million was not funded for fiscal year 2001, the estimate might have to be adjusted to reflect additional deterioration that likely has occurred and inflation. The Phase II environmental and archeological studies are required prior to disposal. During fiscal year 2000, GSA contracted for a Phase I environmental study of the west campus that was paid for with HHS funds. This involved analyzing records and visiting the site, but it did not include any test borings or soil analyses. The Phase I consultants concluded that a Phase II study would be needed to test various sites where sufficient evidence of hazardous wastes was found. The Phase II archeological study is a follow-up to prior work by consultants that concluded that there was sufficient evidence to warrant additional study because of the potential for finding archeological sites. If sites are found, GSA officials said they would be responsible for developing guidelines for mitigation measures, such as excavation by subsequent owners in identified areas as part of the disposal process. An Advisory Council official added that avoidance and preservation-in-place, where potential sites would remain untouched, could be another option. HHS, as the holding agency, is responsible for funding the Phase II environmental study on the basis of GSA’s implementing regulations for the Property Act. 
HHS is also responsible for funding the Phase II archeological study because of its responsibilities under NHPA. According to HHS officials, because they have funds available for the Phase II environmental and archeological studies, this would not require a new appropriation. GSA officials said that the cost estimates for these studies are based on prior experience with similar properties. GSA, as the disposal agency, is required to comply with NEPA. According to GSA officials, due to the size, complexity, and historical importance of the site, this will probably mean an environmental impact statement at an estimated cost of $2 million, based on past experience. Similarly, the estimate for the land use study of $1 million is also based on GSA’s experience with these kinds of properties. GSA said it would consider hiring consultants to prepare a land use study to identify reasonable land use options and plans to coordinate this work with the District. A GSA official said that, in general, GSA will consider funding the land use study when it is needed and is not funded by a local community. However, an HHS official told us that on January 17, 2001, OMB directed HHS to provide GSA with funds for the land use study under a reimbursable work authorization and that this action was completed soon thereafter. A District planning official whom we interviewed expressed concern that $1 million may not be enough for a sophisticated, high-quality land use study. It is important to note that these costs do not reflect all of the federal government’s costs that will be needed to prepare the west campus for reuse. For example, according to a GSA official, if industrial contaminants are found, which is likely, there will be costs associated with remediation that will be the responsibility of the holding agency pursuant to the GSA implementing regulations for the Property Act. 
An HHS official said that these additional costs would be unknown until the Phase II environmental study is completed in late 2001. Another item that GSA said may require funding is the preparation of historic preservation covenants to be added to the property title when it changes hands. These covenants are designed to ensure the historic preservation of the property. If needed, GSA estimates that drawing up these covenants could cost up to $500,000 given the complexities of the issues at the site. This cost was not included in the $8.5 million estimate. GSA officials also said that because the District is vacating the west campus, including the buildings it owns, it would likely have little reason to continue its maintenance and protection responsibilities for the entire west campus. GSA officials said that GSA, HHS, and the District are in the process of developing memoranda of understanding that will outline responsibilities for costs and actions during the excess and disposal process. Much work needs to be done to facilitate a reuse of the west campus. Because HHS officially notified GSA of its intention to declare the property excess, the Property Act, as amended, and related GSA regulations for disposing of excess real property govern this process. On the basis of our interviews with HHS, GSA, and District officials, we identified a number of key actions that need to take place to facilitate a reuse of the site. As mentioned before, the vacant buildings on the west campus are in a badly deteriorated condition, and action is needed to prevent the situation from getting worse. The work is needed to preserve the buildings while the excess and disposal process takes place; and this work is the responsibility of HHS, the holding agency, under the Property Act and NHPA. The District plans to vacate the west campus and likely will have little reason to continue providing protection and maintenance services. 
Therefore, protection and maintenance plans and funding will need to be reevaluated for the period during the excess and disposal process. Also, GSA officials said that although a well-planned interim use policy can sometimes help generate reuse of a site, the poor conditions may not make interim use a feasible option. The extent of environmental remediation needed at the site is not yet known. However, GSA officials said that medical waste is likely to be found and that asbestos, lead paint, and hazardous substance conditions will need to be analyzed for future action. HHS is responsible for the environmental assessment and any required remediation under the GSA implementing regulations for the Property Act. In addition, GSA is required under NEPA to consider the impact of the federal government’s actions to dispose of the site. GSA said that the District would have input to this part of the process. Given the property’s NHL designation, several historic preservation issues will need to be addressed as part of the excess and disposal process. At a minimum, HHS, GSA, the District, and the Advisory Council would be participants in a consultation process to address these issues. GSA officials told us that as the disposal agency, GSA would prepare historic preservation covenants, if needed, that will be part of the property title when it changes hands. They said that these covenants are designed to ensure the historic preservation of the property and will likely have an impact on how it is eventually used. According to an official with the Advisory Council, the covenants and the scope of restriction placed on redevelopment could be made flexible and would be a subject of discussion during the consultation process. 
GSA will consider hiring a consulting team to identify reasonable land use options for the west campus and intends to work with the District so that the District’s goals for the site are considered, the necessary zoning and other approvals for the proposed use can be obtained, and the issue of what to do with the five buildings the District owns can be addressed. The land use study, according to GSA officials, would include a building and infrastructure assessment, a market and economic viability analysis, an assessment of financing options, and recommendations for reusing the site. GSA officials said that once the land use study is close to completion and the environmental and historic preservation requirements are fully understood, a disposal strategy can be developed. As the land use study is being prepared, GSA goes through several levels of screening, set forth in the Property Act and related regulations, to help identify a user or users for the site. The first level of screening involves determining whether there are other federal uses for the site. If there are no federal uses, the next level of priority is for the site to be screened for use by the homeless under the Stewart B. McKinney Homeless Assistance Act, as amended. If the site is not used for the homeless, federal agencies can work with nonprofit organizations that may want the site for public use, such as a park, museum, or educational facility. The next level of screening involves determining whether state or local governments where the property is located want to acquire the site. Finally, if none of these screening processes produces a user, the site becomes available for public sale with full and open competition. GSA officials said that the west campus is a unique property, and the entire excess and disposal process will likely take several years to complete. 
A District planning official whom we interviewed had concerns with allowing the federal portion of the west campus to be disposed of through the standard excess and disposal processes outlined by the Property Act and implementing regulations. This official said that the District does not oppose HHS’ efforts to excess the site. However, the District official was concerned that the District will not have enough input on key decisions regarding what will happen to the site under the normal disposal screening processes discussed above. The District official believes that another approach could be to either obtain a waiver from GSA or obtain special legislation that would exempt the site from the screening processes so that a joint commission could be established to determine the best use for it. The regulations that implement the Property Act make reference to such a waiver. However, a GSA official informed us that the McKinney Act requirements related to consideration for use by the homeless cannot be waived. The District official said that a possible approach could be a three-way partnership involving the federal government, the District, and possibly a private investor to develop the site as part of a public-private partnership. This official said that it is likely that the District would be willing to commit some funds to such a partnership because the site is critical to the redevelopment of that part of the District. This official said that the District’s only concern is that the disposal is properly planned and that the site ultimately enjoys its highest and best use. This official added that although the District may not be interested in gaining possession of the site, it does have the greatest long-term interest in what happens to it. An HHS official said that the federal government will review the District’s proposals while the excess process continues. The west campus of St. 
Elizabeths is a unique property that according to HHS, GSA, and District officials, is in a badly deteriorated condition. Our evidence suggests that a significant amount of money and much work would be needed to prepare it for reuse. This work includes stabilizing and mothballing the buildings for the period of time when the excess and disposal process will take place, developing plans for protection and maintenance, addressing environmental and historic preservation issues, studying potential uses for the property, and identifying user(s). The historic significance of the property, as well as the economic implications of its reuse for the District, will be key factors to be considered in determining the future use of the property. Attaining a successful outcome that is agreeable to all the interested stakeholders and is in the best interest of the government will be a challenging and complex task. In response to our request for comments on a draft of this report, HHS’ Acting Inspector General, on behalf of the Department, informed us that HHS had no comments. Budget review staff from OMB, the Executive Officer of GSA’s National Capital Region, a Historic Preservation Specialist with the Advisory Council, and the Director of NCPC’s Office of Plans Review provided oral technical comments, which we incorporated where appropriate. The Director of the District’s Office of Planning provided written comments that are reprinted as appendix I. The Director of the Office of Planning generally agreed with the report’s findings and said that planning for the west campus of St. Elizabeths is critical to the overall revitalization goals of the District in general and, in particular, the area of the city where St. Elizabeths is located. In commenting, the Director said that the District should be the lead entity for the planning of the site, working in close collaboration with GSA. 
According to the Director, this was due to the critical role community involvement will play in the ultimate reuse of the west campus, the importance of weaving west campus development into the myriad planning efforts being undertaken in that area, and the critical role St. Elizabeths can play in the District’s overall economic development goals. As discussed in the report, the Property Act, as amended, and related GSA regulations govern the excess and disposal process, and GSA will have the responsibility for managing it. As also discussed in the report, a GSA official said that, in general, if the local jurisdiction does not fund a land use study, GSA would consider funding it, as needed, as part of the planning process. Although GSA will be responsible for the land use study because it was provided with the funds for it, the Executive Officer of GSA’s National Capital Region said that the District would have an integral role in the process. The Director of the Office of Planning also made other specific points to clarify the District’s views on St. Elizabeths, and we subsequently discussed some of the Director’s comments with the Executive Officer of GSA’s National Capital Region. Our comments on some of these additional points are contained in appendix I. We are sending copies of this report to the Chairmen and Ranking Members of several congressional committees with jurisdiction over HHS, GSA, and the District; the Honorable Tommy G. Thompson, Secretary of HHS; the Honorable Mitchell E. Daniels, Jr., Director of OMB; the Honorable Anthony A. Williams, Mayor of the District of Columbia; Thurman M. Davis, Sr., Acting Administrator of GSA; Patricia E. Gallagher, Acting Executive Director, National Capital Planning Commission; and John M. Fowler, Executive Director of the Advisory Council on Historic Preservation. We will make copies available to others on request. Major contributors to this report were Susan Michal-Smith, David E. 
Sausville, Gerald Stankosky, and Wendy Wierzbicki. If you or your staff have any questions, please contact me on (202) 512-8387 or at [email protected].

The following are GAO’s comments on the letter from the District of Columbia.

1. We recognized in our report that the $8.5 million estimate—which includes the mothballing and stabilization costs—represents what GSA and HHS believed would be needed to begin the excess and disposal process and that more funds will be needed to prepare the west campus for reuse. We did not do work to assess the 1993 estimate. However, once the land use study is completed, the stakeholders will be in a better position to assess the potential costs of different options.

2. Exploring potential financing mechanisms was outside the scope of our review. However, as stated in the report, the land use study, according to GSA officials, will assess financing options.

3. As pointed out in the report, GSA officials said that a well-planned interim use policy could sometimes help generate reuse of the site. However, they recognized that the poor conditions at the site may not make interim use a feasible option. If interim use does not take place, the District’s concern about this type of use becoming permanent would not be an issue. Furthermore, the Executive Officer of GSA’s National Capital Region told us that the land use study process would seek to identify a comprehensive solution for the site that would be agreeable to all stakeholders and that would prevent a fragmented, piecemeal approach.

4. The Executive Officer of GSA’s National Capital Region told us that it was too early to determine with more specificity how long the excess and disposal process would take, but GSA intends to work with the District soon to develop a schedule for the process.
The west campus of St. Elizabeths hospital is a unique property that is generally acknowledged to be in poor condition. GAO concludes that a significant sum of money and much work would be needed to prepare the west campus for reuse. This work would include stabilizing and mothballing the buildings for the period of time when the excess and disposal process will take place, developing plans for protection and maintenance, addressing environmental and historic preservation issues, studying potential uses for the property, and identifying user(s). The historic significance of the property, as well as the economic implications of its reuse for the District of Columbia, will be key factors to be considered in determining the property's future. Attaining a successful outcome that is agreeable to all the interested stakeholders and is in the government's best interest will be a challenging and complex task.
VA has departmentwide policy and procedures for convening and conducting AIB investigations. According to VA Handbook 0700, the department’s procedures are intended “to promote effectiveness and uniformity in the conduct and reporting of AIB investigations,” among other things. The procedures outlined in the handbook are mandatory, except where otherwise indicated. According to VA officials, the policy and procedures achieve their intended purpose, while also providing VA convening authorities—medical center directors, or any authorities senior to them within networks or headquarters—sufficient flexibility and discretion to tailor an investigation to effectively meet diverse informational needs. For example, convening authorities are required to select AIB members who are impartial and objective, but they have flexibility to vary the number of members appointed to each AIB based on the matter being investigated. VA is currently updating its AIB policy and procedures in accordance with VA Directive 6330, Directives Management (Feb. 26, 2009), but officials from VA’s Office of General Counsel said the department plans to maintain flexibility in its AIB process. VA’s AIB process begins with a convening authority determining the need for an AIB investigation. Once convened, the AIB collects evidence, which may include witness testimony, and documents its results in an investigation report. (See fig. 1 for an overview of VA’s process for convening and conducting AIB investigations.) Convening an AIB investigation involves determining its need, scope, and board composition. VA Handbook 0700 states that a convening authority may determine whether an AIB investigation is needed based on several factors, including the results of a preliminary investigation, any other ongoing investigation, or the type of matter being investigated. A preliminary investigation is an informal process whereby readily available information is collected, for instance by obtaining witness statements. 
According to one convening authority, an AIB investigation would likely be convened after a preliminary investigation if, for example, conflicting witness accounts were provided during this initial investigation. A convening authority may also determine that another ongoing review into the matter, such as root cause analysis or peer review, meets VA’s needs without convening an AIB. Moreover, AIBs are not to investigate matters that may be criminal in nature without the convening authority first coordinating with federal and state law enforcement authorities, including VA’s Office of Inspector General. A convening authority also determines the scope of the investigation and composition of the AIB. An investigation’s scope—the matter to be investigated—may be focused on a specific incident involving alleged employee misconduct or a broader systemic matter. For example, among the investigation reports we reviewed, one AIB investigated alleged physical and verbal abuse of a patient by a VA nursing assistant (an employee misconduct matter), while another investigated the facts and circumstances surrounding the death of a patient, including whether changes to policies and procedures were effectively communicated to staff and monitored (a systemic policy matter). In determining the composition of the AIB—the number and qualifications of members to be appointed—VA Handbook 0700 states that AIBs generally should be comprised of one to three members, and the members are to be selected primarily based on their expertise and investigative capability, as well as their objectivity and impartiality. Convening authorities we interviewed— medical center directors—said they typically appoint three members to ensure that AIBs include a subject matter expert and at least one member with investigative experience or training. Moreover, three of these convening authorities have appointed AIB members from outside their medical center when necessary to ensure the board’s impartiality. 
Finally, if the convening authority determines that an AIB is needed, it documents the AIB’s scope and member composition in a charge letter, which officially authorizes the AIB investigation. During the course of the investigation, the convening authority may amend the charge letter to change the scope of the investigation or composition of the AIB, among other things. For example, a convening authority included in our review initially charged an AIB to investigate an incident involving sexual harassment, but later expanded the investigation’s scope to also include an incident involving reprisal against the individual who reported the harassment. According to one convening authority, it may be more cost effective to expand the scope of an investigation to address additional matters than to convene a second AIB. The charge letter also communicates any waivers to VA’s procedural requirements for the AIB investigation. According to VA Handbook 0700, a convening authority may waive any of the requirements established by the handbook on a case-by-case basis, if, for example, requiring compliance with such requirements would not be cost effective. The charge letter also may authorize the AIB to provide recommendations for corrective actions. According to VA Handbook 0700, an AIB only may provide recommendations if authorized to do so by the convening authority. However, an AIB is prohibited from recommending a specific level or type of corrective action, such as termination or suspension, and instead may only recommend “appropriate disciplinary action.” Moreover, although an AIB may provide recommendations, convening authorities are not required to implement them. 
Three of the four convening authorities we interviewed have authorized AIBs to provide recommendations, while one convening authority said that he generally has not because AIB members are not privy to all information pertaining to an employee who is the subject of the investigation, such as the individual’s employment history. After the investigation is convened, the AIB collects and analyzes evidence, such as witness testimony and documentation, related to the matter under investigation. An AIB may obtain witness testimony from VA employees, who are obligated to cooperate with the investigation, as well as non-VA employees—including patients—who generally are not obligated to cooperate with the investigation. According to VA Handbook 0700, testimony may be obtained under oath and transcribed by tape recording, court reporter, or both. Additionally, the AIB may obtain all available documents, records, and other information that are material to the scope of the investigation, including VA policies, employee personnel records, and e-mail correspondence. The AIB analyzes the collected evidence and develops the findings and conclusions of the investigation, including whether any matter investigated was substantiated. The AIB documents results—evidence, findings, conclusions, and any recommendations—in an investigation report that is forwarded to the convening authority. The convening authority reviews the report to verify that the AIB sufficiently investigated the matter in accordance with the charge letter and VA’s AIB policy. The convening authority may ask the AIB to further investigate the matter, clarify the information in the investigation report, or both. VA considers an AIB investigation to be complete once the convening authority certifies the investigation report. Similar to VA, three other federal agencies that we reviewed with administrative investigation processes—Federal Bureau of Prisons, U.S. Navy Bureau of Medicine and Surgery, and U.S. 
Coast Guard—have policies and procedures in place to guide their administrative investigations. Further, the results of these agencies’ administrative investigations may be used to inform individual or systemic corrective actions. However, the extent to which the administrative investigations are expected to provide recommendations for such corrective actions varies by agency. (See app. I for characteristics of VA’s and these three other federal agencies’ administrative investigation processes.) VA does not collect and analyze aggregate data on AIB investigations, including data on the number of AIB investigations conducted, the types of matters investigated, and whether the matters were substantiated, or on any systemic deficiencies identified by AIBs. Without these data, VA is unable to adequately assess the causes or factors that may contribute to deficiencies occurring within all of its medical centers and networks. In contrast, VA, through its Patient Safety Program, analyzes aggregate data on patient safety matters. When an adverse event involving patient safety occurs at a medical center, information about the event is entered into a tracking system that allows VA to electronically monitor patient safety information throughout its health care system. Additionally, some of these events are assessed through root cause analysis to determine the underlying causes of the adverse event and to develop and implement corrective action plans to reduce the likelihood of recurrence at the medical center, as well as the potential occurrence at other medical centers. Information on AIB investigations is maintained by different offices across VA. For example, each medical center or network maintains the investigation report for each AIB investigation that it conducts related to VHA staff at the GS-15 level and below. 
In the absence of aggregate data on AIB investigations, VHA administered a web-based survey to medical centers and networks in response to our request for AIB data. These survey data on AIB investigations involving staff at the GS-15 level and below, in conjunction with VA data on AIB investigations involving senior leadership, showed that VHA conducted 1,143 AIB investigations during fiscal years 2009 through 2011, most of which involved staff at the GS-15 level and below. (See table 1.) VHA officials told us that although VHA administered the web-based survey in response to our request for data, the department has no plans to collect and analyze aggregate data on AIB investigations conducted within VHA. According to the VHA survey data, the types of matters investigated by AIBs during fiscal years 2009 through 2011 included inappropriate employee behavior involving patients and other employees; individual employee wrongdoing, such as theft and fraud; and systemic deficiencies. Our analysis of AIB investigation reports from the four medical centers in our review showed that allegations of inappropriate employee behavior involving patients and other employees were the most common types of matters investigated by AIBs during fiscal years 2009 through 2011. (See table 2 for more information on the types of matters investigated by AIBs at the four VA medical centers included in our review during fiscal years 2009 through 2011.) VA has used the results of AIB investigations to inform corrective actions taken at individual medical centers and networks to address both individual employee misconduct and systemic deficiencies. However, the department does not share information about improvements made in response to AIB investigations conducted at certain medical centers and networks that could have broader applicability. 
To address matters of employee misconduct, VA has used the results of AIB investigations—evidence, findings, conclusions, and recommendations—along with other factors to inform corrective actions taken against individual employees. These corrective actions range from disciplinary actions, such as termination or demotion, to nondisciplinary actions, such as counseling, reassignment, or training to expand an employee’s knowledge about VA policies and procedures or clinical standards, according to information provided by VA officials we interviewed. Although AIBs may make recommendations for corrective actions, they are not involved in determining actual corrective actions taken against an individual. A medical center director or appropriate higher level official may use results from the investigation to help determine whether any corrective actions are warranted, and if so, the type and severity of each action. Other VA staff, such as human resources and general counsel staff, may also provide guidance to management in determining appropriate corrective actions. Specifically, in determining the type and severity of corrective actions to be taken, VA officials review the results of the AIB investigations, along with other factors related to the alleged misconduct being investigated, including the nature and seriousness of the offense, whether the conduct was intentional or inadvertent, and the type of penalty used for similar matters. VA officials also consider other information regarding an employee’s history and conduct, including violations of VA policies. For example, medical center officials told us that an employee’s history of time and attendance violations may be used in addition to the misconduct investigated by the AIB to inform disciplinary action against an employee. VA does not collect and analyze aggregate information on the specific employee corrective actions that were informed by AIB investigations. 
Instead, this information is maintained by different offices throughout VA, including human resources offices at VA medical centers. Information provided by VA officials from the medical centers included in our review showed that the results of the 49 AIB investigations conducted during fiscal years 2009 through 2011 have been used, along with other information, to inform 67 employee corrective actions. Suspension and training were among the most common corrective actions informed by AIB investigations at these medical centers. (See table 3.) AIBs are an important investigation tool for VA that can lead to operational improvements, including improved quality of care provided to veterans. However, VA neither collects nor analyzes aggregate data on AIB investigations, nor does it routinely share information about systemic deficiencies identified or corrective actions taken to improve VHA operations and services. During fiscal years 2009 through 2011, VHA conducted more than 1,100 AIB investigations, yet the lack of such information from AIB investigations may result in missed opportunities for VA to gauge the extent to which deficiencies occur throughout its medical centers and networks, to prevent escalation of problems, and to take timely corrective action when needed. Such missed opportunities come with a cost when information from these investigations is not used to improve the quality and efficiency of VHA operations, including the delivery of care to veterans. 
To systematically gauge the extent to which deficiencies identified by individual AIBs may be occurring throughout VHA, and to maximize opportunities for sharing information across VHA to improve its overall operations, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take the following two actions for AIB investigations conducted within VHA:

- establish a process to collect and analyze aggregate data from AIB investigations, including the number of investigations conducted, the types of matters investigated, whether the matters were substantiated, and systemic deficiencies identified; and

- establish a process for sharing information about systemic changes, including policies and procedures implemented in response to the results of AIB investigations, which may have broader applicability throughout VHA.

We provided a draft of this report to VA for comment. In its response, which is reprinted in appendix II, VA concurred with our recommendations. In its comments, VA identified several activities that VHA uses to identify, address, and share information about systemic issues in facilities and VHA program offices—including root cause analysis and peer review, which we discuss in our report. VA stated that it is within the context of these existing activities, which address quality and safety issues, that it would explore any new processes for collecting and analyzing aggregate data from AIB investigations. We believe that it is important for VA to establish such processes, even if they are processes within existing activities, to systematically gauge the extent to which deficiencies identified by individual AIBs may be occurring throughout VHA and to maximize opportunities for sharing information across VHA to improve its overall operations. Additionally, VA stated that its comments focus only on implications and issues involving VHA, rather than VA, and suggested a revision to our recommendations to reflect this. 
As stated in the scope and methodology of this report, we focused on AIB investigations conducted in VHA, and thus our recommendations were focused only on these investigations. We revised the wording of our recommendations to clarify that we were focusing only on AIB investigations conducted within VHA. (VA also provided technical comments, which we have incorporated as appropriate.) As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send a copy of this report to the appropriate congressional committees and the Secretary of Veterans Affairs. The report also will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Characteristics of the four agencies’ administrative investigation processes (appendix I):

Governing policy.
- Department of Veterans Affairs: Administrative investigations are to be conducted in accordance with departmentwide policy and procedures, which allow flexibility to tailor an investigation to meet diverse informational needs.
- Federal Bureau of Prisons: Administrative investigations must be conducted in accordance with agencywide policy.
- U.S. Navy Bureau of Medicine and Surgery: Administrative investigations must be conducted in accordance with Navy-wide policy.
- U.S. Coast Guard: Administrative investigations must be conducted in accordance with agencywide policy.

Convening the investigation.
- Department of Veterans Affairs: A convening authority—medical center directors or any authority senior to them within networks or headquarters—determines the need and scope of the investigation based on several factors that may include the results of a preliminary investigation.
- Federal Bureau of Prisons: Matters are sorted into three categories based on their severity and potential consequences. Officials use these categories to determine whether an administrative investigation will be convened by the local institution or by another office in the agency or Department of Justice, such as the Office of the Inspector General.
- U.S. Navy Bureau of Medicine and Surgery: A convening authority—usually a commanding officer—initiates a preliminary investigation into an incident. Based in part on the findings of the preliminary investigation, and in consultation with a Navy legal advisor, the convening authority may authorize an administrative investigation and, if so, determines the scope of the investigation.
- U.S. Coast Guard: A convening authority—usually a senior officer—generally determines the need and scope of administrative investigations. For certain matters, such as fires or ship collisions, administrative investigations are required.

Selecting investigators.
- Department of Veterans Affairs: The convening authority selects individuals primarily based on their expertise and investigative capability, as well as their objectivity and impartiality. Generally, between one and three individuals should be selected to conduct the investigation.
- Federal Bureau of Prisons: Typically, one individual designated from the institution’s Special Investigator Supervisor Office—which primarily investigates crimes and corruption related to inmates and staff—conducts administrative investigations.
- U.S. Navy Bureau of Medicine and Surgery: The convening authority selects one or more best-qualified individuals to conduct an administrative investigation based on age, education, training, experience, length of service, and temperament.
- U.S. Coast Guard: The convening authority selects the appropriate investigating officer. Typically, one junior officer conducts the investigation, but more officers may be appointed for complex incidents.

Documenting and reviewing results.
- Department of Veterans Affairs: Investigation results are documented in an investigation report that includes the evidence, findings, conclusions, and any recommendations. The convening authority reviews the investigation report and certifies the investigation as complete.
- Federal Bureau of Prisons: Investigation results are documented in an investigation report that includes findings and conclusions. The Bureau of Prisons’ Office of Internal Affairs reviews the investigation report and closes the administrative investigation.
- U.S. Navy Bureau of Medicine and Surgery: Investigation results are documented in an investigation report that includes findings of fact, opinions, conclusions, and any recommendations. The convening authority reviews and certifies the investigation report.
- U.S. Coast Guard: Investigation results are documented in an investigation report that includes findings, opinions, and recommendations. The convening authority reviews the investigation report. Any officer senior to the convening authority may also review the investigation report.

Providing recommendations.
- Department of Veterans Affairs: Administrative investigation reports may provide recommendations for corrective action if authorized to do so by the convening authority.
- Federal Bureau of Prisons: Administrative investigation reports do not provide recommendations for disciplinary action, but may provide recommendations for other corrective actions, such as employee training.
- U.S. Navy Bureau of Medicine and Surgery: Administrative investigation reports may provide recommendations for corrective action only when requested to do so by the convening authority.
- U.S. Coast Guard: Administrative investigation reports are expected to provide recommendations for corrective action.

In addition to the contact named above, Janina Austin, Assistant Director; Jennie F. Apter; Julianne Flowers; Lisa Motley; Carmen Rivera-Lowitt; C. Jenna Sondhelm; and Brienne Tierney made key contributions to this report.
VA may use an AIB to determine the facts surrounding alleged employee misconduct or potential systemic deficiencies related to VA policies or procedures. AIBs do not determine corrective actions, such as individual disciplinary actions or procedural changes, but AIB investigation results, including evidence, may be used to inform such actions, making it critical for AIBs to be convened and conducted appropriately. You expressed interest in the number of AIB investigations and their results. In this report, GAO examines (1) the process VA uses to convene and conduct AIB investigations, (2) the extent to which VA collects data on AIB investigations, and (3) how VA has used the results of its AIB investigations. GAO focused on AIB investigations conducted within VHA; reviewed VA documents, including policies and procedures, and VHA data on AIBs conducted during fiscal years 2009 through 2011; and interviewed VA officials from headquarters and four medical centers. To ensure data reliability, GAO reviewed VHA’s methods to collect AIB data. The Department of Veterans Affairs (VA) has departmentwide policy and procedures for convening and conducting administrative investigation boards (AIB). The department’s procedures contain requirements for convening and conducting AIB investigations, but according to VA officials, they also provide the flexibility to tailor an investigation to effectively meet diverse informational needs. For example, the VA official convening an AIB investigation is required to select AIB members who are impartial and objective, but has flexibility to vary the number of members appointed to each AIB based on the matter being investigated. VA is currently updating its AIB policy and procedures, but officials said the department plans to maintain flexibility in its AIB process. 
VA does not collect and analyze aggregate data on AIB investigations, including data on the number of AIB investigations conducted, the types of matters investigated, and whether the matters were substantiated, or on any systemic deficiencies identified by AIBs. Having aggregate data could provide VA with valuable information to systematically gauge the extent to which matters investigated by AIBs may be occurring throughout VA’s Veterans Health Administration (VHA) and to take corrective action, if needed, to reduce the likelihood of future occurrences. Without such data, VA is unable to adequately assess the causes or factors that may contribute to deficiencies occurring within its medical centers and health care networks. Information on AIB investigations is maintained by different offices across VA. For example, each medical center or network maintains information on each AIB investigation that it conducts. In response to GAO’s request for AIB data, VHA administered a web-based survey that collected data from all its medical centers and networks on AIB investigations they reported conducting during fiscal years 2009 through 2011. Survey data showed that medical centers and networks conducted more than 1,100 investigations during this time period, and the types of matters investigated included allegations of inappropriate employee behavior involving patients and other employees; individual employee wrongdoing, such as theft and fraud; and systemic deficiencies. VHA officials said that although VHA administered the web-based survey, the department has no plans to collect and analyze aggregate data on AIB investigations conducted within VHA. VA has used the results of AIB investigations to inform corrective actions, but does not share information about improvements made that could have broader applicability. 
Specifically, VA has used the results of AIB investigations to inform systemic changes at the medical centers and networks where AIB investigations have been conducted. For example, VA has developed new policies and procedures for improving patient and employee safety and developed new training programs to expand employees’ knowledge of VA policies and procedures. However, VA does not share information about these improvements that may have relevance for other areas within VHA. Such information could be used to improve not only the quality of patient care provided, but also the efficiency of VHA’s overall operations. For example, one medical center included in GAO’s review implemented a tracking system to ensure surgical instruments are delivered promptly to the operating room and a checklist to ensure the availability of needed equipment prior to starting surgery. GAO recommends that VA establish processes to (1) collect and analyze aggregate data from AIB investigations conducted within VHA, and (2) share information about improvements that are implemented in response to the results of AIB investigations. VA concurred with these recommendations.
Thorough and comprehensive planning and preparation are crucial to the ultimate cost-effectiveness of any large, long-term project, particularly one with the scope, magnitude, and immutable deadlines of the decennial census. Indeed, the Bureau’s past experience has shown that the lack of proper planning can increase the costs and risks of downstream operations. Moreover, sound planning is critical to obtaining congressional support and funding because it helps demonstrate that the Bureau has chosen the optimal design given various trade-offs and constraints and that it will effectively manage operations and control costs. However, Congress, GAO, the Department of Commerce Inspector General, and even the Bureau itself have noted how the 2000 Census was marked by poor planning, which unnecessarily added to the cost, risk, and controversy of the last national head count. For example, our earlier work, and that of the Department of Commerce Inspector General, reported that in planning the 2000 Census, the Bureau, among other shortcomings, did not involve key operations staff in the initial design phases; did not translate key performance goals into operational, measurable terms that could be used as a basis for planning; did not develop and document a design until mid-decade; and initially failed to provide sufficient data to stakeholders on the likely effects of its initiatives for addressing the key goals for the census— reduced costs and improved accuracy and equity. Planning weaknesses were not limited to the 2000 Census. In fact, a variety of problems plagued the planning of the 1990 Census. 
To help prevent the Bureau from repeating those mistakes as it plans the 2010 Census, in our October 2002 report, we recommended that the Secretary of Commerce direct the Bureau to provide comprehensive information backed by supporting documentation in its future funding requests for planning and development activities, including, but not limited to, specific performance goals for the 2010 Census and information on how the Bureau’s programs would contribute to those goals; detailed information on program feasibility, priorities, and potential key implementation issues and decision milestones; and performance measures. The consequences of a poorly planned census are high given the billions of dollars spent on the enterprise and the importance of collecting quality data. The Constitution requires a census as a basis for apportioning seats in the House of Representatives. Census data are also used to redraw congressional districts, allocate billions of dollars in federal assistance to state and local governments, and provide information for many other public and private sector purposes. As agreed with your offices, our objectives for this report were to review the Bureau’s current plans for the 2010 Census and the extent to which they might address shortcomings with the 2000 Census, analyze the Bureau’s cost estimates, and assess the rigor of the Bureau’s 2010 planning process. To achieve these three objectives, we interviewed officials from the Bureau’s Decennial Management Division and other units involved with planning the 2010 Census. We also reviewed relevant design and budget documents as well as our prior work and that of the Department of Commerce Inspector General, on planning the 2000 and earlier censuses. We also reviewed reports by the National Academy of Sciences on planning the 2010 Census. We did not independently verify the cost information the Bureau provided. 
To help determine the key elements for successful project planning, we reviewed a number of guides to project management and business process reengineering. Our work was performed from January through September 2003 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of Commerce and the Director of the Office of Management and Budget. On November 6, 2003, we received the Secretary’s written comments on the draft (see app. I). On October 14, 2003, the Director of OMB forwarded OMB’s comments on the draft (see app. II). They are addressed in the Agency Comments and Our Evaluation section of this report. In designing the 2010 Census, Bureau officials had four principal goals in mind: (1) increase the relevance and timeliness of census long-form data, (2) reduce operational risk, (3) increase the coverage and accuracy of the census, and (4) contain costs. The goals were a direct response to problems the Bureau experienced in conducting the 2000 Census, such as untimely long-form data; inaccurate maps; coverage difficulties; and expensive, labor-intensive, and paper-laden field data collection. The Bureau recognized that its traditional approach to counting the population was insufficient for meeting these four objectives. In its place, the Bureau developed what it believes is a paradigm shift to taking the census, basing its reform efforts on three interrelated components the Bureau refers to as a “three-legged stool.” The mainstay of a successful census is an accurate address list and its associated maps. The Bureau uses MAF and TIGER to provide (1) maps for field operations and data reference, (2) the geographic location of every structure, (3) address lists for the decennial census, and (4) names and codes of entities for data tabulation and data for use by the commercial geographic information systems industry. 
The Bureau’s experience in conducting the 2000 Census highlighted the need to update and modernize MAF/TIGER prior to 2010. For example, the centerlines of streets in TIGER did not accurately reflect their true geographic locations, which could cause houses to be placed in the wrong census blocks. Also, according to the Bureau, the 1980s software used to develop TIGER is now outdated and cumbersome to update. To fix these and other problems, the Bureau launched the MAF/TIGER Enhancements Program (MTEP) as part of the 2010 Census modernization efforts. Its objectives include correcting the locations of each MAF address, street, and other map feature; developing and deploying a new MAF/TIGER processing environment; expanding and encouraging geographic intergovernmental partnership programs; implementing the Community Address Updating System (CAUS), an initiative to partner with local governments to update MAF data; and initiating quality assurance evaluation activities. The Bureau estimates the total cost for these five objectives to be $536 million. According to Bureau officials, while some elements of MAF will be improved as part of the overall MAF/TIGER enhancements program, the primary focus of the effort is on TIGER modernization and data correction. This modernization program will not reengineer the MAF process. ACS is intended to be a monthly survey of 250,000 households that would replace the long form used in past decennial censuses. According to the Bureau, the benefits of ACS include (1) more timely long-form data at detailed geographic levels that would be as accurate as subnational annual data from existing surveys, such as the Current Population and American Housing Surveys, and (2) the ability to improve the accuracy of the decennial census population counts by eliminating the long-form questionnaire. 
The ACS data will be published annually for geographic areas with populations over 65,000; as a 3-year average for geographic areas with populations of 20,000 to 65,000; and as a 5-year average for geographic areas with populations under 20,000. According to the Bureau, because of the larger sample size associated with long-form data, the annual and 3-year average data will be significantly less accurate than the long-form data. The 5-year data would be about as accurate as the long form. The Bureau believes that eliminating the long form will result in a number of benefits to decennial data collection and general field operations. For example, according to the Bureau, the reduction in paper will enable it to process data with three data capture centers instead of the four centers used during Census 2000. The Bureau also would not need as many local census offices, thereby allowing it to reduce the rolls of clerical and administrative staff. According to the Bureau, a short-form-only census also allows the Bureau to use such technology as handheld mobile computing devices so that enumerators can locate and update data on housing units, help conduct interviews, transmit data directly to the data capture centers, and receive regularly updated field assignments. The devices will be linked to the satellite-based Global Positioning System (GPS) to enable field workers to locate addresses more precisely and efficiently. The Bureau also plans to incorporate changes that are not dependent on ACS and MAF/TIGER. For example, the Bureau plans to expand the respondents’ ability to complete their questionnaires via the Internet. As shown in table 1, the Bureau’s three-legged stool strategy is generally aligned with three of its four key goals for the 2010 Census and, if successful, could put the Bureau on track toward achieving them. Less clear is how the Bureau will achieve its goal of reducing operational risk using its current plan. 
Although the Bureau’s position that early testing will enable it to identify and correct flaws is both a common sense business practice and supported by its past experience (assuming its testing program is adequately designed), as described below, the operational and other hurdles associated with successfully implementing the three-legged stool actually introduce new risks and challenges. This does not necessarily mean that the Bureau’s design is flawed. To the contrary, the obstacles to a cost-effective head count call for the Bureau to consider bold and innovative initiatives, and these are not risk free. At the same time, given the enormity of the census and all of its complexities, the three-legged stool by itself will not automatically guarantee the successful accomplishment of the Bureau’s goals. Our work on transforming agencies into high-performing organizations has underscored the importance of an agency’s leadership and core business practices. Critical success factors include, among others, effective communication strategies to ensure coordination, synergy, and integration; strategic planning; aligning the agency’s organization to be consistent with the goals and objectives established in the strategic plan; and effective performance, financial, acquisition, and information technology management. In all, the Bureau faces at least three key challenges. Among the more significant challenges the Bureau faces is securing congressional approval for its proposed approach. As we noted in our January 2003 performance and accountability report, congressional support for the 2010 design is necessary to ensure adequate planning, testing, and funding levels. Conversely, the lack of an agreed-upon design raises the risk that basic design elements might change in the years ahead, while the opportunities to test those changes and integrate them with other operations will diminish. 
In other words, in order for the Bureau to conduct proper planning and development activities, the basic design of the 2010 Census needs to be stable. According to the Bureau, a go/no-go decision on key aspects of the design—a short-form-only census and replacing the long form with ACS—will need to be made around 2006. Bureau officials told us that if ACS were dropped after 2007, the Bureau would not be able to reinstate the long form with the short form in 2010 because of logistical obstacles. They noted that the Bureau is already testing the short-form-only census and, in late 2005 or early 2006, expects to sign a contract for data capture operations. If the Bureau had to revert to a long-form census at that point, it would add significant risks and costs to the 2010 Census. During the 2000 Census, the lack of an agreement between the administration and Congress on the fundamental design—and particularly, the Bureau’s planned use of sampling—increased the likelihood of an unsuccessful head count and was one of the principal reasons why, in 1997, we designated the 2000 Census a high-risk area. Members of Congress questioned the use of sampling and estimation for legal and methodological reasons. Contributing to Congress’s skepticism was the Bureau’s failure to provide sufficiently detailed data on the effects of its proposed approach. Although the U.S. Supreme Court settled the dispute in January 1999, the Bureau ultimately wound up having to plan for both a “traditional” census and one involving sampling, which added to the costs and risks of the 2000 decennial census. To help secure congressional support for its 2010 reform efforts, it will be important for the Bureau to convincingly demonstrate that it has chosen the optimum design given various resource and other constraints and that it will effectively manage operations and costs. 
A critical first step in this regard is to have comprehensive and transparent information that lays out the specifics of the Bureau’s plans, explains their benefits, and supports assumptions. However, as discussed more fully in the next section, while the Bureau’s planning for the 2010 Census has improved compared with its efforts for the 2000 Census, certain information gaps remain. For example, the Bureau’s most recent budget submissions have not included complete life cycle cost estimates that could enable Congress to make more informed decisions about the cost implications of the three-legged stool design, including ACS. Most of the reforms, savings, and improvements in accuracy the Bureau anticipates will not be possible unless it conducts a short-form-only census. However, the Bureau’s planned replacement for the long form, ACS, faces methodological and other questions that need to be resolved soon. Consequently, the Bureau is taking a significant risk by pinning the success of its reform efforts largely on a survey that may not be an adequate replacement for the long form. The Bureau believes that without ACS, it will need to repeat the Census 2000 design. One methodological question is whether to administer ACS as a mandatory or voluntary survey. Under the Bureau’s current approach, survey recipients will be legally required to respond to ACS. However, in response to congressional concerns that a mandatory survey is intrusive, the Bureau tested conducting ACS as a voluntary survey. Based on the results of the test, the Bureau estimates that a voluntary survey could produce a response rate around 4.2 percentage points lower than a mandatory survey. The Bureau estimates that costs would increase by $59.2 million per year to maintain the same level of reliability achieved from a mandatory survey. Moreover, the Bureau’s efforts to ensure that ACS data will serve as a satisfactory replacement for the long-form data are not yet complete. 
Among the remaining issues, most of which are critical to the reliability of the small geographic area ACS data, are the following:

1. Benchmarking ACS data for small geographic areas to the population counts and characteristics from the 2010 short form.

2. Inconsistency of ACS residency rules—which determine the geographic area in which a person is supposed to be counted—with those used for the census.

3. Consistency of ACS data with long-form data from the 2000 Census.

4. For multiyear averages of ACS data for small areas, consistency with annual ACS data for larger areas and utilization as a measure of change.

Each leg of the Bureau’s three-legged stool is dependent on the others; that is, the implementation of one leg allows the other two legs to operate successfully. For example, ACS is facilitated by first updating the MAF/TIGER database for the ACS sample. Similarly, as noted above, the Bureau’s plan to conduct a short-form-only enumeration depends on ACS. Consequently, the Bureau’s design assumes that by 2008, (1) ACS will be in place nationwide and producing data, (2) a GPS-aligned and modernized MAF/TIGER will be available, and (3) all reengineering efforts will be complete to allow for a true dress rehearsal. Completing any one of these tasks would be a considerable undertaking; for 2010, the Bureau plans to develop, refine, and integrate all three in the space of just a few years. Moreover, the Bureau has no contingency plans other than to revert to a “traditional” census. According to the Bureau, while the failure of any one leg would not doom the census, it could jeopardize the Bureau’s goals. For example, if the MAF/TIGER modernization is not completed on schedule, the Bureau would be unable to employ the GPS-enabled handheld mobile computing devices that enumerators are to use when conducting nonresponse follow-up. This in turn could affect the efficiency of the effort and the quality of the data collected. 
In addition, the Bureau would not have time to conduct the research and testing necessary to improve the long form based on lessons learned in the 2000 Census. Because of limitations in census taking methods, some degree of error in the form of persons missed or counted more than once is inevitable. Since 1980, the Bureau has used statistical methods to generate detailed measures of the undercounts of particular ethnic, racial, and other population groups. To assess the quality of population data for the 2000 Census and to possibly adjust for any errors, the Bureau conducted the Accuracy and Coverage Evaluation (A.C.E.) program. Although the U.S. Supreme Court ruled in 1999 that the Census Act prohibited the use of statistical sampling for purposes of apportioning seats in the House of Representatives, the Court did not specifically address the use of statistical sampling for other purposes. In March 2001, the Acting Director of the U.S. Census Bureau recommended to the Secretary of Commerce that only unadjusted data be released for purposes of congressional redistricting. The Acting Director made this recommendation when, after considerable research, the Bureau was unable to conclude that the adjusted data were more accurate for use in redistricting. Specifically, the Acting Director cited the apparent inconsistency in population growth over the decade as estimated by the A.C.E. and by demographic analysis, which estimated the population using birth, death, and similar records. He noted that the inconsistency raised the possibility of an unidentified error in either the A.C.E. estimates or the census numbers, and the inconsistency could not be resolved prior to April 1, 2001, the legally mandated deadline for releasing redistricting data. Later that year, following additional research, the Acting Director decided against using adjusted census data for nonredistricting purposes, such as allocating federal aid, because A.C.E. 
estimates failed to identify a significant number of people erroneously included in the census. The Acting Director noted that “this finding of substantial error, in conjunction with remaining uncertainties, necessitates that revisions, based on additional review and analysis, be made to the A.C.E. estimates before any potential uses of these data can be considered.” As the Bureau turned toward the 2010 Census, it needed to decide whether it would have a coverage measurement program and how the results would be used. Because of the 1999 U.S. Supreme Court ruling noted earlier, the Bureau could not use coverage measurement results to adjust census data for purposes of congressional apportionment. However, adjusting census data for other purposes remained an open question. In our January 2003 report on the objectives and results of the 2000 A.C.E. program, we noted that an evaluation of the accuracy and completeness of the census is critical given the many uses of census data, the importance of identifying the magnitude and characteristics of any undercounts and overcounts, and the cost of the census overall. We cautioned that the longer the 2010 planning process continues without a firm decision on the role of coverage measurement, the greater the risk of wasted resources and disappointing results. Consequently, we recommended that the Bureau, in conjunction with Congress and other stakeholders, come to an early decision on whether and how coverage measurement will be used in the 2010 Census. In reaching this decision, we recommended that the Bureau (1) demonstrate both the operational and technical feasibility of its coverage measurement methods, (2) determine the level of geography at which coverage can be reliably measured, (3) keep Congress and other stakeholders informed of its plans, and (4) adequately test its coverage measurement methodology prior to full implementation. 
The Bureau agreed with our recommendations, noting that we had identified important steps that should be followed in developing a coverage measurement methodology for the 2010 Census. While certain aspects of the Bureau’s coverage measurement plans are still being developed, the Bureau is not currently planning to develop a procedure that would allow it to adjust census numbers for purposes of redistricting. According to the Director of the U.S. Census Bureau, although the Bureau plans to evaluate the accuracy of the coverage it achieves in the 2010 Census, its experience during the 2000 Census demonstrated “that the science is insufficiently advanced to allow making statistical adjustment to population counts of a successful decennial census in which the percentage of error is presumed to be so small that adjustment would introduce as much or more error than it was designed to correct.” Furthermore, irrespective of whether it is both legal and appropriate to do so, the Bureau does not believe that it can both collect coverage measurement data and complete the analysis of those data’s accuracy in time to deliver the information to the states to meet their redistricting deadlines. Although the Bureau’s experience during the 2000 Census shows that its approach to measuring coverage needs to be improved if it is to be used to adjust census numbers, the Bureau has not yet determined the feasibility of refinements to the 2000 approach or alternative methodologies. Consequently, the Bureau’s decision on how coverage evaluation data will be used in 2010 appears to be premature. Indeed, while the Bureau has reported that the 2000 Census had better coverage compared to the 1990 Census, as noted below, the U.S. population is becoming increasingly difficult to count, a factor that could affect the quality of the 2010 Census. 
More generally, the decennial census is an inherently fragile endeavor, where an accurate population count requires the near-perfect alignment of a myriad of factors ranging from the successful execution of dozens of census-taking operations to the public’s willingness to cooperate with enumerators. External factors such as the state of the economy and world events might also affect the outcome of the census. The bottom line is that while the census is under way, the tolerance for any breakdowns is quite small. Therefore, the Bureau’s ability to maintain the level of quality reported for the 2000 Census is far from guaranteed. Thus, to ensure that the nation uses the best available census data, it will be important for the Bureau to research procedures that, depending on what the results of the coverage evaluation say about the quality of the census data, would allow adjustment, if necessary, for those purposes for which it is both legal and appropriate to do so. The Bureau should conduct this effort on a timetable that allows it to adequately test and refine those procedures, as well as obtain input from both majority and minority parties in the Senate and House of Representatives. In June 2001, the Bureau estimated that the reengineered 2010 Census would cost $11.3 billion in current dollars. This would make the 2010 head count the most expensive in the nation’s history, even after adjusting for inflation. According to the Bureau estimates in June 2001, a repeat of the 2000 approach would cost even more, over $11.7 billion. This estimate of repeating the 2000 approach was revised to approximately $12.2 billion in April 2003. Moreover, the actual cost of the census could end up considerably higher as the Bureau’s initial cost projections for previous censuses proved to be too low because of such factors as unforeseen operational problems or changes to the fundamental design. 
For example, while the Bureau estimated that the 2000 Census would cost around $4 billion using sampling, and that a traditional census without sampling would cost around $5 billion, the final price tag for the 2000 Census (without sampling) was over $6.5 billion. The Bureau’s cost projections for the 2010 decennial census continue an escalating trend. As shown in figure 1, in constant 2000 dollars, the estimated $9.3 billion cost of the 2010 Census represents a tenfold jump over the $920 million spent on the 1970 Census (as noted above, the Bureau estimates the 2010 Census will cost $11.3 billion in current dollars). Although some cost growth can be expected in part because the number of housing units—and hence the Bureau’s workload—has gotten larger, the cost growth has far exceeded the housing unit increase. The Bureau estimates that the number of housing units for the 2010 Census will increase by 10 percent over 2000 Census levels. Meanwhile, the average cost per housing unit for 2010 is expected to increase by approximately 29 percent from 2000 levels (from $56 to $72), nearly five and a half times greater than the $13 it cost to count each household in 1970 (see fig. 2). As for previous censuses, the major cost for the 2010 Census is what the Bureau calls “field data collection and support systems.” Over half of decennial census life cycle costs are attributed to this area. Specific components include the costly and labor-intensive nonresponse follow-up operation as well as support activities such as the opening and staffing of hundreds of temporary local census offices. One reason why field data collection is so expensive is because the Bureau is finding it increasingly difficult to locate people and get them to participate in the census. 
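The cost comparisons above reduce to simple ratios. As a minimal arithmetic check (using only the figures cited in this section, all in constant 2000 dollars):

```python
# Sanity check of the census cost-growth ratios cited above.
# All figures are in constant 2000 dollars, as reported in this section.
total_1970 = 920e6      # total cost of the 1970 Census
total_2010 = 9.3e9      # estimated total cost of the 2010 Census

per_unit_1970 = 13      # cost per housing unit, 1970
per_unit_2000 = 56      # cost per housing unit, 2000
per_unit_2010 = 72      # estimated cost per housing unit, 2010

total_growth = total_2010 / total_1970            # the "tenfold jump"
unit_growth = per_unit_2010 / per_unit_2000 - 1   # "approximately 29 percent"
unit_vs_1970 = per_unit_2010 / per_unit_1970      # "nearly five and a half times"

print(f"Total cost growth, 1970 to 2010: {total_growth:.1f}x")
print(f"Per-unit cost increase, 2000 to 2010: {unit_growth:.0%}")
print(f"2010 per-unit cost relative to 1970: {unit_vs_1970:.1f}x")
```

Each ratio far exceeds the roughly 10 percent growth in housing units the Bureau projects for 2010, which is the report's point: cost growth has outpaced workload growth.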
According to Bureau officials, societal trends, such as the increasing number of respondents who do not speak English, the growing difficulty of finding respondents at home, and the general increase in privacy concerns, impede a cost-effective head count. Further, the legal requirement to count everyone leads the Bureau to employ operations that only marginally improve coverage but that are relatively expensive to conduct. Societal changes have also reduced the cost-effectiveness of the census, and it has become increasingly difficult simply to stay on par with the results of previous enumerations. For example, during the 1990 Census, the Bureau spent $0.88 per housing unit (in 2000 dollars) to market the census and encourage participation and achieved a response rate of 65 percent. During the 2000 Census, the Bureau spent $3.19 per housing unit (in 2000 dollars) to promote participation, but the response rate was 64 percent. The constitutional mandate to count the nation’s population explicitly commits or “exposes” the government to spending money on the census each decade. In this way, the census is similar to other fiscal exposures, such as retirement benefits, environmental cleanup costs, and the payment of Social Security benefits, in that the government is obligated to a certain level of future outlays. Early in each census cycle, expenditures are relatively low as the Bureau plans the census and conducts various tests. As the decade continues, spending increases, spiking during the decennial year when costly data collection activities take place. As shown in figure 3, during the 2000 Census, $4.1 billion—almost two-thirds of the money spent on the entire census—was spent in fiscal year 2000 alone. Current budget reporting, however, does not always fully capture or require explicit consideration of some future fiscal exposures.
In fact, this is particularly true with the census, as annual budget requests and reports provided to Congress early in the decennial census life cycle do not reflect the full cash consequences of the spending and policy decisions. Thus, as it begins funding the 2010 Census early in the decade at relatively low levels, Congress will have implicitly accepted a future spike in costs—essentially a balloon payment in 2010—without requiring the Bureau to clearly define what those costs might be, why they are justified, and what alternatives might exist. As we noted in our January 2003 report on improving the budgetary focus on long-term costs and uncertainties, information on the existence and estimated cost of fiscal exposures needs to be considered along with other factors when making policy decisions. With respect to the census, not capturing the long-term costs of annual spending decisions limits Congress’s ability to control the government’s exposure at the time decisions are made, consider trade-offs with other national priorities, and curtail the growth in census costs. Consequently, fiscal transparency is critical to better reflect the magnitude of the government’s long-term spending on the census and signal unanticipated cost growth. Greater fiscal transparency can also facilitate an independent review and provide an opportunity to improve stakeholder confidence and commitment to the Bureau’s reengineered decennial census design. Our January 2003 report noted that increased supplemental reporting could help improve fiscal transparency and described several options for how to accomplish this. Although that report recommended that OMB consider implementing these options governmentwide, at least two options could be adapted specifically for the Bureau and its parent agency, the Department of Commerce. 
The two options are (1) annually reporting the planned life cycle cash flow and explaining any material changes from previous plans (currently, the Bureau does not make this information available) and (2) setting triggers to signal when the amount of money expected to be spent in any one year exceeds a predetermined amount. Combined, these actions could prompt more explicit deliberations on the cost and affordability of the census and help inform specific cost control measures by Congress, if warranted. The assumptions the Bureau used to develop the life cycle cost estimate could also benefit from greater transparency. More robust information on the likelihood that the values the Bureau assigned to key cost drivers might differ from those initially assumed, and the impact that any differences would have on the total life cycle cost, could provide Congress with better information on the range and probability of the fiscal exposure the nation faces from the upcoming census. As shown in figure 4, the Bureau derived the baseline for its 2010 cost estimate using the actual cost of the 2000 Census and assumptions about certain cost drivers, estimating the cost of the 2010 Census as if the Bureau were to repeat its 2000 design. The key assumptions include a 35 percent decrease in enumerator productivity, a pay rate increase for census workers from 2000 levels, a mail-back response rate lower than Census 2000 levels, and inflation. The projected costs and savings of repeating the 2000 design versus the Bureau’s approach based on the three-legged stool are shown in table 2. Transparent information is especially important because decennial cost estimates are sensitive to many key assumptions. In fact, for the 2000 Census, the Bureau’s supplemental funding request for $1.7 billion in fiscal year 2000 primarily involved changes in assumptions related to increased workload, reduced employee productivity, and increased advertising.
Given the cost estimates’ sensitivity to key assumptions, greater transparency could be achieved by showing the range and likelihood of how the true values of cost drivers could differ from those assumed and how those differences would affect estimates of total cost. Thus, if early research and testing show that response rates may be higher than originally anticipated, or that enumerator productivity could be better than expected, the Bureau can report on the nature of its changing assumptions and their effect on life cycle costs. Also important, by providing information on the likely accuracy of assumptions concerning cost drivers, the Bureau would better enable Congress to consider funding levels in an uncertain environment. Other key areas in which changes in assumptions can greatly affect costs are salary rates for enumerators, the future price of handheld mobile computing devices, and inflation. Our prior work has described how agencies provide supporting information when developing budget assumptions. For example, the Nuclear Regulatory Commission identifies a basis and a certainty level for the budget assumptions it uses for internal reporting. A basis summarizes the facts that were evaluated to justify the assumption, while a certainty level depicts the likelihood of its occurrence as high, medium, or low. Finally, it is important to have timely cost information for congressional decision making. The Bureau’s life cycle estimates were updated in April 2003 after first being presented in June 2001—nearly a 2-year interval. In addition, when we requested additional information on life cycle costs, the Bureau took several months to provide information on its life cycle cost estimates and assumptions, ultimately revising its total cost estimates before providing us with the data.
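The kind of sensitivity reporting described here can be illustrated with a minimal sketch; the workload, pay rate, and productivity figures below are hypothetical placeholders, not Bureau numbers.

```python
# Minimal sketch (hypothetical numbers) of how the reported effect of one
# cost-driver assumption on total cost could be shown, paired with an
# NRC-style certainty level for each assumed value.
WORKLOAD_CASES = 40_000_000   # hypothetical nonresponse follow-up cases
PAY_PER_HOUR = 15.0           # hypothetical enumerator pay rate (dollars)

def follow_up_cost(cases_per_hour: float) -> float:
    """Field cost if each enumerator completes `cases_per_hour` cases."""
    return WORKLOAD_CASES / cases_per_hour * PAY_PER_HOUR

baseline = follow_up_cost(1.20)  # baseline productivity assumption

# Range of plausible productivity values, each with a certainty label
for productivity, certainty in [(1.00, "low"), (1.20, "medium"), (1.40, "high")]:
    cost = follow_up_cost(productivity)
    print(f"{productivity:.2f} cases/hr ({certainty} certainty): "
          f"${cost / 1e9:.2f}B ({(cost - baseline) / 1e9:+.2f}B vs. baseline)")
```

Reporting a table like this for each key driver (productivity, salary rates, device prices, inflation) would give Congress the range and likelihood information the report calls for, rather than a single point estimate.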
The Bureau has taken several positive steps to correct problems encountered in planning past censuses, and the Bureau appears to be further along in planning the 2010 Census than at this same point during the 2000 Census cycle. Although an improvement over past efforts, the Bureau’s 2010 planning process still contains certain weak points that, if not addressed, could undermine a cost-effective head count and make it more difficult to obtain the support of Congress and other stakeholders. The characteristics of the census—long-term, large-scale, high-risk, costly, and politically sensitive—together make a cost-effective enumeration a monumental project management challenge, one that demands meticulous planning. To help determine the principal ingredients of successful project planning, we reviewed a number of guides to project management and business process reengineering. Although there is no one best approach to project planning, the guides we reviewed contained many elements in common, including the following:

- Developing a project plan. The plan should consider all phases of the project and should have clear and measurable goals; all assumptions, schedules, and deadlines clearly stated; and needed skills and resources identified.

- Evaluating human resource implications. This includes assessing needed competencies and how they will be acquired and retained.

- Involving stakeholders and incorporating lessons learned. Stakeholders, both internal and external to an organization, have skills and knowledge that could contribute to a project and should be involved in the decision-making process. An organization should focus on the highest priority stakeholder needs and mission goals. Evaluating past performance and capitalizing on lessons learned is also important for improving performance.

- Analyzing and mitigating risks. This involves identifying, analyzing, prioritizing, and documenting risks. Ideally, more than one alternative should be assessed.

- Monitoring progress.
Measurable performance goals should be identified, and performance data should be gathered to determine how well the goals are being achieved. The Bureau has made considerable progress in planning the 2010 Census, and some of the positive steps taken to date include the following efforts:

- Early in the decade, senior Bureau staff considered various goals for the 2010 Census and articulated a design strategy to achieve those goals. Senior Bureau officials collaborated on this initial design plan to set the stage for further refinements during later field testing and research activities.

- The Bureau has involved experienced staff in the design process through cross-divisional planning groups. Staff involved in these planning groups will ultimately be responsible for implementing the 2010 Census. According to Bureau officials, this is a departure from the 2000 Census planning effort, when Bureau staff with little experience in conducting the census played a key role in designing the decennial census, which resulted in impractical reforms that could not be implemented.

- The Bureau has recognized the importance of strategically managing its human capital to meet future requirements. The planning and development of the 2010 Census will take place at a time when the Bureau could find itself experiencing substantial employee turnover (three senior Bureau managers left the agency in 2002, and according to a report by the Department of Commerce Inspector General, the Bureau could lose through retirement around half of the senior staff who carried out the 2000 Census). The Bureau, as part of a broader risk assessment, plans to provide less experienced staff the opportunity to obtain operational experience prior to the actual 2010 Census. In addition, the Bureau has provided training in project management and has encouraged staff to take training courses in management and planning.

However, other aspects of the Bureau’s 2010 planning process could be improved.
A more rigorous plan would better position the Bureau to fulfill its key objectives for the 2010 Census and help demonstrate to Congress and other stakeholders that it can effectively design and manage operations as well as control costs. Although the Bureau has developed project plans for some of the key components of its 2010 strategy, it has not yet crafted an overall project plan that (1) includes milestones for completing key activities; (2) itemizes the estimated cost of each component; (3) articulates a clear system of coordination among project components; and (4) translates key goals into measurable, operational terms to provide meaningful guidance for planning and measuring progress. OMB Circular A-11 specifies that an agency’s general goals should be sufficiently precise to direct and guide agency staff in actions that carry out the agency’s mission and aid the agency in developing annual performance goals. The importance of this information for improving accountability and performance can be seen, for example, in the Bureau’s principal goal to increase coverage and accuracy. Though the goal is laudable, the Bureau has yet to attach any numbers to it. This makes it difficult to evaluate the costs and benefits of alternative designs, determine the level of resources needed to achieve this goal, measure the Bureau’s progress, or hold managers accountable for results. Bureau managers provided us with several documents that, pieced together, present 2010 Census goals and strategies, life cycle costs, and schedules, but no single, comprehensive document exists that integrates this information. For example, the Bureau’s life cycle cost estimates and information on its performance goals were contained in two separate documents, making it hard to see the connection between cost and the Bureau’s objectives.
Likewise, a draft document, entitled 2010 Reengineered Census Milestone Schedule, included various milestones by fiscal quarter, but did not contain information on dependencies and interrelationships among the various aspects of the project. In its 2001 letter to the Bureau’s acting director, the National Academy of Sciences’ (NAS) Panel on Research on Future Census Methods raised similar concerns about the need for a coherent project plan. The panel noted that it wanted “to see a clearer case for components of the 2010 census strategy, itemizing the goals, costs, and benefits of each initiative and indicating how they integrate and contribute to a high quality census.” To that end, NAS recommended that the Bureau develop what it called a business plan for 2010. The Bureau is making an effort to develop and incorporate the lessons learned from the 2000 Census and, in fact, created an elaborate evaluation program to help inform this effort. Moreover, the Bureau chartered 11 planning groups consisting of knowledgeable census staff (see app. III for the 2010 planning organization). However, the Bureau’s ability to build on the results of 2000 could be hampered by the fact that while the evaluation program assessed numerous aspects of the census, the Bureau still lacks data and information on the performance of key census activities, as well as on how specific census operations contributed to two of the Bureau’s key goals for 2000: improved accuracy and cost-effectiveness. For example, as noted earlier, the cost of the 2010 Census is increasing relative to 2000 partly because the Bureau expects nonresponse follow-up enumerators will be less productive in 2010. Because of various societal factors, it will simply take enumerators more time to complete their work. 
And yet, despite the importance of accurate productivity data to inform the Bureau’s planning and budgeting processes for 2010, the Bureau had trouble obtaining quality productivity data following the 2000 Census. Although the Bureau later committed additional resources to refine the numbers, the adjustment was coarse and addressed just one of the two known problems. Moreover, because of differences in the way the Bureau measured staffing levels and hours worked from census to census, none of the productivity data from the last few censuses are comparable. Another area in which the Bureau lacks useful performance information is the extent to which the dozen or so separate activities used to build MAF in 2000 contributed to its overall accuracy relative to one another. Without this information, the Bureau has limited data with which to guide investment and trade-off questions for 2010, such as which activity provided the biggest “bang for the buck” and should thus be repeated, or whether it would be more effective for the Bureau to improve accuracy and coverage by putting more resources into MAF-building activities or some other operation altogether, such as marketing. To date, the Bureau’s planning groups have incorporated a variety of lessons learned from the evaluations of the 2000 Census. For example, the Coverage Improvement Planning Group observed an increase in inconsistent responses in the 2000 Census compared to the previous census (e.g., some questionnaires were marked “uninhabited,” but individuals were enumerated at the sites). According to a Bureau official, one hypothesis for the higher rate of inconsistent responses was that enumerators were encouraged to fill in information even when not all of the relevant information was known. The Bureau plans to address this issue by building “edits” into its planned handheld mobile computing devices so that inconsistent data cannot be entered.
In addition, the Coverage Improvement Planning Group looked at the 2000 Census experience to provide recommendations for the Bureau’s 2004 test. Risk management is important for preparing for contingencies or changes in the external operating environment. At the time of our review, the Bureau had completed a risk assessment of some aspects of its operations as part of its OMB Circular A-11, Exhibit 300 submission, and for certain aspects of the reengineering efforts. However, the Bureau had not developed a risk assessment that addressed the entire 2010 Census, including ACS and the MAF/TIGER modernization. The risk assessment for the reengineering effort uses a consistent scoring system to assess the severity of the risks identified and addresses various contingencies and mitigation strategies, such as preparing for the retirement of key personnel and using succession planning to offset the attrition. The scoring system and how it was applied are clearly described in the plan, making it easy to evaluate the way it was used. However, the assessment does not provide extensive detail on the mitigation actions proposed. Also, it does not indicate how risks were identified and whether any risks were excluded. A notable omission is that it did not address the risks should ACS or MAF/TIGER fail or go unfunded, and the impact this might have on the census as a whole. As mentioned earlier, the Bureau’s three-legged stool strategy assumes that all three legs must work together to achieve its goals. One of the reasons for doing a risk analysis is to prepare to make trade-offs when faced with inevitable budgetary pressures, operational delays, or other risks. Lacking information on trade-offs, the Bureau maintains that its only alternative to the reengineering is to repeat the 2000 Census design, an approach that Bureau officials believe would be extremely expensive.
The obstacles to conducting a cost-effective census have grown with each decade, and as the Bureau looks toward 2010, it confronts its biggest challenge yet. Consequently, the Bureau will need to balance the growing cost, complexity, and political sensitivity of the census with meticulous planning. As the Bureau’s past experience has shown, early investments in planning can help reduce the costs and risks of its downstream operations. Moreover, a rigorous plan is essential for securing early agreement between the Bureau and Congress on the Bureau’s fundamental strategy for 2010. Congressional support—regardless of whether the Bureau’s current approach or an alternative is ultimately selected—is crucial for creating a stable environment in which to prepare for the census and avoiding a repeat of the 2000 Census, when disagreement over the Bureau’s methodology led to late design changes and additional costs and risks. The Bureau has laid out an ambitious schedule of planning, testing, and evaluation for the coming years, culminating with a “dress rehearsal” in 2008. While midcourse corrections are to be expected as a result of these efforts, it will be important for the Bureau to proceed with as few alterations to its fundamental strategy as possible so that all of the operations used in 2010 will have been thoroughly road tested. The Bureau appears to be further along in planning the 2010 Census than at a similar point during the 2000 Census cycle, and its efforts to enhance past planning practices are commendable. Focusing its activities on early design, research, and testing and organizing its reengineering activities around cross-divisional planning groups are just some of the noteworthy improvements the Bureau has made.
However, the Bureau’s plans for 2010, while not unreasonable on the surface, lack a substantial amount of supporting analysis, budgetary transparency, and other information, making it difficult for us, Congress, and other stakeholders to properly assess the feasibility of the Bureau’s design and the extent to which it could lead to greater cost-effectiveness compared to alternative approaches. Questions surrounding the Bureau’s underlying budget assumptions; uncertainties over ACS; the failure to translate key goals into measurable, operational terms; and the lack of important performance data from the 2000 Census to inform 2010 decision making are just some of the problematic aspects of the 2010 planning process. More than simply paperwork or documentation issues, this information is essential for improving the performance and accountability of the Bureau and of the decennial census in particular. To be sure, some challenges are to be expected in an endeavor as demanding as counting a population that is mobile and demographically complex and whose members reside under a multitude of living arrangements. Further, shortcomings with prior censuses call for the Bureau to consider bold initiatives for 2010 that entail some risk. However, if Congress is to accept and fund the Bureau’s approach—now estimated at more than $11 billion—then the Bureau needs to more effectively demonstrate that it has (1) selected a design that will lead to the most cost-effective results and (2) established a rigorous capacity to manage risks, control costs, and deliver a successful head count. Moreover, to ensure the nation uses the best available data, it will be important for the Bureau to research procedures that would allow it to adjust census results for purposes for which it is both legal and appropriate to do so, if it is determined that the adjusted figures would provide greater accuracy than the enumeration data.
Such procedures could function as a safety net should there be problems with the initial census count. It will also be important for policymakers to consider, early in the decade, the long-term costs associated with the census and finding the right balance between controlling mushrooming costs and improving accuracy. Although initial spending on the census is relatively low, it will accelerate in the years ahead, culminating with a balloon payment in 2010 when data collection and other costly operations take place. Greater fiscal transparency prior to getting locked into a particular level of spending could help inform deliberations on the extent to which (1) the cost of the census is reasonable, (2) trade-offs will need to be made with competing national priorities, and (3) additional dollars spent on the census yield better quality data. Just over 6 years remain until Census Day 2010. While this might seem like an ample amount of time to shore up the Bureau’s planning process and take steps to control costs, past experience has shown that the chain of interrelated preparations that need to occur at specific times and in the right sequence leave little room for delay or missteps. To help control the cost of the 2010 Census and inform deliberations on the acceptability of those costs, we recommend that the Director of the Office of Management and Budget take steps to ensure that the Bureau improves the transparency of the fiscal exposure associated with the census. Specifically, OMB should ensure that the Bureau, in a notational item in the Program and Financing schedule of the President’s budget, include an updated estimate of the life cycle costs of the census and the amount of money the Bureau expects to spend in each year of the cycle, as well as an explanation of any material changes from previous plans. 
The information should also contain an analysis of the sensitivity of the cost figures to specific assumptions, including a range of values for key cost assumptions, the likelihood associated with those ranges, and their impact on the total estimated cost of the census. As part of this process, OMB should establish triggers that would signal when the yearly 2010 Census costs, total 2010 Census costs, or both exceeded some predetermined amount. In such instances, the Bureau should then be required to prepare a special report to Congress and OMB justifying why the additional costs were necessary and what alternatives were considered. Further, to enhance the Bureau’s performance and accountability, as well as to help convince Congress and other stakeholders that the Bureau has chosen an optimum design and will manage operations and control costs effectively, we recommend that the Secretary of Commerce direct the Bureau to improve the rigor of its planning process by developing an operational plan that consolidates budget, methodological, and other relevant information about the 2010 Census into a single, comprehensive project plan that would be updated as needed. Individual elements could include specific performance goals, how the Bureau’s efforts, procedures, and projects would contribute to those goals, and what performance measures would be used; risk and mitigation plans that fully address all significant potential risks; detailed milestone estimates that identify all significant interdependencies; and annually updated life cycle cost estimates, including a sensitivity analysis and an explanation of significant changes in the assumptions on which these costs are based.
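The trigger mechanism recommended above can be sketched in a few lines; the ceilings and yearly spending figures below are hypothetical placeholders, not figures from the report.

```python
# Hypothetical sketch of the recommended spending triggers: flag any fiscal
# year whose census costs exceed a yearly ceiling, and flag the life cycle
# total if it exceeds a total ceiling, prompting a special report.
YEARLY_CEILING = 1.5e9       # hypothetical per-year threshold (dollars)
LIFECYCLE_CEILING = 11.5e9   # hypothetical total threshold (dollars)

# Hypothetical projected spending by fiscal year (dollars)
projected = {2004: 0.5e9, 2006: 0.8e9, 2008: 1.6e9, 2010: 7.0e9}

def check_triggers(spending_by_year, yearly_cap, total_cap):
    """Return years breaching the yearly cap, and whether the cumulative
    life cycle total breaches the total cap."""
    breached_years = sorted(
        year for year, amount in spending_by_year.items() if amount > yearly_cap
    )
    total_breached = sum(spending_by_year.values()) > total_cap
    return breached_years, total_breached

years, total_exceeded = check_triggers(projected, YEARLY_CEILING, LIFECYCLE_CEILING)
print(years, total_exceeded)   # years that would require a special report
```

The design point is that the thresholds are set in advance, so a breach is a mechanical signal for additional justification rather than a judgment made after costs have already grown.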
Moreover, to help ensure that the nation has at its disposal the best possible data should there be problems with the quality of 2010 Census, the Bureau, with input from both majority and minority parties in the Senate and House of Representatives, should research the feasibility of procedures that could allow it to adjust census results for those purposes for which it is both legal and appropriate to do so and, if found to be feasible, test those procedures during the 2006 census test and 2008 census dress rehearsal. The Secretary of Commerce forwarded written comments from the U.S. Census Bureau on a draft of this report, which we received on November 6, 2003. The comments are reprinted in appendix I. The Bureau generally disagreed with many of our key findings, conclusions, and recommendations. The Bureau believes that the report, in its discussion of escalating census costs, ignores the fact that a key cost driver is stakeholders’ demand for better accuracy. We agree with the Bureau that its mandate to count each and every resident in the face of countervailing societal trends is an important reason for the cost increases. As we noted in the report, societal changes have reduced the cost-effectiveness of the census, and it has become more and more difficult to stay on par with the results of previous enumerations. Similarly, we stated that “the legal requirement to count everyone leads the Bureau to employ operations that only marginally improve coverage but that are relatively expensive to conduct.” Further, we do not, as the Bureau asserts, treat the cost issue in a vacuum, and we agree with the Bureau that little would be gained by focusing on the cost of the 2010 Census alone. Rather, any deliberations on the 2010 Census need to focus on how changes in spending on the census might affect the quality of the count.
Our draft report emphasized this exact point, noting that “The growing cost of the head count, at a time when the nation is facing historic budget deficits, highlights the importance of congressional deliberations on the extent to which each additional dollar spent on the census results in better data, as well as how best to balance the need for a complete count, with the need to ensure the cost of a complete count does not become unreasonable.” Similarly, we concluded that “it will also be important for policymakers to consider, early in the decade, the long-term costs associated with the census and finding the right balance between controlling mushrooming costs and improving accuracy.” The Bureau also believes the report implies that the cost increases are caused by the reengineering effort. Our draft report did not state, nor did we intend to imply, that the reengineering effort would cause most of the projected cost increases for the 2010 Census. In fact, our report even notes that the Bureau’s reengineering strategy has the potential to reduce costs relative to a design that would repeat the Census 2000 approach. To help clarify this point, we added text that describes how a repeat of the 2000 approach would be more costly than the reengineered design, according to Bureau estimates. The Bureau disagreed with our recommendation to OMB regarding the need for greater budgetary transparency, noting that the vagueness of out-year cost estimates stems from a fundamental difference of opinion between the administration and Congress over the appropriate time to share that information. We believe that it is important for the administration to provide details of out-year cost projections for the decennial census for the reason stated in our draft report: annual budget requests and reports provided to Congress early in the decennial census life cycle do not reflect the full cash consequences of the spending in later years of the decade.
We understand that the Bureau has followed the administration’s guidance on providing out-year cost estimates; this is also why we directed our recommendation for greater fiscal transparency to OMB, which we discuss in greater detail below. The Bureau disagreed with our recommendation to improve the rigor of its planning process by developing an operational plan that consolidates budget, methodological, and other relevant information into a single, comprehensive project plan. The Bureau noted that these documents already exist and are widely available, and that the Bureau already shares them with Congress, us, the National Academy of Sciences (NAS) Panel on Research on Future Census Methods (the panel responsible for reviewing the census), and other stakeholders. While we agree with the Bureau that some of this information is available (and we noted this fact in our draft report), it is piecemeal—one can only obtain it by cobbling together the Bureau’s budget submission, its strategic plan, and several other documents, and even then, key information such as performance goals would still be lacking. Further, although the Bureau notes that it has provided this information to the NAS panel, as we stated in our report, NAS, like us, also found the information wanting. As we described in the report, the panel shared our concerns over the need for a coherent project plan and called on the Bureau to develop a business plan that, among other things, itemized the goals, costs, and benefits of each census component and described how they contribute to a high-quality census. Whether it is called a business plan or a project plan, such information is not, as the Bureau maintains, simply “more process.” Quite the contrary, this information is essential for improving performance; facilitating a thorough, independent review of the Bureau’s plans; and demonstrating to Congress and other stakeholders that the Bureau can effectively design and manage operations and control costs.
The Bureau incorrectly asserts that our report criticizes it for not completing the evaluations of the 2000 Census in a timely manner. Our report did not address this matter, although NAS’s Second Interim Report on Planning the 2010 Census urged the Bureau to “give high priority to evaluation studies” and complete them as expeditiously as possible. We agree that the Bureau’s planning staff do have access to the draft evaluations, and in fact, we noted in the report that they are using them in planning for the 2010 Census. The key point, however, is the Bureau’s ability to build on the results of the 2000 Census. This could be hampered by the fact that while the evaluation program assessed numerous aspects of the census, the Bureau still lacks data on the performance of key census activities as well as how specific census operations contributed to two of the Bureau’s key goals for 2000: improved accuracy and cost-effectiveness. The Bureau agreed with us that it is important to bring closure to the discussion on whether and how coverage measurement will be used in the 2010 Census. However, the Bureau believes that the approach used for the 2000 Census proved that it was not feasible to produce a final analysis of coverage measurement in time to meet redistricting requirements. We agree with the Bureau’s assessment that the coverage measurement approach used in the 2000 Census needs to be reworked. However, this should not preclude it from researching alternative approaches for the 2010 Census in light of the fact that the Bureau’s ability to maintain the level of quality reported for the 2000 Census is less than certain. Finally, the Bureau questioned our assessment that the only contingency plan for conducting the 2010 Census, if the reengineered effort fails, was to fall back on the Census 2000 methods. 
The Bureau maintains that the 2000 Census was by most accounts a very successful census and, accordingly, the Bureau already has available the methods and procedures for taking an excellent census. Our report does not advocate the development of another set of census methods. Rather, we were trying to illustrate the challenge the Bureau faces in implementing its reengineering plans, where the failure of any one leg could compromise the other two, thereby requiring the Bureau to rely on the approach it used for the 2000 Census. According to Bureau officials, this in turn could make it difficult for the Bureau to accomplish its goals for the 2010 Census, which include cost containment and better quality data. On October 14, 2003, the Associate Director for General Government Programs, OMB, provided written comments on a draft of this report, which are reprinted in appendix II. OMB shared our view that the costs and risks associated with the 2010 Census must be carefully monitored and evaluated throughout the decade. OMB also agreed that it is essential to understand the key cost drivers and said that it is working with the U.S. Census Bureau to ensure that the Bureau develops high-quality, transparent life cycle cost estimates. However, OMB disagreed with our recommendation that it ensure that the Bureau include a notational item in the Program and Financing (P&F) schedule of the President’s Budget with an updated estimate of the life cycle costs of the census and the amount of money the Bureau expects to spend in each year of the cycle, as well as an explanation of any significant changes from previous plans. OMB believes that the Bureau’s report on the life cycle costs, which is updated regularly, is the best mechanism to present estimates of the total life cycle costs and explanations for any material changes from previous plans. 
Further, OMB noted that presenting this information in the P&F schedule is cumbersome and unnecessary because the Analytical Perspectives volume of the President’s Budget currently shows out-year estimates that incorporate anticipated programmatic changes of the Decennial Census within the Periodic Censuses and Program account. As noted in our report, we do not believe the information OMB currently reports to Congress is sufficiently timely or detailed to provide the level of transparency needed for effective congressional oversight and cost control. Indeed, while OMB cites the Bureau’s life cycle report, the document that we reviewed for this report took the Bureau nearly 2 years to revise. Moreover, the revised estimates, like the original estimates, overstated the life cycle cost estimate by $300 million because the Bureau did not take into account a surplus of that amount that it identified near the end of fiscal year 2000. Although the Bureau is to reissue the 2010 life cycle cost estimates early in calendar year 2004, the incorrect estimates will have been in circulation for more than 2 years. Additionally, the information contained in the Analytical Perspectives volume of the President’s Budget is limited. For example, it provides information on out-year estimates for only 5 years. As a result, the volume will not include estimates for the high-cost year of 2010 until the release of the President’s fiscal year 2006 budget. Further, the Analytical Perspectives volume lacks information on the sensitivity of cost figures to specific assumptions and the likelihood that these estimates will be realized. It also does not contain any explanations of changes in cost estimates from year to year. Complete and transparent information on out-year costs is important to inform deliberations on the acceptability of these costs and to ensure that Congress understands the possible range of census life cycle costs. 
OMB also disagreed with our recommendation to establish triggers to signal when the yearly 2010 Census costs, total 2010 Census costs, or both exceeded some predetermined amount. OMB noted that it has established internal procedures within its budget reviews to monitor 2010 Census costs and believes they are sufficient for ensuring that estimates are not exceeded without clear justification. OMB added that this justification could be included in the Bureau’s updates to its life cycle cost estimates. Although OMB’s internal procedures might be sufficient for OMB’s requirements, they do little to address the fundamental need for greater fiscal transparency. Continued reliance on these procedures would inhibit independent review by congressional and other external stakeholders, as well as limit informed discussion of the trade-offs of dollars versus accuracy and what cost control measures, if any, might be needed to make the 2010 Census more affordable. In closing, OMB commented that the Bureau’s reengineering plan is being reviewed by NAS as well as seven advisory committees. OMB stated that the analyses stemming from these reviews, such as NAS’s recently issued report on census planning, enhance the Bureau’s accountability and help ensure “that the ultimate 2010 Census design is optimal.” We agree with OMB that NAS and the advisory committees are important for reviewing the Bureau’s plans and holding the Bureau accountable for a cost-effective census in 2010. And this is precisely why we made the recommendations that we did. Without a transparent budgeting and planning process, a thorough, independent review by these and other external groups would be difficult, if not impossible. That greater transparency is needed in both these areas is highlighted not just in our report, but in the very NAS study that OMB cites. 
Indeed, NAS found that “a major conclusion of the panel is that discussion of the 2010 Census design needs to be more fully informed by the evaluation of various trade-offs—the costs and benefits of various reasonable approaches in order to make wise decisions.” As agreed with your offices, unless you release its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time we will send copies to other interested congressional committees, the Secretary of Commerce, the Director of the U.S. Census Bureau, and the Director of the Office of Management and Budget. Copies will be made available to others upon request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-6806 or [email protected] or Robert Goldenkoff, Assistant Director, at (202) 512-2757 or [email protected]. Key contributors to this report were Richard Donaldson, Ty Mitchell, Robert Yetvin, and Christine Bonham. The U.S. Census Bureau (Bureau) has given its Decennial Management Division responsibility for planning the 2010 Census, including the American Community Survey (ACS) and the Master Address File/Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) enhancements. The division established an executive steering committee, shown in figure 5, for this purpose. Each group under the 2010 Census planning organization has specific activities that it was charged with studying. Listed below are the research and development planning groups chartered at the time of our review and the activities assigned to each. 
The key to a successful census is meticulous planning, as it helps ensure greater cost-effectiveness. However, the 2000 and previous censuses have been marked by poor planning, which unnecessarily raised the costs and risks of those efforts. GAO was asked to (1) review the U.S. Census Bureau's (Bureau) current plans for 2010 and whether they might address shortcomings of the 2000 Census, (2) analyze the Bureau's cost estimates, and (3) review the rigor of the Bureau's 2010 planning process. While preparations for the 2010 Census appear to be further along than they were at a similar point prior to the 2000 Census, cost and design information had to be pieced together from various documents. The Bureau's plans also lacked supporting analysis, budgetary transparency, and other information, which made it difficult to verify the Bureau's assertions concerning the costs and benefits of its proposed approach. Further, unlike in previous censuses, the Bureau does not intend to develop coverage measurement procedures that would allow it to adjust census data for certain purposes. Although its experience in 2000 shows that its coverage measurement methodology needs improvement, GAO believes the Bureau should have researched alternative approaches more thoroughly and disclosed the results of its research before making a decision. In designing the 2010 Census, the Bureau hoped to address several shortcomings of the 2000 enumeration, namely to (1) increase the relevance and timeliness of data, (2) reduce operational risk, (3) increase coverage and accuracy, and (4) contain costs. To achieve these goals, three components--all new operations--are key to the Bureau's plans for 2010. They include enhancing procedures for building the census address list and associated maps, replacing the census long-form questionnaire with a more frequent sample survey, and conducting a short-form-only census. 
The Bureau's approach has the potential to achieve the first three goals, but reducing operational risk could prove to be more difficult as each of the three components actually introduces new risks. The Bureau will also be challenged to control the cost of the 2010 Census, now estimated at over $11 billion. The current budget reporting process masks the long-term costs of the census, most of which will be incurred in 2010, making it difficult for Congress to monitor the Bureau's planned expenditures. Certain actions by the Office of Management and Budget could produce greater fiscal transparency and thus help inform congressional deliberations on how to best balance the need for an accurate census with the need to ensure a reasonable cost for this endeavor.
Individuals diagnosed with ESRD may be influenced by a variety of factors when choosing the type of dialysis to receive. One factor that may influence the individual’s choice of dialysis is the individual’s awareness of the different types of dialysis available. For example, some individuals may not be aware that peritoneal dialysis is an option to replace kidney functioning and, as a result, would not choose to undergo peritoneal dialysis. The individual’s choice of which type of dialysis to perform can also be influenced by the type of dialysis that the individual’s physician recommends and by whether the individual has a partner to assist with dialysis treatments. Additionally, some individuals may have physical conditions that prevent them from self-performing dialysis—such as vision problems or dexterity issues. The individual’s choice may also be influenced by how quickly the dialysis treatments need to begin—as individuals who need to urgently start dialysis may not have time to be trained in conducting dialysis at home. Hemodialysis conducted in a facility typically consists of three dialysis treatments per week. Peritoneal dialysis is conducted daily. Recent technological advances in hemodialysis equipment have made it easier for hemodialysis to be done more frequently. For example, a new hemodialysis machine—designed for use at home—requires patients to dialyze five to seven times per week and is reported by some dialysis providers to be more user-friendly than traditional dialysis machines. As a result, most home hemodialysis patients dialyze five to seven times per week. Data from USRDS show that, compared to patients who dialyzed in a facility in 2006, home dialysis patients were more likely to be younger, white, located in rural areas, employed, and covered by employer or group health insurance, and were less likely to be Hispanic. 
USRDS data for 2006 also indicate that patients who received home dialysis may be healthier than patients who dialyzed in a facility. Home dialysis patients were more likely to be on the wait-list for a kidney transplant (which requires a certain level of health status) and had lower rates of diabetes and hypertension as the primary disease that caused their ESRD compared with patients who received dialysis in a facility. Limited evidence suggests, and several dialysis provider officials and medical experts we interviewed believe, that home dialysis results in better clinical outcomes for individuals with ESRD. These better clinical outcomes include better control over fluid levels, less need for dialysis drugs, fewer hospitalizations, and better quality of life. Improved clinical outcomes may be due to the features of home dialysis that its supporters believe more closely mimic natural kidney functioning—home dialysis can be done more frequently, with less time between treatments and for longer periods of time, than dialysis received in a facility three times a week. Perhaps as a result of this more frequent dialysis, USRDS reported that the overall Medicare costs for peritoneal dialysis patients—including hospitalization costs as well as costs for dialysis services—were about 26 percent less than the total Medicare costs for hemodialysis patients in 2006. Similarly, a Medicare health maintenance organization (HMO) reported to us that moving some of its patients from facility hemodialysis to home hemodialysis has substantially reduced hospitalizations, and overall health costs, for those patients. That HMO has also published a study documenting relatively low hospitalization rates for its home hemodialysis patients. 
However, in general, it is challenging to determine the causes of differences in clinical outcomes between patients who receive dialysis at home versus in a facility because—as we previously noted—the characteristics of patients who dialyze at home are different than those who dialyze in a facility. The National Institutes of Health (NIH) is conducting randomized clinical trials that are intended to provide information on the clinical outcomes associated with more frequent dialysis received in a facility compared to dialysis received three times a week in a facility, and with home nocturnal hemodialysis compared to three times weekly home hemodialysis. Results from the NIH trials are expected to be available in 2010. The self-reported cost information we obtained from the six dialysis providers indicated variation in the cost to provide home dialysis when compared with dialysis provided in a facility. The six dialysis providers reported lower costs per treatment to provide home dialysis than to provide dialysis at a facility, though the amount by which home dialysis costs were lower varied widely among the providers. Because patients who dialyze at home typically receive dialysis treatments more than three times per week, some providers’ costs to provide home dialysis on a weekly basis can be higher than their costs to provide dialysis at a facility. However, other dialysis providers reported lower costs per week to provide home dialysis compared with dialysis provided in a facility. Additionally, several dialysis providers indicated that, for home dialysis patients, the costs of a dialysis treatment with a training session were significantly higher than the costs of a dialysis treatment without a training session. The self-reported cost information that we obtained from six dialysis providers indicated that the average costs per treatment for home dialysis were lower than the average costs per treatment for dialysis provided in a facility. 
However, the extent to which the average costs per treatment for home dialysis were lower than those for dialysis provided in a facility varied widely among the dialysis providers. For home hemodialysis, dialysis providers reported to us that their average costs per treatment were 17 to 50 percent lower than the average costs per treatment for dialysis provided in a facility. For peritoneal dialysis, dialysis providers reported to us that their average costs per treatment were 48 to 68 percent lower than the average costs per treatment for hemodialysis provided in a facility. The average costs per treatment that the dialysis providers reported to us include costs for certain items associated with providing dialysis services, including supplies, equipment, drugs, overhead, and staff. Officials from dialysis providers indicated to us that supply costs are higher for home dialysis compared with dialysis provided in a facility. One reason that supply costs for home dialysis patients are higher is that certain supplies that can be reused for patients who receive dialysis in a facility often cannot be reused by home patients. For example, patients who receive dialysis in a facility can reuse their own dialyzer—the artificial kidney used to filter the blood during hemodialysis—because the facility is able to sterilize the dialyzer between dialysis treatments. Patients who dialyze at home need to use dialyzers that are intended for one-time use, which results in higher supply costs. In contrast, other cost items (such as drugs and staff) were reported to be lower for home dialysis than for dialysis provided in a facility. For example, after home dialysis patients have been trained to conduct dialysis, there are lower staffing costs associated with home dialysis because patients require fewer staffing resources—the patients (or their caregivers) perform at home the dialysis treatments that staff perform for dialysis provided in a facility. 
Table 1 provides one dialysis provider’s self-reported average costs per treatment in 2008 for hemodialysis provided in a facility compared to hemodialysis provided at home, which indicates that the supply costs are higher for home hemodialysis while the other items are lower for home hemodialysis compared with hemodialysis provided in a facility. Table 2 provides another dialysis provider’s self-reported average costs per treatment in 2006 for hemodialysis provided in a facility compared to peritoneal dialysis provided at home. The provider reported that its supply costs were higher for peritoneal dialysis provided at home, while the other items were lower for peritoneal dialysis compared with hemodialysis provided in a facility. All six dialysis providers in our review reported lower average costs per treatment for home dialysis when compared to dialysis provided in a facility; however, some dialysis providers reported higher costs per week for home dialysis compared with dialysis provided in a facility, while others reported lower costs per week for home dialysis. For home hemodialysis, three of the five dialysis providers included in our review reported higher costs per week for providing home hemodialysis compared with the costs per week of providing dialysis in a facility. Officials from these three dialysis providers indicated that the costs per week for patients who dialyze at home were higher because these patients typically dialyze more frequently than three times per week. Home hemodialysis is often performed five to seven times per week. 
For example, using one provider’s self-reported average costs per treatment from table 1, the average costs per treatment for home hemodialysis were lower ($133 per treatment) compared with dialysis provided in a facility ($243 per treatment); however, for patients who received six dialysis treatments per week, the provider’s weekly costs for home hemodialysis were higher ($798 for six treatments during the week) compared with dialysis provided in a facility ($729 for three treatments per week). The other two providers reported lower costs per week for home hemodialysis compared with dialysis provided in a facility. However, one of these providers indicated that their home hemodialysis patients only dialyze three times per week, which is not more frequent than patients who dialyze in a facility. Providers also reported varying costs per week for peritoneal dialysis compared to dialysis provided in a facility. Of the five dialysis providers in our review, two providers indicated that their costs per week for providing peritoneal dialysis were higher than the weekly costs of providing dialysis in a facility. In contrast, three of the five dialysis providers in our review indicated that the costs per week of providing peritoneal dialysis were lower than the weekly costs of providing dialysis in a facility. Using one provider’s self-reported average costs per treatment from table 2, the average costs per treatment for peritoneal dialysis were lower ($94 per treatment) compared with dialysis provided in a facility ($251 per treatment) and the weekly costs of peritoneal dialysis were also lower ($658 for 7 days of peritoneal dialysis during the week) compared with dialysis provided in a facility ($753 for three treatments per week). 
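The weekly comparisons above reduce to a single multiplication: average cost per treatment times treatments per week. A minimal sketch using the figures cited from tables 1 and 2:

```python
# Weekly cost = average cost per treatment x treatments per week,
# using the self-reported figures cited above from tables 1 and 2.

def weekly_cost(cost_per_treatment, treatments_per_week):
    return cost_per_treatment * treatments_per_week

# Table 1 provider: home hemodialysis vs. hemodialysis in a facility
home_hd = weekly_cost(133, 6)       # 798: six home treatments per week
facility_hd = weekly_cost(243, 3)   # 729: three facility treatments per week
assert home_hd > facility_hd        # cheaper per treatment, costlier per week

# Table 2 provider: peritoneal dialysis vs. hemodialysis in a facility
home_pd = weekly_cost(94, 7)        # 658: seven days of peritoneal dialysis
facility_hd2 = weekly_cost(251, 3)  # 753: three facility treatments per week
assert home_pd < facility_hd2       # cheaper both per treatment and per week
```

The sketch makes the crossover explicit: a per-treatment discount is outweighed at the weekly level once the added treatment frequency exceeds the ratio of facility to home per-treatment costs.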
Based on self-reported cost information from dialysis providers, the costs per week of providing peritoneal dialysis were lower than the costs of providing hemodialysis in a facility, in part, because costs for drugs, staff, and overhead were lower for peritoneal dialysis patients. As indicated by the dialysis providers’ self-reported cost information, the higher weekly costs of home dialysis for some providers may be due—in part—to the increased frequency of dialysis. For hemodialysis, this is consistent with a 2001 MedPAC report, which estimated that the weekly costs to provide hemodialysis more than three times a week were 15 to 20 percent higher than the weekly costs to provide hemodialysis three times per week. According to dialysis providers, the costs of training patients to dialyze at home can be significant. These costs are exclusively for home dialysis patients as patients who receive dialysis in a facility do not need to be trained. Dialysis providers reported to us that the costs of training patients to dialyze at home are significant because it typically takes 3 to 6 weeks, with up to 5 training sessions a week, to train a patient to perform home hemodialysis (approximately 15 to 30 sessions) and 1 to 2 weeks (approximately 5 to 10 sessions) to train a patient to perform peritoneal dialysis. In addition, training sessions are costly because they require the dedicated attention of one nurse for each training session. Table 3 shows an example of one dialysis provider’s self-reported average costs for a home hemodialysis training session (which includes a dialysis treatment) compared with the average costs of a home hemodialysis treatment session during 2008. At the time of our review CMS officials indicated that they are considering factoring the costs of home dialysis treatments and training into the expanded bundled payment, but the details for the expanded bundled payment are still under development. 
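The approximate session counts cited above follow from multiplying the weeks of training by the number of sessions per week; a small sketch of that arithmetic (the 5-sessions-per-week figure is the upper bound stated above):

```python
# Approximate training sessions = weeks of training x sessions per week.
# The text above cites up to 5 training sessions per week.

def training_sessions(weeks, sessions_per_week=5):
    return weeks * sessions_per_week

# Home hemodialysis: 3 to 6 weeks of training
print(training_sessions(3), training_sessions(6))  # 15 30

# Peritoneal dialysis: 1 to 2 weeks of training
print(training_sessions(1), training_sessions(2))  # 5 10
```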
CMS officials told us that the expanded bundled payment could create incentives for providers to offer home dialysis instead of dialysis in a facility, because although some costs associated with home dialysis may be higher for providers, other efficiencies will offset those costs. However, concerns have been raised that the way in which the expanded bundled payment may account for home dialysis costs might not encourage providers to offer home dialysis, as CMS expects. CMS officials indicated that the agency intends to assess the effect of the expanded bundled payment on home dialysis utilization rates, but CMS has not established formal plans to monitor this utilization. In order to fulfill the requirements of MIPPA, CMS is developing an expanded bundled payment for ESRD services. Beginning in 2011, Medicare will pay for dialysis services using an expanded bundled payment, which will include both services currently paid under the composite rate and services that are separately billable. Although the details of the expanded bundled payment are still under development and subject to change, at the time of our review CMS officials said they were considering giving providers the same payment regardless of whether the dialysis treatments are provided in the patient’s home or at a facility. They noted that a base payment for dialysis services—based on several factors—could be calculated by totaling providers’ costs, including costs for home hemodialysis, peritoneal dialysis, and dialysis in a facility. CMS officials and an official from UM-KECC, the contractor assisting CMS with developing the expanded bundled payment, told us that they will obtain cost information from cost reports that dialysis providers are required to submit to CMS and from Medicare claims for separately billable ESRD-related services. 
Since dialysis providers submit cost reports to CMS, which include the costs of home dialysis, CMS officials told us that the costs associated with home dialysis could be factored into the development of the expanded bundled payment. CMS officials told us that when implemented, the expanded bundled payment could create incentives for providers to offer home dialysis. CMS officials explained that while some costs associated with home dialysis may be higher for providers (such as supplies), these costs will be offset by efficiencies created by lower cost categories for such items as drugs, staff, and overhead expenses. However, CMS officials said they have not conducted an analysis to determine whether these cost assumptions are accurate. Some home dialysis providers and officials we interviewed have raised concerns that the way that CMS is considering accounting for the costs of home dialysis may not encourage use of home dialysis. In particular, concerns have been raised that the cost information CMS and its contractor are using to develop the expanded bundled payment may not account for all of the costs associated with providing home dialysis. For example, one analysis of CMS cost reports found that some providers only report cost information to CMS for the three treatments per week for which Medicare reimburses, even though some home dialysis patients receive more frequent treatments. Also, USRDS officials reported to us that the claims information CMS is using to develop its expanded bundled payment does not always reliably distinguish between the costs for separately billable items and services for home hemodialysis and facility hemodialysis. Concerns have also been raised that the expanded bundled payment might not encourage providers to offer home dialysis depending on how home dialysis training costs are accounted for in the bundled payment. 
At the time of our review, CMS officials noted that they are considering factoring providers’ costs associated with training patients to dialyze at home into the expanded bundled payment rather than providing a separate, additional payment for training patients to dialyze at home. As we noted previously, some providers reported significant up-front costs to start a patient on home dialysis, in part because training for home dialysis requires one nurse to train one patient. Moreover, some home dialysis providers are also concerned that providers will not have an incentive to provide home dialysis if the expanded bundled payment restricts reimbursement to three dialysis treatments per week. Indeed, under the current partially bundled payment system, we found that some home dialysis providers have been granted medical necessity exceptions to receive Medicare reimbursements for additional dialysis treatments beyond three per week. CMS officials told us that they are unlikely to allow these additional reimbursements under the expanded bundled payment system. CMS officials indicated that, after the expanded bundled payment system has been implemented, they plan to assess its effect on home dialysis utilization rates and, if necessary, adjust the expanded bundled payment accordingly. However, CMS officials said that no formal plan to assess the bundled payment system’s effect on home dialysis utilization rates has been established. Some dialysis experts and officials from dialysis providers have estimated that anywhere from less than 10 percent to up to 50 percent of patients could be good candidates to perform dialysis at home—higher than the current home dialysis utilization rate of about 8 percent. In its April 2008 final rule, CMS took steps to encourage home dialysis for appropriate patients, including requiring that patients be informed of all types of dialysis treatments (including home dialysis). 
CMS officials told us that they believe that home dialysis could be encouraged under the forthcoming expanded bundled payment if providers receive the same reimbursement under the expanded bundled payment for dialysis provided in a facility or at home, because the reduced costs of home dialysis for drugs and staff would make home dialysis less costly to provide than dialysis in a facility. However, CMS has not independently verified whether these assumptions are correct. Additionally, some home dialysis providers and officials we interviewed raised concerns about whether a bundled payment would encourage home dialysis, including concerns that the sources of cost information used to calculate the expanded bundled payment rate may not include all of the costs of providing home dialysis, such as the up-front costs associated with training patients to conduct home dialysis and the costs associated with its increased treatment frequency. Furthermore, although CMS has said it plans to monitor the effect of the expanded bundled payment system on utilization of home dialysis, it has not specified how this will be done. For these reasons, we believe that the effect of the expanded bundled payment system on home dialysis utilization rates is uncertain and that it is important to monitor its effect on the utilization of home dialysis. CMS should therefore establish and implement a formal plan to monitor the expanded bundled payment system’s effect on home dialysis utilization rates and to determine whether those rates have increased as CMS expects. In written comments on a draft of this report, CMS concurred with our recommendation to establish and implement a formal plan to monitor the expanded bundled payment system’s effect on home dialysis utilization rates. 
CMS agreed with the need to establish a monitoring plan under the expanded bundled payment system and expects to establish a formal plan after it has promulgated the final rule associated with the ESRD bundled payment system. CMS also commented that our draft report implied that final decisions have been reached by CMS and the Secretary of HHS regarding the details of the expanded bundled payment system. We revised our draft report to clarify that the details of the expanded bundled payment are tentative and still subject to change. CMS also provided a few additional comments. First, CMS noted that one dialysis provider that operates multiple dialysis facilities has recently trained patients to conduct and self-perform hemodialysis in a dialysis facility. We added a reference to this option for dialysis treatment in the report. CMS requested that we clarify information in reference to a MedPAC report on the costs of frequent home dialysis. We made changes as appropriate. Additionally, CMS stated that Medicare claims submitted by dialysis facilities do distinguish home hemodialysis from facility hemodialysis. However, we confirmed with USRDS officials that the claims information does not always reliably make this distinction for separately billable items and services and we clarified this in the report. Finally, CMS noted that when dialysis providers have presented information to CMS regarding the percentage of patients who would be good candidates for home dialysis, these percentages are usually closer to 10 to 15 percent of all dialysis patients. However, medical experts and dialysis providers we interviewed indicated a range of less than 10 percent to up to 50 percent of all dialysis patients could be good candidates for home dialysis, although many of the experts and providers we interviewed estimated that from 15 to 35 percent of all dialysis patients would be good candidates for home dialysis. We have clarified this in the report. 
CMS’s written comments are reprinted in appendix II. We are sending copies of this report to the Administrator of CMS. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made major contributions to this report are listed in appendix III. This report examines (1) the extent to which the costs of home dialysis differ from the costs of dialysis provided in a facility, and (2) the Centers for Medicare & Medicaid Services’ (CMS) plans to account for home dialysis costs in the expanded bundled payment for end-stage renal disease (ESRD) services. To meet our objectives, we conducted interviews with representatives from 12 dialysis providers—including large chain providers, small nonprofit providers, and a hospital-based provider. Based on the officials’ self-reported estimates, these dialysis providers offered dialysis services to approximately 68 percent of all dialysis patients—including an estimated 77 percent of peritoneal dialysis patients and roughly all home hemodialysis patients. To examine the extent to which the costs of home dialysis differ from the costs of dialysis provided in a facility, we obtained cost information from six dialysis providers that we interviewed—including average costs per treatment reported in CMS’s renal facility cost reports for home dialysis and dialysis provided in a facility. Additionally, we requested that the dialysis providers include annual cost information for specific categories of costs associated with providing dialysis. The cost categories that we requested were supplies, overhead, equipment and maintenance, drugs, laboratory tests, staff, and administrative costs. 
We included descriptions of what services should be included in each cost category, basing the descriptions on CMS definitions from the renal facility cost reports. The average costs per treatment reported to us by the dialysis providers did not include the costs of training patients to dialyze at home. At our request, the dialysis providers gave us separate information on the costs of training patients to conduct home dialysis. Six of the 12 dialysis providers we interviewed shared with us cost information for a 12-month period, which ranged from August 2006 through June 2008. In total, we obtained cost information from these 6 providers on the costs for dialysis services provided in nearly 1,600 facilities to approximately 130,000 dialysis patients, including almost 11,000 peritoneal dialysis patients and over 850 home hemodialysis patients. We analyzed the cost information each provider sent to us if the provider had 20 or more patients on either home hemodialysis or peritoneal dialysis. Using this self-reported cost information from the providers, we calculated the percentage difference in average costs per treatment between dialysis provided at home and dialysis provided in a facility (or chain of facilities). We also used the cost information reported to us to calculate the providers’ weekly costs for providing home dialysis and dialysis in a facility. To calculate the weekly costs of home dialysis and dialysis provided in a facility, we multiplied the average cost per treatment by the frequency of the specific type of dialysis. We regard the cost information reported to us as testimonial and we did not independently assess the accuracy of that information. We identify the cost information as self-reported throughout the report, and we did not aggregate or average the self-reported costs across providers. 
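The per-treatment and weekly-cost comparisons described above reduce to simple arithmetic. The following is a minimal sketch of those two calculations; all dollar amounts and treatment frequencies are hypothetical examples for illustration, not the providers' actual self-reported figures.

```python
# Illustrative sketch of the cost comparisons described above.
# All dollar amounts and frequencies are hypothetical, not the
# providers' actual self-reported data.

def weekly_cost(cost_per_treatment, treatments_per_week):
    """Weekly cost = average cost per treatment x treatments per week."""
    return cost_per_treatment * treatments_per_week

def pct_difference(home_cost, facility_cost):
    """Percentage difference in average cost per treatment,
    home dialysis relative to dialysis provided in a facility."""
    return (home_cost - facility_cost) / facility_cost * 100

# In-facility hemodialysis is typically 3 treatments per week, while
# home dialysis is often performed more frequently (here, 5 per week).
facility_weekly = weekly_cost(250.0, 3)   # hypothetical $250/treatment
home_weekly = weekly_cost(200.0, 5)       # hypothetical $200/treatment

print(pct_difference(200.0, 250.0))  # -20.0 (lower cost per treatment)
print(facility_weekly, home_weekly)  # 750.0 1000.0 (higher cost per week)
```

As the hypothetical figures show, a lower cost per treatment can still yield a higher weekly cost once the greater treatment frequency of home dialysis is factored in, which is the pattern some providers reported.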
We also conducted interviews with representatives from the Medicare Payment Advisory Commission and professional organizations, including the National Kidney Foundation, the Renal Physicians Association, the National Renal Administrators Association, and the American Association of Kidney Patients. We also conducted site visits to two dialysis facilities that offered both home dialysis and dialysis in a facility to obtain additional information on how patients are trained to conduct home dialysis as well as to obtain patients' perspectives on factors associated with performing home dialysis. Additionally, to obtain information on the extent to which the costs of home dialysis differ from the costs of dialysis provided in a facility, we examined over 30 articles about the costs of home dialysis published between 2002 and 2008, obtained through a MEDLINE literature search or recommended by representatives we interviewed. We also examined over 27 articles about the clinical outcomes associated with home dialysis published between 2002 and 2008, obtained through a MEDLINE literature search. To examine CMS's plans to account for the costs of home dialysis in the expanded bundled payment, we reviewed CMS's proposed design for the expanded bundled end-stage renal disease (ESRD) payment, outlined in the Secretary of the Department of Health and Human Services' 2008 Report to Congress on the Proposed Design for a Bundled ESRD Prospective Payment System. Additionally, to obtain information on how the costs of home dialysis would be included in the expanded bundled payment, we conducted interviews with CMS and CMS's contractor—the University of Michigan Kidney Epidemiology and Cost Center. We also conducted interviews with dialysis facility officials, dialysis equipment suppliers, and medical experts on home dialysis to obtain their perspectives on the expanded bundled payment. 
We conducted our work from October 2008 through May 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Martin T. Gahart, Assistant Director; George Bogart; Christie Enders; Krister Friday; and Hillary Loeffler made key contributions to this report.
Medicare covers dialysis--a process that removes excess fluids and toxins from the bloodstream--for most individuals with end-stage renal disease (ESRD), a condition of permanent kidney failure. Most patients with ESRD receive dialysis in a facility, while some patients with ESRD are trained to self-perform dialysis in their homes. The Centers for Medicare & Medicaid Services (CMS)--the agency that administers the Medicare program--has taken steps to encourage home dialysis and is in the process of changing the way it pays for dialysis services. Effective 2011, CMS will pay for dialysis services using an expanded bundled payment. The Tax Relief and Health Care Act of 2006 required GAO to report on the costs of home dialysis treatments and training. GAO examined (1) the extent to which the costs of home dialysis differ from the costs of dialysis received in a facility, and (2) CMS's plans to account for home dialysis costs in the expanded bundled payment. GAO obtained information from CMS, the U.S. Renal Data System, ESRD experts, and self-reported cost information from six dialysis providers. The self-reported cost information GAO obtained from dialysis providers--including a large chain provider, small nonprofit providers, and a hospital-based provider--indicated variation in the costs to provide home dialysis when compared with costs to provide dialysis in their facility. The six dialysis providers reported lower costs per treatment to provide home dialysis than to provide dialysis at a facility, though the amount by which home dialysis costs were lower varied widely among the providers. Because patients who dialyze at home typically receive dialysis treatments more than three times per week, some providers' costs to provide home dialysis on a weekly basis can be higher than their costs to provide dialysis at a facility. However, other dialysis providers reported lower costs per week to provide home dialysis compared with dialysis provided in a facility. 
Additionally, several dialysis providers indicated that, for home dialysis patients, the costs of a dialysis treatment with a training session were significantly higher than the costs of a dialysis treatment without a training session. At the time of GAO's review, CMS officials said they were considering factoring the costs of home dialysis treatments and training into the expanded bundled payment, but the details of the expanded bundled payment were still under development and subject to change. CMS officials told GAO that the expanded bundled payment would create incentives for providers to offer home dialysis instead of dialysis at a facility, because although some costs associated with home dialysis may be higher for providers, other efficiencies will offset those costs. For example, although supply costs may be higher for home dialysis, other costs of providing home dialysis--such as drugs, staff, and overhead--will be lower, and thus, in CMS's view, will encourage providers to offer home dialysis. However, concerns have been raised that the way that CMS is considering accounting for the costs of home dialysis in the expanded bundled payment might not encourage providers to offer home dialysis, as CMS expects. For example, some dialysis providers raised concerns that because home dialysis generally consists of more than three dialysis treatments per week--which may result in higher weekly costs to provide home dialysis compared with dialysis received in a facility--providers may not be encouraged to offer home dialysis. CMS officials indicated that CMS intends to assess the effect of the expanded bundled payment on home dialysis utilization rates, but CMS has not established formal plans to monitor this effect.
DOE’s business model relies on contractors to carry out the bulk of the department’s mission activities through management and operating contracts and other site contracts for operations at DOE-owned facilities, while employing federal officials to set mission objectives and provide contract oversight. This business model dates from the Manhattan Project, when federal officials contracted with private companies and universities to develop and produce the atomic bomb. Under this business model, contractors manage and operate DOE facilities—including research laboratories, production and test facilities, and nuclear waste cleanup and storage facilities—located throughout the country. Generally, DOE requires these contractors to be corporate entities formed for the specific purpose of managing and operating a facility and requires the contractors to integrate their accounting systems and budget processes with those of the department. DOE also generally requires contractors that take over a contract to hire the existing contractor workforce at a facility. As a result, with the exception of top managers, the workforce at a facility generally remains in place despite changes in contractors. DOE oversees contractors’ activities through its headquarters program offices—primarily the National Nuclear Security Administration (NNSA), the Office of Environmental Management, and the Office of Science—and site offices located at each facility. Under its business model, DOE reimburses contractors for the allowable costs of employee compensation, including benefits such as pension and other postretirement benefits. DOE is ultimately responsible for reimbursing its contractors for the cost of these benefit plans, and reports a liability or asset in its financial statements for the funded status—that is, plan assets minus plan obligations—of these benefit plans. 
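Funded status is conventionally computed as the fair value of plan assets minus plan obligations, so a decline in asset value lowers it. A minimal sketch of that netting, using hypothetical dollar amounts:

```python
# Funded status of a benefit plan: fair value of plan assets minus
# plan obligations. Negative -> underfunded (reported as a liability);
# positive -> overfunded (reported as an asset). Amounts hypothetical.

def funded_status(plan_assets, plan_obligations):
    return plan_assets - plan_obligations

print(funded_status(950.0, 1200.0))  # -250.0 -> underfunded, a liability
print(funded_status(900.0, 800.0))   #  100.0 -> overfunded, an asset
```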
When site contracts are recompeted or expire, it is DOE’s policy to ensure the continuation of these benefits for incumbent contractor employees and eligible retirees by, for example, requiring the transfer of benefit plan sponsorship responsibilities to a successor contractor or related company. Although other federal agencies use contractors to operate facilities and reimburse those contractors for their allowable benefits costs, DOE is unique in the percentage of its budget that goes to site contractors. For example, while the National Institutes of Health funds a contractor-operated research facility and requires the facility contractor to assume sponsorship of existing employee benefit plans, the agency devotes only about 1.5 percent of its budget toward this contract. In contrast, 90 percent of DOE’s budget goes toward such contracts. As a result, a large increase in reimbursement costs for contractors’ employee benefits is more likely to have a significant impact on DOE’s budget than a similar increase would for agencies devoting a smaller percentage of their budget toward contracts for operating government-owned facilities. DOE’s contractors sponsor pension plans for their employees, including both traditional pension plans, known as “defined benefit” plans, and 401(k) or similar plans, known as “defined contribution” plans. As of September 2010, DOE was responsible for reimbursing contractors for 50 defined benefit plans, including 40 qualified plans and 10 nonqualified plans. Of the qualified defined benefit plans, 37 are private-sector plans while 3 are public-sector plans. DOE’s contractors that sponsor private-sector pension plans must comply with the Internal Revenue Code and the Employee Retirement Income Security Act of 1974 (ERISA), which establishes minimum funding standards for the amounts that private-sector plan sponsors must set aside in advance to pay benefits when they are due. 
DOE’s current policy is to reimburse contractors for the contributions they make to their qualified defined benefit plans and to reimburse contractors for their nonqualified plans on a pay-as-you-go basis. DOE reimburses contractors for their contributions to defined contribution plans as well. DOE’s contractors also sponsor a variety of other postretirement benefits plans. Although these benefits can include dental and life insurance coverage, the majority of DOE reimbursement costs are for retiree health care benefits. As of September 2010, DOE was responsible for reimbursing 41 contractors for retiree health care payments, although the specific benefits offered to retirees varied across contractors. For these other postretirement benefits, DOE’s contractors typically do not set aside funds in advance because, in contrast to requirements for funding pension benefits, there are generally no requirements and few incentives to do so. As a result, DOE reimburses contractors on a pay-as-you-go basis for the amount needed to meet the employer’s annual share of these costs, and these benefit obligations represent a continuing liability for DOE. Since September 1996, DOE Order 350.1, Contractor Human Resource Management Programs, has set forth DOE’s policy for the oversight and reimbursement of contractor benefit plans. In particular, this order requires that DOE determine whether contractors’ benefit costs are reasonable and allowable and therefore reimbursable. To help make this determination, DOE Order 350.1 requires that contractors “benchmark” the value or cost of their total benefit package by conducting either a benefit value or cost study that compares the value or costs of this total benefit package to those of comparable organizations. A small number of contractor pension plans account for a large percentage of DOE’s contractor pension liabilities. 
As shown in figure 1, 12 plans have liabilities—specifically, projected benefit obligations—that exceed $1 billion and account for $31.4 billion, or 86 percent, of the $36.7 billion in total liabilities represented by all DOE contractor qualified defined benefit plans. Within those 12 plans, pension liabilities are concentrated among a handful of contractor plans. NNSA oversees contractors that sponsor 6 of the 12 plans, including the 3 largest plans that, combined, account for over one-third of all DOE contractor pension liabilities. As shown in figure 2, DOE’s costs for reimbursing contractor pension and other postretirement benefits have grown since 2000 and are projected to increase in coming years. From fiscal year 2000 to fiscal year 2010, DOE’s annual costs for reimbursing contractor pension contributions ranged from a low of $43 million in 2001 to a high of $750 million in 2009. Although projections of future contributions are inherently sensitive to underlying assumptions and can change significantly over time, DOE estimates, on the basis of data provided by its contractors in November 2010, that necessary contractor pension contributions may rise markedly in fiscal year 2012—to almost $1.7 billion—in large part because of expected increases among plans with the largest liabilities. Although useful as an indicator of the financial pressures that could lie ahead, this projection is subject to much uncertainty because of factors that could result in changes in the size or timing of needed contributions to meet future years’ funding requirements. Specifically, projections are particularly sensitive to the future economic environment, especially with respect to future interest rates and asset returns, and also could be affected by legislative changes to funding rules. 
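The concentration figure above follows directly from the reported totals; a quick arithmetic check, using the dollar amounts stated in the text (in billions):

```python
# Share of total DOE contractor pension liabilities held by the 12
# largest plans, using the figures reported above (in billions).
top_12_liabilities = 31.4
total_liabilities = 36.7

share_pct = top_12_liabilities / total_liabilities * 100
print(round(share_pct))  # 86
```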
For example, an October 2009 DOE analysis showed that projected minimum required contributions among the 10 largest contractor pension plans could vary by $2 billion or more in any given year during fiscal years 2012 through 2019, depending on changes in interest rates. Although DOE’s reimbursement costs for its contractors’ other postretirement benefits have not fluctuated as widely as contractor pension costs, those costs have grown steadily since 2000 at an average annual rate of 8 percent and are currently projected to rise at a slightly higher rate of 9 percent over the next 5 years. Under its current business model, DOE has limited influence over contractor pension and other postretirement benefit costs. Specifically, contractors sponsor the plans and therefore control the types of benefits offered employees and the investment strategies for allocating pension plan assets; they also determine the amounts paid into plans. In addition, external factors beyond both DOE’s and the contractors’ control, such as economic conditions and changes in statutory requirements, have significant effects on benefit costs incurred by contractors and, in turn, affect the amount of allowable costs that DOE reimburses contractors. Despite these constraints, however, DOE can exercise some limited influence over contractor pension and other postretirement benefit costs through its oversight efforts, reimbursement policy, and contract requirements. DOE has limited influence over contractor pension and other postretirement benefit costs under its current business model because contractors, not DOE, sponsor the plans. As shown in table 1, contractors control the types of benefits offered to employees and the benefit plans’ design. Moreover, contractors are responsible for managing those plans, including selecting strategies used to invest pension plan assets and determining, within statutory requirements, how much is paid into the plans. 
Because contractors control both the design and management of employee benefit plans, their decisions can significantly affect the magnitude of benefit costs and the volatility of pension contributions. Nevertheless, although DOE has limited influence over these decisions, the department is responsible for reimbursing its contractors for the allowable costs of providing pension and other postretirement benefits, including retiree health care, to current and former employees and their beneficiaries. According to DOE documents and officials, contractors decide what type of benefits to provide to their employees and how to design benefit plans, and these decisions are part of an overall compensation strategy devised to recruit and retain the workers they need to fulfill their mission. With respect to pensions, contractors may offer defined benefit plans, defined contribution plans, or both, and the content of the plans may vary. For example, the contractor at DOE’s Oak Ridge National Laboratory offers both a defined benefit plan and a defined contribution plan to all employees, while the contractor at DOE’s Savannah River Site offers a defined benefit plan and a defined contribution plan to employees hired before August 1, 2008, but only a defined contribution plan to employees hired after that time. Contractors may change the pension benefits offered to employees, in accordance with ERISA, and many have been doing so, echoing the overall national trend from defined benefit to defined contribution plans. For instance, the contractor at Sandia National Laboratories changed its benefit package so that non-union employees hired after December 31, 2008, have a defined contribution plan, while existing employees remain participants in a defined benefit plan. Contractors also determine other elements of plan design, such as eligibility and vesting requirements, within the parameters set by the Internal Revenue Code and ERISA. 
Moreover, in the case of defined benefit plans, contractors determine, among other things, the formula used to calculate benefits owed to employees, as well as additional provisions affecting costs, such as early retirement. In the case of defined contribution plans, they determine how much to match employee contributions and what investment options employees will have, among other things. In addition to their control over pension benefits, contractors also control their offerings for other postretirement benefits, and they can change these packages as they deem appropriate, subject to DOE approval for reimbursement purposes. For example, at Sandia and Los Alamos National Laboratories, new hires receive access-only postretirement health care benefits, which means that, as retirees, they will have to pay the plan’s full benefit premiums. At DOE’s Savannah River Site, the contractor has steadily increased its retirees’ share of health care and dental costs since 2003, although the contractor continues to subsidize a portion of the premiums. Contractors also manage the pension plans they offer, and they have a fiduciary responsibility to manage plan assets in the sole interest of the plans’ beneficiaries. The contractors’ fiduciary role takes precedence over their responsibility to DOE and therefore limits DOE’s influence over the plans and the associated costs. Plan management includes selecting investment strategies for defined benefit pension assets. In choosing strategies for investing defined benefit plan assets, contractors make a trade-off between risk and return. For example, bonds, because of their higher correlation to pension liabilities, can decrease the volatility in plan funding and potentially required contributions. On the other hand, equities generally come with greater risk, but also greater expected returns relative to bonds. 
Consequently, investment strategies relying relatively more on equity returns are likely to produce more volatile plan funding and contributions. The performance of contractors’ investment portfolios can affect the contributions contractors make to the plans and, in turn, their reimbursable costs. For example, declines in the fair market value of plan investments decrease the funded status of the plan. In such a situation, contractors may be required to increase their annual pension contributions over a period of years, which DOE may in turn be obligated to reimburse. Moreover, volatile investment returns can result in fluctuations in pension contributions from year to year. As a result, DOE ultimately bears the investment risk incurred by the contractor sponsoring the plan. DOE officials stated that the agency encourages contractors to make investment decisions that reduce volatility but—because DOE’s role is limited to oversight and contractors have the fiduciary responsibility for plan administration—does not provide guidance on how to do so, nor otherwise dictate how contractors should allocate plan assets. Plan management also includes making decisions about funding contractor pension plans. Contractors, not DOE, are responsible for determining, within statutory requirements, the amounts they pay into their benefit plans. Funding requirements vary among the defined benefit plans offered by DOE contractors, making it difficult to obtain a clear picture of pension contribution requirements across plans and over time. For example, three different sets of funding requirements apply to the range of DOE contractor pension plans. Additionally, three contractor pension plans, including the second largest plan, were eligible for special provisions from plan year 2008 to plan year 2010, which reduced their plan liabilities. Figure 3 illustrates the distribution of general funding requirement types among DOE contractor pension plans. 
Contractors, according to the funding requirements applicable to their qualified pension plans, determine the minimum contribution they must make to the plans. A contractor’s minimum contribution is generally considered an allowable cost for reimbursement by DOE because the contractor incurs this cost to meet its contractual obligation with DOE to maintain the pension plan’s eligibility for favorable tax treatment under the Internal Revenue Code. While contractors are obligated to pay only the minimum required contribution, under DOE policy they can also ask the department to reimburse them if they contribute more than the minimum. A contractor might choose to contribute more than the minimum in the current year to, for example, avoid benefit restrictions that would otherwise come into effect on the basis of the pension plan’s funding level. In addition, a contractor might wish to contribute more than the minimum to build credit balances that it could use in future years to try to level the amount it budgets for pension contributions. External factors over which DOE and its contractors have no control, including economic conditions and changes in statutory requirements, can significantly affect contractor reimbursement costs. For instance, changes in economic conditions can significantly affect necessary pension plan contributions, which are determined, in part, by actuarial assumptions about the future—such as employee turnover and compensation increases—that are used to calculate the value of plan assets and liabilities. Furthermore, minimum contribution requirements can vary from year to year as the result of fluctuations in the investment performance of plan assets. For example, the significant decline in value of the financial markets in 2008 caused a considerable drop in plan assets. In addition, changes in interest rates can significantly affect contractor pension contributions. 
For instance, according to DOE officials, a drop in interest rates has contributed to increases in calculated plan liabilities, which, along with other factors, has led to a significant increase in contractor pension contributions. Officials further noted that, in part because of recent economic conditions, some contractors contributed to their pension plans for the first time in years. For example, a contractor at Oak Ridge National Laboratory told us that in fiscal year 2010, in part as a result of the recent financial market crisis and changing interest rates, it budgeted for the first contributions to the site’s defined benefit plan since 1984. The variability in investment returns and interest rates, which influences the calculation of plan contributions, also adversely affects DOE’s ability to accurately forecast the costs of pension contributions in its budget requests. Changes in economic conditions can also affect other postretirement benefit costs. Changes in health care and other cost trends can influence the cost of these benefits and, in turn, the amount that must be reimbursed by DOE. For example, officials at DOE’s Savannah River Site explained that as health care costs increase nationally, the cost of providing retiree health care to their employees has also increased. Los Alamos National Laboratory officials attributed their rising health care costs to a variety of factors, including increases in emergency room and radiology costs. Changes in statutory requirements can also have a significant effect on contractor benefit costs. For example, the funding requirements that govern a large majority of DOE contractor private-sector pension plans have been changed or significantly amended over the last 5 years. One of the most sweeping amendments to ERISA and the minimum-funding rules occurred with the passage of the Pension Protection Act of 2006. 
This act—prompted, in part, by the default of several large pension plans—increased the minimum funding requirements for pension plans and sought to strengthen the private pension system. Many of the funding rule changes for single-employer plans came into effect slowly, however, and included special rules that provided funding relief for certain plan sponsors. Additionally, almost as soon as the act began to take effect in 2008, the economy weakened, and further statutory and regulatory changes occurred that had the overall effect of reducing or delaying pension contributions that would otherwise have been required. The number and timing of statutory and regulatory changes since the enactment of the Pension Protection Act of 2006 make it difficult to determine how much of an effect the law has had on contractor contribution requirements. During our site visits, some contractors reported an increase in their minimum required contributions since the act’s implementation, but these increases are the combined result of multiple factors, including economic and demographic experience and legislative changes. Contractor costs for other postretirement benefits can also be influenced by changes in statutory requirements. For instance, according to DOE documentation, passage of the Patient Protection and Affordable Care Act in 2010 may affect contractors’ other postretirement benefits in a variety of ways, such as by levying an excise tax on high-cost health plans. Despite the constraints posed by DOE’s current business model, our analysis of DOE documents—including department policy, budget documents, and contract provisions—indicates that the agency has several means—oversight efforts, reimbursement policy, and contract requirements—by which it can exercise some limited influence over contractor pension and other postretirement benefit costs (see table 2). 
While DOE will ultimately have to reimburse the cost of contractor pension benefits that have already been accrued, it can use these means to exert some influence over future benefit costs. First, DOE decides its degree of oversight over benefit costs, including the amount of detailed information related to these costs that it collects and reviews and the extent to which it communicates that information to department officials and congressional decision makers. While DOE’s oversight efforts do not control costs directly, according to department officials, they help increase awareness of cost management on the part of both the department and contractors and can encourage discussion between the two on ways to mitigate costs where appropriate. Specifically, DOE determines the amount of information contractors must provide about benefit costs and the frequency with which they must do so, the degree of departmental review, and how readily available that information is to decision makers. For instance, DOE policy requires contractors to periodically assess their benefit packages and submit the results of these evaluations to the department. Generally, the contractor must take corrective action if the value of the benefit package exceeds 105 percent of comparable companies’ plans. Moreover, DOE requires contractors to provide the department with cost projections and information on direct and indirect costs as they relate to pension and other postretirement benefits. Second, through its reimbursement policy, DOE sets requirements that determine the extent to which contractor benefit costs qualify for reimbursement. 
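The 105-percent benchmarking test described above amounts to a simple threshold comparison. The sketch below illustrates it under stated assumptions: the 105 percent threshold is as described in DOE Order 350.1, while the function name and package values are hypothetical.

```python
# Sketch of the benchmarking test under DOE Order 350.1 as described
# above: corrective action is generally required if the value of a
# contractor's total benefit package exceeds 105 percent of that of
# comparable organizations. Names and values are illustrative only.

BENCHMARK_LIMIT = 1.05  # 105 percent of comparable organizations' value

def requires_corrective_action(contractor_value, comparator_value):
    """True if the package value exceeds 105% of the comparator value."""
    return contractor_value > BENCHMARK_LIMIT * comparator_value

print(requires_corrective_action(108.0, 100.0))  # True
print(requires_corrective_action(104.0, 100.0))  # False
```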
In April 2006, in response to growing liabilities for contractor employee benefits, DOE issued Notice 351.1, which provided that the department would continue to reimburse contractors for the allowable benefit costs for incumbent employees and eligible retirees but limit reimbursement for new employees to the costs of "market-based" pension and health benefit plans. A pension plan was deemed market-based when, among other things, the plan was a defined contribution plan. In June 2006, however, DOE suspended the notice, and in response to stakeholder and congressional concerns, subsequently decided not to reissue it. Although the policy change was ultimately reversed, it demonstrated that DOE can potentially use its reimbursement policy to exercise some influence over contractor benefit packages. Currently, DOE policy provides for reimbursement of a contractor's minimum required pension contribution while giving program offices the discretion to approve higher reimbursement levels. In addition, in accordance with department policy, contractors must obtain DOE approval for any plan changes that can affect reimbursement costs. In asking to change a plan, a contractor must submit justification that, among other things, estimates savings or costs and provides the basis for this determination. Third, DOE establishes contract requirements that determine the degree of flexibility contractors have in structuring and modifying their benefit packages and, through these requirements, can exercise some influence over contractor decisions on benefits. Although DOE cannot unilaterally alter an existing contract, it can negotiate with a contractor to include new provisions in an existing contract, as well as in a contract that is extended or newly awarded. By contractually obligating successor contractors to assume sponsorship of existing benefit plans, DOE has generally required that benefits be continued for existing employees and eligible retirees. 
Since 2005, however, the department has used a contract provision requiring contractors to provide market-based pension and health care benefit plans for new employees. As a result, some contractors have shifted from providing all employees defined benefit plans to offering new employees defined contribution plans, and some contractors have also stopped providing other postretirement benefits to new employees. For example, in 2006, the new contractor that assumed responsibility at Los Alamos chose to offer new employees only a defined contribution plan, while giving incumbent employees who worked at the site before the transition the option of participating in a defined benefit plan or the defined contribution plan. The contractor at Savannah River also closed its defined benefit plan to employees hired after 2009 and, in addition, ceased offering some other postretirement benefits to new employees. According to NNSA officials, NNSA is now exploring a further shift in contract requirements for sites it oversees, to allow successor contractors to alter existing employees' benefit packages. Since the economic downturn deepened in 2008, DOE has taken steps to enhance its management of contractor benefit costs—particularly for contractor pensions—but gaps remain in its approach. Before 2008, DOE had made some changes but had not exercised the full range of measures at its disposal. Since then, DOE has taken additional steps to address its approach to contractor benefit costs, but more could be done. Specifically, DOE has strengthened its oversight of contractor pension costs, but it has yet to review its approach to overseeing other postretirement benefit costs or to clearly inform Congress of those costs and their potential impact on mission work. As a result, DOE may be delayed in improving its oversight of those benefits and may fail to provide Congress with important information that could inform annual funding decisions. 
DOE has, for the most part, used the same contract requirements since 2005 and the same reimbursement policy since 1996, but it lacks clarifying guidance to ensure a consistent approach to evaluating contractor benefit costs. As a result, DOE is unable to ensure that program offices apply that policy consistently, and it continues to reimburse contractors for benefit packages that have exceeded its standard for a prolonged period. DOE has strengthened its oversight of contractor pension costs by changing how it collects, analyzes, and communicates information on those costs. First, in a January 2010 memo, the department announced the creation of an annual review process to more systematically analyze the status of each contractor's pension plan and the contractor's strategy for managing the plan. Second, DOE created a central database in October 2010 to regularly collect and report information on contractor benefit costs. Third, DOE increased the information it communicates to Congress on contractor pension costs by adding an explanation of those costs to its fiscal year 2011 budget request. The department has done less on other postretirement benefit costs, however. DOE officials had stated that they expected to begin a review in spring 2010 of the department's approach to other contractor benefits similar to the one done for contractor pensions, but as of January 2011, the department had not followed through with these plans. Moreover, DOE has not added information to its budget request on its contractors' nonpension postretirement benefit costs. While contractor pension costs have risen sharply since 2009, the cost of other postretirement benefits is also significant and growing. In January 2010, DOE set up an annual review process for contractor pensions that allows the department to more systematically analyze contractor pension data and each contractor's strategy for managing its plan. 
Specifically, DOE guidance requires each contractor to submit a standard report on its pension plans at the start of each year. This report must include information on the plan’s current funding status and the contractor’s estimates for how much it will need to contribute to the plan during the current fiscal year and the 4 fiscal years after that. In addition, the contractor must provide the key assumptions and methods used to develop its estimates. If the estimates indicate that a plan’s funding level could drop enough to force the contractor to impose benefit restrictions, the contractor must describe the impact of the benefit restrictions, the number of employees the restrictions might affect, the additional funds needed to avoid those restrictions, and whether it recommends contributing those funds to the pension plan. The contractor must also include an assessment of its pension plan’s investment management and the results of its current investment strategy. After contractors submit this report, DOE guidance requires contractor personnel to meet with department officials from headquarters and the field to discuss the contractor’s pension strategy and the reasons for any differences between its current pension contribution estimates and prior estimates. In addition, DOE officials and contractor personnel are to discuss how the contractor intends to increase the predictability of its pension contributions and contain current and future costs. According to DOE guidance, a goal of this annual review process is to improve the accuracy and predictability of DOE budget forecasting for funding its contracts by requiring contractors to provide their estimated contribution amounts to their pension plans, both for the immediate year and the subsequent years, on the basis of a range of actuarial assumptions. 
In addition, these discussions are meant to help DOE share information concerning contractor costs with key stakeholders across the department. DOE has also created a central database to regularly collect information on contractor benefit costs, which is intended to facilitate analysis, as well as ensure current reporting on those costs. DOE set up the database in October 2010 and is requiring contractors to regularly update information on their benefit plan costs and characteristics. Specifically, DOE guidance requires contractors to report on, among other things, their current pension assets and liabilities and 5-year budget projections for pension and other benefits. Before implementing the database, DOE relied on ad hoc data requests to contractors to collect information on pension plans and other postretirement benefits. For example, in 2010, DOE requested pension and other benefit data from each contractor and shared that information in the form of site-specific "snapshots" to all of its contractors and program offices. In contrast to these data requests, which DOE and contractor officials at several sites found redundant or time-consuming, DOE guidance explains that the database is intended to provide a structure for capturing information obtained through the annual pension review process, as well as to expand data collection to other contractor benefits and readily report information on those benefits. By scheduling regular data requests and storing information in a central system, DOE has taken actions that help to streamline the data collection process and facilitate analysis and up-to-date reporting on contractor benefit costs. While DOE has reviewed its approach to overseeing contractor pension plans, it has yet to devote a similar level of attention to other postretirement benefits. 
While our analysis of DOE financial data indicates that other postretirement benefit costs have generally been less volatile than those of pension plans, these costs have steadily risen over the last 10 years, amounting to $385 million in fiscal year 2010. According to federal standards for internal control, federal agencies are to employ internal control activities, such as top-level review, to help ensure that management's directives are carried out, to determine whether they are using resources effectively and efficiently, and to assess risks from both internal and external sources. Consistent with these standards, DOE has collected and analyzed some information on the risk it faces from other postretirement benefits. For example, in May 2010 DOE issued a summary of its analysis on contractor pension plan and other benefits, including postretirement health care benefits. But it has not comprehensively reviewed its approach to overseeing those benefits or correspondingly changed its policy on how it manages other contractor benefit costs. DOE officials had stated that they expected to begin a review of benefits other than pensions in spring 2010. As of January 2011, however, DOE had yet to begin its planned review, according to an agency official, because of the department's continuing work on contractor pensions. This official stated that the department still planned to review its approach to other postretirement benefits, but it was not clear when the review would begin. Without comprehensively reviewing its approach to overseeing other contractor benefits, including postretirement benefits other than pensions, DOE may be delayed in improving its oversight of those benefits and identifying policy options that might reduce or better address the growth of reimbursement costs. 
For example, in a 2004 report on this topic, we recommended that DOE incorporate into its oversight process a focus on the long-term costs and budgetary implications of decisions pertaining to each component of contractor benefit programs, especially pension and postretirement health benefits, which have budgetary requirements beyond the current year. DOE has taken steps to incorporate such a focus into its oversight of contractor pension costs through its annual review process, but it has yet to incorporate a similar focus on long-term costs and budgetary implications into its oversight process for other postretirement benefit costs. Further, while DOE reimburses other postretirement benefits on a pay-as-you-go basis, an option for addressing its liabilities is to reimburse contractors for prefunding some or all retiree benefits, particularly those associated with health care, before employees retire. By reimbursing contractors for prefunding these benefits, DOE may be able to reduce the unfunded liability reported in its financial statements and take advantage of the compounding effects of investment returns on plan assets. Nevertheless, while prefunding more effectively recognizes costs when the associated work is being performed, in the short term prefunding might require higher contractor contributions, which would in turn increase DOE’s short-term reimbursement costs. In addition, opportunities for prefunding other postretirement benefits and nonqualified pension benefits are more restricted than for tax-qualified benefits. By not comprehensively reviewing its approach to its contractors’ other postretirement benefits, DOE has yet to systematically weigh the advantages and disadvantages of these and other potential policy changes that might enhance its approach. 
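The prefunding tradeoff described above comes down to the time value of money: a dollar set aside today grows with investment returns, so less than the full benefit amount is needed up front. The sketch below illustrates that compounding with hypothetical figures and a single constant assumed return; it is not DOE's or any plan's actual actuarial method, which involves many more assumptions.

```python
def prefunding_amount(benefit_payment, annual_return, years):
    """Amount set aside today that grows to cover one future benefit payment.

    Illustrative only: assumes a constant annual return and a single
    payment date, far simpler than real actuarial funding calculations.
    """
    return benefit_payment / (1.0 + annual_return) ** years

# Hypothetical example: a $10,000 retiree health payment due in 20 years,
# with plan assets assumed to earn 5 percent a year, could be prefunded
# today for roughly $3,769 -- the compounding effect the report describes.
set_aside_today = prefunding_amount(10_000, 0.05, 20)
```

Under pay-as-you-go, by contrast, the full $10,000 would be reimbursed at the time of payment, which is why prefunding can reduce reported unfunded liabilities while raising near-term reimbursement costs.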
Moreover, although DOE has taken steps to improve its communication to Congress of key information concerning contractor pension costs, it has yet to provide similar information on the costs of other postretirement benefits and their potential impact on mission work. DOE expanded the information it provides to Congress in its fiscal year 2011 budget request by adding a discrete section explaining contractor pension costs. This section outlined contractor pension costs for the upcoming fiscal year and 2 prior years by program office and site. In addition, the section included a discussion of the challenges and risks DOE faces in managing contractor pension costs. The addition of this information is an improvement over prior budget requests, which included only isolated references to contractor pension costs and did not provide an agencywide picture of the magnitude of those costs. But DOE did not provide the pension information in a format consistent with the appropriation accounts that Congress uses to provide funding to the department. As a result, the information may be less useful to Congress than it otherwise could be. Moreover, DOE did not include agencywide information on other postretirement benefit costs in its fiscal year 2011 request, nor did it add such information to its fiscal year 2012 request. Yet DOE reimbursements to contractors for other postretirement benefits have risen steadily from roughly $306 million in fiscal year 2005 to $385 million in fiscal year 2010. By not including an explanation of these costs in its budget request, DOE is not providing Congress with complete information on the full cost of its contractor retirement benefits and their potential impact on the resources DOE has available to accomplish its mission work. As a result, Congress lacks important information that could inform its annual funding decisions. 
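To put the reported growth in perspective, the rise from roughly $306 million in fiscal year 2005 to $385 million in fiscal year 2010 implies a compound annual growth rate of about 4.7 percent. That figure is not stated in the report; it follows from the two endpoints, as this one-line calculation shows:

```python
def annualized_growth(start_value, end_value, years):
    """Compound annual growth rate implied by two observations."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Reimbursements for other postretirement benefits (dollars in millions),
# per the report: about $306 in FY 2005 and $385 in FY 2010 -- a 5-year
# span, implying roughly 4.7 percent annual growth.
rate = annualized_growth(306.0, 385.0, 5)
```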
DOE has, for the most part, used the same contract requirements since 2005 and the same reimbursement policy since 1996, but it lacks complete guidance on how program offices should evaluate contractor requests to contribute more than the minimum required to their pension plans, and it also lacks a comprehensive timeline for modifying contractor benefit packages with values that exceed DOE standards. DOE has inserted language into new and renewed contracts that ties the reimbursement of contractor benefit costs for new employees to market-based benefit packages and increases how frequently contractors must assess their benefit packages. DOE Order 350.1 does not reflect these updated requirements, although DOE officials said the department plans to revise the order by removing certain sections and adding applicable provisions to its acquisition regulations. But because of procedural delays and the sensitive nature of the order's content, officials stated they do not expect to complete this revision for some time. Aside from a brief change in 2009, DOE has largely maintained its reimbursement policy for contractor benefits. The Office of Management and Budget's implementing guidance emphasizes the need for agencies to develop policies that ensure the effectiveness and efficiency of their operations. In keeping with this guidance, DOE's policy, as reflected in DOE Order 350.1, is to reimburse contractors for the minimum amount required by ERISA or more on a case-by-case basis. Also in keeping with Office of Management and Budget guidance, DOE reimbursement policy defines the reasonableness of reimbursement costs by requiring contractors to benchmark the value of pension and other benefits with those of comparable companies and to reduce the value of benefits if they exceed the overall benchmarked average by more than 5 percent. 
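The benchmark test in DOE's reimbursement policy can be sketched as a simple comparison: a package triggers corrective action when its value exceeds the comparator group's average by more than 5 percent, that is, when its score tops 105 percent. The values below are hypothetical; the actual value studies rest on detailed actuarial valuations of each benefit package.

```python
def corrective_action_required(package_value, comparator_values, threshold=1.05):
    """Return the value-study score and whether it exceeds DOE's standard.

    Hypothetical illustration of the benchmark described above: the score
    is the package's value relative to the average of the comparator
    group, and corrective action is triggered above 105 percent.
    """
    benchmark = sum(comparator_values) / len(comparator_values)
    score = package_value / benchmark  # e.g., 1.08 means 108 percent
    return score, score > threshold

# Hypothetical numbers: a package valued at 108 against a comparator
# average of 100 scores 108 percent and exceeds the 105 percent standard.
score, needs_correction = corrective_action_required(108.0, [95.0, 100.0, 105.0])
```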
Nevertheless, DOE lacks complete guidance on how its program offices should evaluate contractor requests to contribute more than the minimum required to their pension plans in order to carry out this policy. In particular, DOE has not outlined a standard process or criteria for evaluating requests to contribute more. Notwithstanding the Office of Management and Budget's guidance, DOE program officials we interviewed were not aware of any departmentwide guidance on factors to consider when deciding to approve or deny contractor requests to contribute more than the minimum. For instance, DOE does not specify whether program offices should place a higher priority on minimizing contribution volatility or reducing cost when evaluating contractor requests. As a result, DOE's program offices use different evaluation procedures and may not consider the same factors when deciding whether to approve or deny contractor reimbursement requests. By using different evaluation procedures, DOE program offices may implement DOE's reimbursement policy inconsistently. For example, according to DOE officials, NNSA and the Office of Science have generally approved contractor requests to contribute additional funds to their pension plans for reasons such as leveling site or program office budget costs, while the Office of Environmental Management has generally denied such requests and instead directed those additional funds toward mission work. In particular, one site official stated that Environmental Management's denial of a contractor request to contribute more than the minimum in 2010, with the intent of reducing future reimbursement costs, prompted the site to alter its planned budget allocations and mission work. Additionally, this denial resulted in a drop in the plan's funding to a level at which plan restrictions went into effect for employees. In contrast, NNSA approved a similar request aimed at managing the anticipated rise in future pension costs. 
NNSA officials stated that in making these decisions, the office considers whether the contractor has made a compelling case that the higher contribution will reduce or level future budget costs. According to another DOE official, the Office of Science uses a set of criteria based on prior pension management performance, as well as an analysis of the contractor's assumptions and investment strategies. An official from the Office of Environmental Management stated that, unlike NNSA and the Office of Science, the office will not reimburse pension contributions exceeding the minimum unless funding at the minimum level would restrict benefits. Without standard guidance for its program offices, DOE is unable to ensure that its offices are deciding on contractor requests on the basis of consistent criteria reflecting departmentwide goals for managing contractor pension costs. As a result, program offices may not systematically consider both near-term mission needs and potential spikes in future reimbursement costs when reaching their decisions. In addition, DOE's existing process for correcting contractor benefit packages that exceed its reimbursement standard is incomplete. Specifically, DOE lacks a comprehensive timetable for when a contractor must modify the value of its benefit package to fall within DOE's reimbursement standard. DOE requires contractors to regularly assess whether the value of their benefit packages is reasonable relative to comparable companies and to take corrective actions if they do not meet that standard. Specifically, DOE guidance requires contractors to implement corrective action plans if the assessments, known as value studies, show that the value of a contractor's benefit package exceeds the average value of 15 selected entities in similar lines of industry by more than 5 percent. 
DOE guidance stresses that the goal of the value study is to measure the relative worth of a contractor's total benefits package, regardless of the actual payroll costs associated with the benefits. DOE guidance requires contractors to implement corrective action plans within 2 years, but the guidance does not define when contractors must submit those plans or when DOE contracting officers must decide whether to approve them. As a result, some contractors with benefit package values exceeding the 105 percent standard may spend several years developing corrective action plans. From our analysis of DOE data, of the 20 contractor benefit packages most recently assessed as exceeding the 105 percent standard, 3 were being corrected as of February 2010. Of those 3 benefit packages, one is expected to be reduced below DOE's standard, but another is expected to exceed DOE's standard even after the contractor finishes taking its corrective actions, and it is unclear whether the third benefit package's corrective actions will bring its score below DOE's standard. Contractors for 5 benefit packages that exceed DOE's standard, some from as early as 2008, have yet to implement corrective action plans, either because the contractors are developing them or because DOE has yet to approve them. For example, one contractor whose August 2008 value study showed its benefits exceeding the threshold value does not yet have an approved corrective action plan, more than two and a half years after discovering its benefits were too high. According to a DOE document, the contractor submitted a corrective action plan that was disapproved by NNSA and, after analyzing different alternatives at NNSA's request, decided to resubmit its original plan for reconsideration. As of February 2011, NNSA had notified the contractor that approval of its corrective action plan was being deferred pending the results of the contractor's 2011 value study. 
In another instance, DOE directed a contractor to develop a corrective action plan in May 2010 after the contractor's July 2009 value study exceeded DOE's standard. According to a DOE document, the contractor submitted a corrective action plan in September 2010, but that plan has yet to be approved by NNSA because the plan, as submitted, was lacking in detail. As a result, only one contractor with benefit packages exceeding DOE's standard for the most recent evaluation period is expected to bring its benefits in line with DOE's requirements. Furthermore, DOE guidance states that, on the basis of a contractor's written justification, contracting officers may waive the requirement for contractors to develop a corrective action plan. But neither DOE policy nor guidance provides details on the process the contracting officers should use or the factors they should consider when deciding whether to waive the corrective action plan requirement. Moreover, the DOE headquarters offices with responsibility for overseeing contractor human resource management issues are not required to review the contracting officers' decisions to issue waivers. In addition, a DOE official stated that the agency does not have departmentwide criteria for evaluating contracting officers' rationale for waiving corrective action. As a result, DOE lacks assurance that contractor requests to waive corrective action plans are being consistently evaluated across the department or that decisions to allow benefit plans to remain above DOE's standard—sometimes significantly—are based on departmentwide criteria. For the most recent value study, our analysis of DOE data showed that contracting officers issued waivers to eight contractors for a range of reasons, including marginal differences between DOE's standard and the contractor's score and recognition of a contractor's previous efforts to reduce its score. 
Also according to DOE data, officials waived the requirement for one contractor whose score exceeded the DOE standard in part because of opposition from the site's employee group. As a result, contractors whose scores exceed DOE's standard may remain above that level for an undefined period and continue to accrue liabilities and be reimbursed for the cost of benefits that may not meet DOE's standard. Given DOE's long history of using contractors to accomplish its mission and its growing unfunded liabilities for contractor pension and other postretirement benefits, it is important that DOE manage its contractual obligations associated with those benefits so as to ensure both the successful accomplishment of its mission objectives and the cost-effective use of government resources. While contractor retirement benefits are only one piece of total contractor compensation, in an era of federal budget constraints, DOE will likely continue to face significant challenges managing the costs of those benefits and mitigating their impact on funding available for the department's mission activities. In particular, in some cases it will have to reimburse the costs of the substantial pension liabilities its contractors have accumulated over decades. While the volatility of pension contributions and the growth in other postretirement benefit costs are not unique to DOE's contractors, the department's extensive reliance on contractors and its limited influence over their benefit packages make the department's budget particularly sensitive to these factors. DOE's recent review of contractor pension plans and the resulting oversight and transparency improvements are positive steps. Nevertheless, DOE has yet to comprehensively review its approach to managing other postretirement benefit costs as it has for contractor pensions, although the cost of these benefits is growing and could put pressure on the department's budget in coming years. 
Without comprehensively reviewing its approach to managing other contractor benefit costs, DOE may miss opportunities to make policy changes that could improve oversight, enhance efficiency, and potentially reduce its reimbursement costs in the future. Moreover, given the potential magnitude of contractor benefit costs, it is important that DOE keep Congress informed about amounts budgeted for all such costs, the factors that affect those costs, and the department’s plans for mitigating possible mission impacts if contractor benefit costs rise. DOE is collecting this information from its contractors but, with the exception of defined benefit plans, has yet to provide Congress with agencywide information on contractor benefit costs for use in annual budget deliberations. Without this information, policymakers will not have a full understanding of the context in which they are making funding decisions or of how benefit reimbursement costs might affect the department’s mission work in coming years. It is also important that DOE consistently apply its policies for overseeing and reimbursing contractor benefit costs to ensure timely compliance by all contractors. Without consistent criteria for program offices to consider when evaluating contractor requests to contribute more to their pensions than the minimum required by law, department management lacks assurance that its offices are systematically considering both near-term mission needs and potential spikes in future reimbursement costs when reaching their decisions. Furthermore, without a defined timetable for when corrective action plans need to be in place or clear criteria for DOE contracting officers to use in deciding to waive corrective action, DOE will continue to have contractor benefit packages with values exceeding its standards and will accrue additional liabilities—which the department must ultimately reimburse—for an extended period of time. 
In addition, without headquarters review of contracting officer decisions to waive corrective action plans, DOE lacks assurance that contractor waiver requests are being evaluated consistently across the department.

We recommend that the Secretary of Energy take the following four actions:

Conduct a comprehensive review, similar to the review of contractor pensions, of the department's approach to managing other contractor benefit costs, including other postretirement benefits, and evaluate options for improving oversight and better managing the cost of these benefits.

Expand the information provided to Congress during its annual budget deliberations to include, for example, nonpension postretirement benefit costs by site, program office, and appropriation account, as well as a discussion of factors that affect these contractor benefit costs and DOE's plans for managing those costs in coming years.

Issue guidance to program offices overseeing contractors with defined benefit plans that defines criteria to be considered when evaluating contractor requests to contribute more than the minimum to their pension plans.

Clarify existing guidance on correcting contractor benefit packages that exceed DOE's standard by (1) establishing a defined timeline by when contractors must submit corrective action plans to their DOE contracting officer if the value of their benefit package is determined to exceed DOE's standard, as well as a timeline for when DOE contracting officers must reach a decision on such plans; (2) developing criteria for contracting officers to use when deciding whether to waive a required corrective action plan; and (3) requiring review of these contracting officer decisions by the responsible headquarters office to help ensure consistent application of the criteria across the department.

We requested comments on a draft of this report from the Secretaries of Defense, Energy, and Health and Human Services, and from the Administrator of the National Aeronautics and Space Administration. 
The Secretaries of Defense and Health and Human Services and the Administrator of the National Aeronautics and Space Administration had no comments. On April 18, 2011, we received written comments from the Department of Energy, which are summarized below and reprinted in appendix I. In addition, DOE provided technical comments, which we incorporated in the report as appropriate. In its written comments, DOE did not state whether it concurred with our findings. DOE agreed with three of our recommendations but disagreed with the recommendation that the Secretary of Energy issue guidance to program offices that defines criteria to be considered when evaluating contractor requests to contribute more than the minimum to their pension plans. DOE stated that more stringent guidance regarding the use of additional funds is not needed and that each program office is best suited for determining whether additional contributions are the best use of funds in a given year. We did not recommend that DOE issue more stringent guidance or that program offices should have less flexibility in deciding whether to approve or disapprove contractor requests. Rather, we noted that DOE lacks complete guidance to its program offices on the common factors that they should consider when making their decisions. We agree that program offices may reasonably come to different decisions given their particular circumstances. Nevertheless, we continue to believe that DOE should provide a consistent set of factors for program offices to consider when making those decisions. Without such criteria, DOE lacks assurance that program offices are systematically considering both near-term mission needs and potential spikes in future reimbursement costs when reaching their decisions. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies to the appropriate congressional committees; the Secretaries of Energy, Defense, and Health and Human Services; the Administrator of the National Aeronautics and Space Administration; and other interested parties. The report will also be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. The number of current Department of Energy (DOE) contractor defined benefit plans open to new entrants has been dropping over the last decade, particularly since 2005, while the number of frozen plans has increased. Of the 40 tax-qualified defined benefit plans currently sponsored by DOE contractors, only 1 was frozen as of 2000 (see fig. 4). By 2006, about one-fourth (9) of currently sponsored tax-qualified DOE contractor defined benefit plans were frozen in some way. By 2010, of the 40 tax-qualified defined benefit plans sponsored by DOE contractors, 21 were frozen in some way, and 19 plans were open to new entrants. This trend in plan freezes over time is similar to the trend discussed in another report, which found that, among currently frozen plans nationwide, half of plan freezes were implemented after 2005. Table 3 shows the investment allocations of each DOE contractor tax-qualified defined benefit plan by percentage and dollar values as of September 30, 2010. As noted in the report, contractors—not DOE—are responsible for selecting strategies used to invest pension plan assets. Each tax-qualified, DOE contractor defined benefit plan is unique in its investment allocations. For example, certain plans have as much as 73 percent of plan assets invested in equities, whereas a few plans have no equity investment.
The overall mix of assets across DOE contractor plans is 58 percent equities, 33 percent bonds, and 9 percent other assets (see last row of table 3). An asset allocation of 60 percent equities and 40 percent bonds is often considered a “typical” asset allocation for many defined benefit plans. Other key contributors to this report were Kimberley M. Granger and Diane G. LoFaro, Assistant Directors; Charles Ford; David Marroni; Ken Stockbridge; Fatema Wachob; and Marie Webb. Important contributions were also made by Joseph A. Applebaum, Ellen W. Chu, Charles Jeszeck, Mehrzad Nadji, Robert Owens, Cheryl Peterson, Anne Rhodes-Kline, Christopher Ross, Cynthia Saunders, Kiki Theodoropoulos, Roger Thomas, Frank Todisco, Craig Winslow, Melissa Wolf, and William Woods.
The Department of Energy (DOE) relies on contractors to conduct its mission activities. DOE reimburses these contractors for allowable costs, including the costs of providing pension and other postretirement benefits, such as retiree health care plans. Since the economic downturn, DOE has had to devote significantly more funding toward reimbursing these benefit costs, in part because of a decline in interest rates and asset values that has increased contractor pension contributions. In a challenging budgetary environment, further growth in these costs could put pressure on DOE's mission work. GAO was asked to report on (1) the level of control DOE has over contractor pension and other postretirement benefit costs under its current business model and (2) the changes DOE has adopted since the national economic downturn to manage those costs and the extent to which those changes have enhanced its approach. To do so, GAO reviewed relevant laws, regulations, and DOE guidance; analyzed agency financial data; and interviewed officials. Under its current business model, DOE has limited influence over contractor pension and other postretirement benefit costs. For example, contractors sponsor benefit plans and, as a result, control the types of benefits offered to their employees and the strategies for investing pension plan assets. DOE nevertheless ultimately bears the investment risk incurred by the contractors. Moreover, external factors beyond both DOE's and the contractors' control, such as economic conditions and changes in statutory requirements, can significantly affect benefit costs. For example, the investment performance of plan assets can affect pension contributions, while changes in health care law can affect postretirement benefit payments. 
Even with these constraints, however, DOE can exercise some influence over contractor pension and other postretirement benefit costs through its oversight efforts, reimbursement policy for contractor benefit costs, and contract requirements. Still, the department will ultimately have to reimburse the cost of contractor pension benefits that have already been accrued. Since the economic downturn deepened in 2008, DOE has taken steps to enhance its management of contractor benefit costs--particularly for contractor pensions--but has not comprehensively reviewed its approach to managing its contractors' other postretirement benefit costs, such as retiree health care coverage. In addition, DOE has not added agencywide information on the costs of its contractors' other postretirement benefits to its annual budget request. As a result, DOE may be delayed in identifying options that might better address the growth of its reimbursement costs and may not provide important information to Congress that could inform annual funding decisions. Moreover, while DOE has, for the most part, continued to use the same reimbursement policy and contract requirements from before the economic downturn, it lacks complete guidance on how program offices should evaluate contractor requests to contribute more than DOE's minimum requirement to their pension plans. DOE is therefore unable to ensure that its offices decide on contractor requests on the basis of consistent criteria reflecting departmentwide goals for managing contractor pension costs. In addition, DOE's existing process for having contractors align their benefit packages with DOE's reimbursement standard is incomplete. Specifically, DOE lacks a comprehensive timetable for when contractors must modify benefit packages whose values exceed DOE's standard. 
As a result, only 1 of the 16 contractors with benefit packages exceeding DOE's standard for the most recent evaluation period is expected to bring its benefits in line with that standard. Further, DOE guidance allows contracting officers to waive the requirement for contractors to correct benefit packages exceeding DOE's reimbursement standard, but does not detail the criteria contracting officers should follow in making that decision or require a review by DOE headquarters. As a result, some contractors may continue for an undefined period to accrue liabilities and be reimbursed by DOE for benefit packages exceeding the department's reimbursement standard. GAO recommends, among other things, that DOE comprehensively review how it manages contractor postretirement benefit costs and define criteria for evaluating contractor requests to contribute more than the minimum to their pension plans. DOE agreed with three of GAO's recommendations but disagreed with the need to define such criteria.
The Census Bureau counts the U.S. population once every decade through its decennial census. For the years in between, the Bureau estimates states’ populations from annual data on changes in births, deaths, and net migration (including net movements of military personnel). These annual population estimates are called postcensal population estimates because they are based on the prior census (see table 1 for definitions of different population counts used in this report). This process of making annual postcensal population estimates continues until the next census. Once the new census is taken, the Bureau compares the population estimates to the census population counts for the same date. The difference between the population estimate and the census count is called the error of closure. Subsequently, annual population estimates are revised for the prior decade using the counts from the new census. For example, after the 2000 census, the annual population estimates from the 1990s were revised to be consistent with both the 1990 and 2000 censuses. These revised population estimates are called the intercensal population estimates because they rely on the preceding and the succeeding censuses. Of the four programs we analyzed, Medicaid is the largest, accounting for 43 percent of the funding for all federal formula-based programs and 94 percent of the total funding for the four programs analyzed for this report (see table 2). The SSBG formula allocates an amount of funding, set by annual appropriation, directly to the states. A state’s allocation is proportional to its share of the total U.S. population. State allocations for fiscal year 2002 used the April 2000 census, and allocations for prior years used postcensal population estimates that were based on the 1990 census.
In contrast with the SSBG’s fixed appropriation, the Medicaid, Foster Care, and Adoption Assistance programs are open-ended entitlement programs—the states determine the level of program expenditures, and the federal government reimburses a share of their expenditures according to matching rates, called the Federal Medical Assistance Percentages (FMAP), set by statutory formula. All three programs use the same formula, which is based on a 3-year average of state per capita income—the ratio of aggregate personal income to state population. As a state’s per capita income increases, its matching rate decreases, and vice versa. In addition, unless a state experiences changes in aggregate personal income, its federal payment generally declines if the state’s population growth is less than the national average. Matching rates range from a minimum of 50 percent to a maximum of 83 percent of a state’s Medicaid expenditures. The minimum 50 percent rate affects only the high per capita income states. For fiscal year 2002, for example, a high-income state such as Connecticut would receive a 15 percent federal matching rate if the 50 percent minimum were not in place. For fiscal year 2002, the federal matching rates for Medicaid, Foster Care, and Adoption Assistance were based on a 3-year average of per capita income from 1997 through 1999. Rates for fiscal year 2003 are based on a 3-year average from 1998 through 2000. Although the formulas use overlapping years, the state population numbers used to compute per capita income differ depending on which fiscal year the grant is for. For these three programs, the fiscal year 2002 formula calculations used postcensal population estimates derived from the 1990 census for 1997 through 1999 to calculate per capita income. Fiscal year 2003 formula calculations used population estimates for 1998 through 2000 derived from the 2000 census.
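The matching-rate mechanics can be sketched with the statutory FMAP formula, 1 - 0.45 x (state per capita income / U.S. per capita income)^2, bounded between 50 and 83 percent. This is a minimal illustration, not the report's calculation; the income ratio of about 1.37 is an assumed value chosen so the unfloored rate comes out near the 15 percent Connecticut example in the text.

```python
def fmap(state_pci, us_pci, floor=0.50, ceiling=0.83):
    """Federal Medical Assistance Percentage: 1 - 0.45 * (income ratio)^2,
    bounded below at 50 percent and above at 83 percent."""
    raw = 1.0 - 0.45 * (state_pci / us_pci) ** 2
    return min(max(raw, floor), ceiling)

# A high-income state with per capita income about 1.37 times the national
# average would receive roughly a 15 percent rate without the floor, but is
# held at the 50 percent minimum (illustrative income ratio, not actual data).
unfloored = 1.0 - 0.45 * 1.374 ** 2   # about 0.15
floored = fmap(1.374, 1.0)            # 0.50
```

Because the floor binds for such states, corrections to their population estimates leave their matching rates, and hence their entitlement-program funding, unchanged.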
Thus, the 2000 census affects matching rates for these programs beginning in fiscal year 2003 (see table 3). The difference between the 2000 census count and the 1999 postcensal population estimate was 3.2 percent, which is large compared with the 1 percent average annual growth rate estimated over the preceding decade. Most of the difference was due to the correction of the error that had occurred during the 1990s. According to the Census Bureau, the size of the error was the result of an underestimate in the measurement of net international migration during the 1990s and the improved coverage of the 2000 census compared with the 1990 census. Consequently, the postcensal population estimate for 2000 was smaller than the 2000 census count. Every state’s population growth was underestimated and needed correction, but the correction amounts varied widely. Among the four Census regions, only the Midwest showed a consistent pattern: all 12 states were close to or below the national average correction. California, Florida, and New York accounted for a high percentage of the correction in population estimates in their respective regions. The 2000 census count of 281.4 million people as reported by the Census Bureau exceeded the 1999 postcensal population estimate by 8.7 million people, or 3.2 percent. Slightly more than three-quarters of this difference (2.5 percent) was the result of correcting errors in the population estimates that occurred over the decade, called the error of closure (see app. I for detailed data for all states). The error of closure was 6.8 million people, substantially larger than the 1.5 million error of closure associated with the 1990 census. The error of closure for the 2000 census was four times the corresponding percentage error for the 1990 census (2.5 percent compared with 0.6 percent). 
The large error of closure in 2000 was due to underestimating the annual growth in population during the 1990s and to the improved coverage of the 2000 census over the 1990 census. The postcensal population estimates for the decade grew an average 1.0 percent annually. However, the 2000 census showed that the average annual growth rate in population was 0.2 percent higher than the estimated rate, or 1.2 percent. The Census Bureau revised its annual population estimates upward when it released its intercensal population estimates in the spring of 2002. The Census Bureau cited two reasons for the size of the error in its postcensal estimated population growth through the 1990s. First, net international migration was underestimated during the decade, especially for the Hispanic population. The Hispanic population was underestimated by approximately 10 percent, four times the national average underestimate of 2.5 percent. Second, the 2000 census was more accurate than the 1990 census. The population undercount in the 2000 census was much smaller than in the 1990 census (1.18 percent, compared with 1.62 percent); the 2000 census counted people who were probably missed in the 1990 census. The error of closure shows a wide variation across states. For example, West Virginia and Michigan had the smallest percentage corrections, 0.27 and 0.34 percent, respectively. The District of Columbia and Nevada had the largest percentage corrections in their population estimates, 10.2 percent and 7.5 percent, respectively. Twenty-eight states had a lower-than-average percentage difference, and 23 states had a greater-than-average percentage difference (see fig. 1 for the correction percentages for all states). Among the four Census regions, the Midwest had the smallest correction in population, 1.5 percent; all 12 Midwest states had corrections close to or below the national average.
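The decomposition of the census-versus-estimate difference described above can be checked with simple arithmetic, using the figures reported in the text (in millions of people):

```python
# Figures from the text, in millions of people.
census_2000 = 281.4                       # April 1, 2000 census count
diff_total = 8.7                          # census count minus the July 1, 1999 estimate
error_of_closure = 6.8                    # portion due to estimation error over the 1990s
growth_1999_2000 = diff_total - error_of_closure   # remaining real growth, about 1.9

estimate_1999 = census_2000 - diff_total  # the 1999 postcensal estimate, about 272.7
pct_total = 100 * diff_total / estimate_1999        # about 3.2 percent
pct_error = 100 * error_of_closure / estimate_1999  # about 2.5 percent
```

The percentages recover the 3.2 percent total difference and the 2.5 percent error of closure cited in the report.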
In the other three regions, a single state accounted for a large share of the population change for the region. For example, in the South, Florida’s correction in population of 4.7 percent constituted about 25 percent of the correction for the entire region. Similarly, New York’s correction was 44 percent of the northeastern states’ correction, and California’s correction was 26 percent of the correction for the western states. The correction to the population estimates generally redistributes federal funding for the four programs we analyzed from the states with the smallest corrections to those having the largest. Federal funding for the 28 states that had below-average corrections decreases by an estimated $380.3 million. In contrast, federal funding in the 23 states with above-average corrections to their population estimates increases by an estimated $388.8 million. Most of the change in funding is concentrated in states with larger populations. Michigan and Ohio, for example, account for 57 percent of the total decrease in funding for states with below-average population corrections. A number of high-income states, including California and New York, are largely unaffected by the correction in their populations because their matching rates for the Medicaid, Foster Care, and Adoption Assistance programs cannot decrease below the minimum 50 percent matching rate. Without this minimum, more funding would be shifted among the states. While the redistribution of funding in the four programs began to occur in fiscal year 2002, almost all of it occurs in fiscal year 2003, when the 2000 census data are used to determine federal matching rates in the three open-ended entitlement programs. The correction in state populations resulting from the 2000 census causes significant changes in the funding levels among the states for the four programs we examined.
We estimate that the funding for the 28 states that had below-average corrections in their populations decreases by a total of $380.3 million. Conversely, funding for the 23 states that had above-average corrections in their populations increases by an estimated $388.8 million (see table 4). These results are dominated by a few highly populated states whose corrections were among the smallest or largest—meaning they are estimated to lose the most money or receive the most additional money. For example, Michigan, the eighth most populous state, has an estimated $119 million decline in funding because of its 0.34 percent correction in population. Michigan’s federal funding decrease accounts for about one-third of the decreases for the 28 states with a below-average correction in population. Moreover, when Michigan’s decrease is combined with that of Ohio, the seventh most populous state, the two states account for 57 percent of the estimated total decline in funding from the corrections of the population estimates. Conversely, Florida, the fourth most populous state, has the largest estimated increase in funding (about $126 million) because of the 4.7 percent correction in its population estimate. This is almost double the national average correction and accounts for about one-third of the estimated increase for the 23 states with an above-average correction in population. Funding changes did not occur in some states and were muted in others because the states’ federal matching rates were fixed by the minimum 50 percent rate for the three open-ended entitlement programs. For example, on the basis of its fiscal year 2000 spending levels, California would receive an estimated $305 million less in matching aid in the three entitlement programs if its matching rate were allowed to fall below the minimum. Because of the 50 percent minimum federal matching rate, however, California only receives an estimated $2.8 million decrease—all of it linked to the SSBG.
For the three entitlement programs, the correction in population had no effect in 11 states that were affected by the 50 percent minimum, and for 2 states the correction in population had a diminished effect because of the floor. The funding changes due to the population corrections showed little regional pattern except in the Midwest, where all 12 states had a correction in population estimates close to or below the national average that resulted in an estimated $289.5 million loss in funding owing to the correction in their populations. Most of the change in funding resulting from the corrections in population estimates is the result of changes in Medicaid funding. The federal share of total Medicaid payments was approximately $111 billion in fiscal year 2000 and constituted 96 percent of the share of funding to the states for the four programs and approximately 96 percent of the total estimated change in funding as well. The SSBG distributed $1.69 billion for fiscal year 2002, representing 1.5 percent of the funding we analyzed. It accounted for a slightly higher percentage, 2.2 percent, of the estimated funding changes. Finally, the Foster Care and Adoption Assistance programs represented 1.6 and 0.6 percent of the funding, respectively. They account for 1.4 and 0.7 percent, respectively, of the estimated funding changes for 2003. The earliest effect of the 2000 census on any of the four programs we analyzed occurred when it was used to calculate fiscal year 2002 SSBG grants. For the Medicaid, Foster Care, and Adoption Assistance programs, the 2000 census is first used for fiscal year 2003 payments. We provided the Department of Commerce a draft of this report for comment. The department provided technical comments, which we have incorporated where appropriate. As arranged with your offices, unless you release its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. 
At that time, we will send copies of this report to interested congressional committees; the Secretary of Commerce; the Secretary of Health and Human Services; and the Director, Bureau of the Census. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have questions about this report, please call me at (202) 512-7114 or Jerry Fastrup at (202) 512-7211. Major contributors to this report are Gregory Dybalski, Elizabeth T. Morrison, and Michael Rose. This appendix compares the postcensal population estimates for July 1, 1999, with the census count for April 1, 2000 (table 5), and compares the April 1, 2000, postcensal population estimates (based on the 1990 census) with the census counts (table 6). States are listed in tables 5 and 6 by the magnitude of the percentage correction in population. This appendix contains the supporting data for our calculations of the estimated change in funding due to correcting the population estimates. Specifically, for each state, we provide the funding amounts for the four programs and the estimated funding changes due to the correction in population estimates. States are listed in tables 7 through 11 by the magnitude of the percentage correction in population. The Medicaid, Foster Care, and Adoption Assistance programs are open-ended entitlement programs for which states determine the level of program expenditures. The federal government reimburses states for a share of eligible state spending based on state per capita income. To calculate the effect of the population correction on the Federal Medical Assistance Percentages (FMAP)—also called federal matching rates—we compared actual matching rates for fiscal year 2003, based on the 2000 census, with the estimated matching rates based on the 1990 census (shown in table 7).
Subtracting the estimated rates from the actual fiscal year 2003 rates shows the effect on the matching rates of correcting population estimates. In general, the states that had a below-average correction in population have a decrease in federal matching rates, while the states that had an above-average correction in population have an increase in matching rates. For 13 high-income states, the correction in population had no effect or had a diminished effect because of the minimum 50 percent matching rate. (Under the matching rate formula, no state can receive less than a 50 percent matching rate.) In our analysis, 11 states receive the 50 percent matching rate for fiscal year 2003; hence, under the estimated rates, the correction in population shows no change in these states’ matching rates. Two additional states, Washington and Nevada, are partially affected. Washington’s actual fiscal year 2003 matching rate is at the 50 percent minimum, while its estimated matching rate is slightly above the 50 percent minimum. Conversely, Nevada’s actual fiscal year 2003 matching rate is above the minimum, and its estimated matching rate is at the 50 percent minimum. The 70 percent matching rate for the District of Columbia is established by a special statutory provision. Accordingly, the District of Columbia’s matching rate remains unchanged, and the correction in population has no effect on funding. The census is a population count made at the beginning of each decade as of April 1; it is based on a count of the entire population. Postcensal population estimates are made annually throughout a decade, usually as of July 1 of each year. Such estimates are based on the prior census and include annual population changes due to births, deaths, and domestic and international migration. 
To measure the effect of the correction in the population estimates on federal payments, we estimated what federal payments would be using matching rates calculated on the basis of postcensal population estimates derived from the 1990 census. Specifically, multiplying the two sets of state matching rates in table 7 by program expenditures (fiscal year 2000 Medicaid expenditures) yields the estimated payments. Fiscal year 2000 was the latest year for which expenditure data were available. (See table 8.) Overall, the states that had a below-average correction in population show a decrease in payments, while the states that had an above-average correction in population show an increase in payments. As discussed in the previous section, 11 states show no effect, and 2 states show a partial effect because of the minimum 50 percent federal matching rate. The District of Columbia is also unaffected because of its special statutorily set matching rate. The effects on the funding for Foster Care and Adoption Assistance are similar to the effects on the Medicaid program because these programs use the same matching rates. Table 9 shows the Foster Care program expenditures for fiscal year 2000, the estimated federal payments, and changes in funding for Foster Care based on these estimated payments. Table 10 shows the Adoption Assistance program expenditures for fiscal year 2000, the estimated federal payments, and the changes in funding for the program based on the estimated payments. The fiscal year 2002 formula allocations for the SSBG are based on the April 1, 2000, decennial census population counts. To calculate the effect of the correction in population estimates, we compared fiscal year 2002 allocations that were calculated using the April 1, 2000, decennial census (actual allocations) with allocations using the 1990 postcensal population estimates for April 1, 2000 (estimated allocations).
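The payment comparison just described reduces to applying two matching rates to the same expenditure base. The sketch below uses hypothetical rates and a hypothetical spending level chosen only to show the mechanics:

```python
def estimated_change(actual_rate, estimated_rate, expenditures):
    """Change in federal payments attributable to the population correction:
    the actual FY 2003 matching rate minus the rate that would have applied
    under 1990-based estimates, applied to FY 2000 program expenditures."""
    return (actual_rate - estimated_rate) * expenditures

# Hypothetical state: a matching rate 0.4 percentage points higher on
# $2 billion of Medicaid spending yields about $8 million more in
# federal payments.
change = estimated_change(0.604, 0.600, 2_000_000_000)
```

For the 11 states pinned at the 50 percent floor, both rates are identical, so the computed change is zero, which is why the floor mutes the redistribution.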
The differences in these allocations represent the effect of the population correction reflected in the 2000 census. The change in funding is directly proportional to the percentage correction in population because the SSBG allocations are calculated exclusively on the basis of population data (see table 11).
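The SSBG mechanics described above, a fixed appropriation split by population share, can be sketched as follows (the state population is hypothetical; the appropriation and total population figures are taken from the text):

```python
def ssbg_allocation(appropriation, state_pop, total_pop):
    """A state's SSBG allocation is proportional to its share of the
    total U.S. population (sketch of the formula as described in the text)."""
    return appropriation * state_pop / total_pop

# Because the appropriation is fixed, a uniform correction to every state's
# population leaves each allocation unchanged; only corrections above or
# below the average shift funding between states.
before = ssbg_allocation(1_690_000_000, 10_000_000, 272_700_000)
after = ssbg_allocation(1_690_000_000, 10_000_000 * 1.032, 272_700_000 * 1.032)
```

This also shows why the SSBG change is directly proportional to each state's relative population correction, as the text notes.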
In fiscal year 2000, about $283 billion in federal grant money was distributed to state and local governments by formula, about half of it through four formula grant programs--Medicaid, Foster Care Title IV-E, Adoption Assistance, and the Social Services Block Grant (SSBG). States receive money based in part on factors such as annual population estimates derived from the previous decennial census, which is conducted by the Department of Commerce, Bureau of the Census. GAO was asked to measure the effect that using the 2000 census data has on redistributing funding for federal formula grant programs. To do this, GAO analyzed the change in the U.S. and state populations between 1999 and 2000 that was the result of correcting prior population estimates and estimated for the four programs the extent of any redistribution of federal funding among states. The 2000 census count of 281.4 million people exceeded the 1999 population estimate by 8.7 million people, or 3.2 percent. Three-quarters of this 1-year population increase, 6.8 million people, was the result of correcting errors in population estimates over the preceding decade; the remaining portion of the increase, 1.9 million people, was the result of population growth from 1999 to 2000. Every state's population had been underestimated during the 1990s, but the extent varied, from the smallest correction in West Virginia--0.3 percent--to the largest in the District of Columbia--10.2 percent. Twenty-eight states had a correction below the national average of 2.5 percent, and 23 states had a correction above the national average. Correcting population estimates for the 2000 census redistributes among states about $380 million in federal grant funding for Medicaid, Foster Care, Adoption Assistance, and SSBG. 
Funding for the 28 states that had below-average corrections to their populations decreases by an estimated $380.3 million; funding for the 23 states that had above-average corrections increases by an estimated $388.8 million. Most of the change in funding is concentrated in states with larger populations. However, changes in funding are smaller in several large states because the matching rates for Medicaid, Foster Care, and Adoption Assistance are limited by statute--matching rates cannot fall below 50 percent. Some higher-income states would receive matching rates below 50 percent if not for this limitation. Most of the shift in funding occurs in fiscal year 2003 when federal matching rates for the Medicaid, Foster Care, and Adoption Assistance programs are based on population estimates derived from the 2000 census. A small portion of the shift occurred in fiscal year 2002 because that is when the SSBG began using the 2000 census counts. The Department of Commerce provided technical comments on a draft of this report.
State’s Foreign Service promotion process is governed by the Foreign Affairs Manual (FAM), the Foreign Service Act, and the Procedural Precepts for the Foreign Service Selection Boards—referred to as the procedural precepts. The procedural precepts are negotiated each year between State and the American Foreign Service Association (AFSA), and establish the scope, organization, and responsibilities of the Foreign Service selection boards that evaluate candidates for promotion. The procedural precepts cover areas such as the conditions for eligibility for promotion, guidance for boards on evaluating candidates, and the information boards are required to submit to the Director General of the Foreign Service and Director of Human Resources (Director General). The procedural precepts are provided to all selection board members at the convening of the boards and, according to State officials, are made available to all Foreign Service personnel worldwide. The decision criteria for promotion in the Foreign Service, also known as the core precepts, provide the guidelines by which selection boards evaluate Foreign Service personnel for promotion. At least every 3 years, State and AFSA negotiate the core precepts. The core precepts define specific skills and levels of accomplishment expected at different grades and across the core competencies of Foreign Service personnel. These competencies include leadership skills, management skills, interpersonal skills, communication and foreign language skills, intellectual skills, and substantive knowledge—the skills, knowledge, and ability an employee applies to the job. State’s Foreign Service promotion system follows an up-or-out principle, under which failure to gain promotion to higher rank within a specified period in a single salary class leads to mandatory retirement for personnel in certain occupational categories. State’s FAM outlines the time-in-class and time-in-service limits for specific occupational categories. 
Various offices and entities play key roles in the Foreign Service promotion process. HR, under the direction of the Director General, has authority over the Foreign Service promotion process. The Office of Performance Evaluation, within HR, manages the promotion process, including recommending selection board members and processing final promotion results and other selection board outcomes. The office also provides various types of assistance and services related to the promotion process, such as guidance to employees in preparing their evaluation materials and monitoring of selection board activities. Grievance staff, also within HR, process grievances relating to the Foreign Service promotion process or underlying performance information relied upon by the selection boards or performance standards boards. The Foreign Service Grievance Board provides an appeal mechanism for employees not satisfied with the outcome of grievances at the agency level. The board currently consists of 20 members. Each member, as well as the chairman, is appointed by the Secretary of State for a term of 2 years, subject to renewal. AFSA provides Foreign Service personnel guidance in preparing evaluation materials. In addition, AFSA attorneys provide assistance to staff who file a grievance with State or the Foreign Service Grievance Board. State’s Foreign Service promotion process includes several types of boards that evaluate and rank order candidates for promotion, identify other candidates for possible separation from the Service, and address promotion process-related grievances. Foreign Service selection boards identify certain candidates for promotion, “low rank” others, and make other determinations. Performance standards boards then review low- ranked candidates for possible separation from the Service. There are several mechanisms to resolve grievances relating to the promotion process, including through the convening of reconstituted boards. 
State carries out several key steps prior to convening the Foreign Service selection boards. Selection boards then evaluate and rank order candidates for promotion. Next, HR officials process board outcomes before announcing promotions. Figure 1 provides information on key steps of the promotion process. State carries out several key steps before convening selection boards. The Director General determines the number of available promotion opportunities, evaluating factors including vacancies, estimated attrition, and projected staffing needs. HR designates selection board members. After the Director General determines how many and what type of boards are needed, HR seeks to fill the boards by soliciting volunteers and recruiting members to meet specific needs in terms of rank or work experience. Each board typically has four to six members of the Foreign Service along with a public, or non-State, member. According to HR officials, public members can offer a different perspective than Foreign Service members, and can act as an additional safeguard over the integrity of the process. All selection board members must be approved by the Director General and cannot serve on a selection board for 2 consecutive years. Selection boards include generalist and specialist boards. Once board members are chosen, State announces them in a cable sent to posts worldwide, which includes instructions on the conditions under which promotion candidates can request certain board members be recused from reviewing their file. Board members can also recuse themselves from evaluating a candidate if they believe they may be unable to render a fair and unbiased judgment. HR Office of Performance Evaluation staff prepare official performance folders for each candidate eligible for promotion. The Employee Evaluation Report (EER) is a key document used by selection boards to evaluate candidates, and includes sections on the candidate’s work requirements, performance, and areas for improvement. 
The EER is developed by the employee and the employee’s designated rating and reviewing officers, and is screened by a review panel that is to provide feedback on any technical mistakes that should be corrected before the EER is formally submitted to the office of performance evaluation. The folder also includes information on the employee’s training record and any commendations, official reprimands, or awards, among other information. Selection boards are to evaluate candidates based only on information in official performance folders, along with other employee records specified in the procedural precepts. Selection boards follow a series of steps to evaluate and rank order candidates for promotion and identify other candidates for possible separation from the Service. The boards first screen all candidate files, and sort them into one of three categories: promotion, mid-rank, or low-rank. Those candidates mid-ranked are generally not reviewed again for promotion by that board. Next, each board member ranks each promotable candidate using a forced distribution scale of 1-10. Any time there is a discrepancy between board members of at least four points in the ranking of a given candidate, the members must discuss the case and, if the discussion results in any changes, adjust rankings accordingly to comply with the forced distribution requirement. Each board has a chairperson responsible for leading such discussions and helping to ensure that board procedures are followed. Once all candidates have been considered and ranked by each board member, the board chair consolidates the scores for promotable candidates into one rank-order list. 
Once a final rank ordering is established, the board submits its final results as part of its official board report, which includes, among other elements: the rank-order list for each competition group of all candidates recommended for promotion; an alphabetical list of those mid-ranked; an alphabetical list of those low-ranked; an alphabetical list of any candidates referred directly to a performance standards board to be considered for possible separation; and recommendations concerning policies and procedures for subsequent boards and improvements to the performance evaluation system. According to HR officials, the board report is the only document retained from each selection board. All other documents, such as notes and score sheets, are destroyed soon after the board’s dismissal. HR officials explained that these documents are destroyed to encourage open and frank discussions and note-taking during the board sessions. After receiving the selection boards’ official reports, HR officials undertake several steps before announcing promotions. First, HR officials told us they draw a “cut-line” on selection boards’ ranked lists of candidates recommended for promotion based on the number of available promotion slots. Then, HR officials coordinate the vetting of candidates ranked above the cut line with several entities, including the Office of Inspector General, the Office of Civil Rights, the Office of Employee Relations, the Bureau of Diplomatic Security, and the Office of the Legal Adviser. These offices respond indicating whether there are any outstanding issues concerning individual candidates, such as a pending investigation or other matter, that could lead to their removal from the promotion list. In addition, in response to personnel changes, HR officials annotate the official board reports’ rank-ordered lists of candidates recommended for promotion, indicating which candidates have been permanently removed and lowering the cut line accordingly. 
Next, according to HR officials, several staff, including the director of HR’s Office of Performance Evaluation, review the revised list of candidates for promotion to ensure it accurately reflects changes due to vetting outcomes. State then publishes the list of promotions. Table 1 provides summary data for the 2011 and 2012 selection boards. Performance standards boards convene each year to assess low-ranked candidates for possible separation from the Foreign Service. According to HR officials, there are typically two performance standards boards convened each year—one for generalists and one for specialists. Performance standards boards are governed by a set of procedural precepts outlined in the FAM. The boards review each employee’s file alongside no fewer than 10 randomly selected employee files from the same competition group and decide whether to recommend the employee for counseling or for separation. The board is required to submit a report to the Director General that includes a list of the members designated for separation along with individual statements justifying the board’s findings in each case. In 2011, 11 employees were designated for separation by performance standards boards. In 2012, 14 employees were designated for separation. Employees selected for separation from the Foreign Service have several remedial options. The Director General first sets a separation date. According to HR officials, employees can, until that date, choose to retire, if eligible; resign; grieve; or request a special review board, where the matter is adjudicated by a judge from outside the Department of State. Foreign Service personnel have several options to seek relief through the grievance process in response to promotion-related matters. Foreign Service personnel are encouraged to first attempt to resolve their concerns about their EERs with their supervisor at the post or bureau level. 
According to the director of State’s grievance staff, State does not track the number of grievances that are resolved between employees and supervisors at post or the bureau. Employees can also formally submit a grievance in writing to the agency grievance staff, providing information such as the nature of the grievance, its effect, which law or regulation the grievant believes was violated, and any relief requested. Grievance staff process these submissions and, according to the director of the grievance staff, will grant interim relief from separation at the agency level, if requested. According to the director of State’s grievance staff, grievances typically pertain to performance, discipline, or financial matters. Grievances relating to a low ranking typically pertain to the employee’s EER, or alleged errors in applying the applicable precepts. For example, in certain cases employees have alleged that their EERs contained falsely prejudicial information, or that selection boards misapplied the procedural precepts in arriving at a decision to low rank a candidate. Employees also have several options beyond the agency grievance staff. A member whose grievance is not resolved satisfactorily under the agency procedures described above can file an appeal with the Foreign Service Grievance Board no later than 60 days after receiving the agency decision. According to the director of the grievance staff, Foreign Service personnel can also file charges relating to prohibited personnel practices via the Office of Special Counsel at any point in the process. Grievants may also appeal a decision of the Foreign Service Grievance Board by filing a complaint in federal district court. Reconstituted boards may be convened if HR officials or the Foreign Service Grievance Board determines a candidate was not properly reviewed or that the official performance folder contained incomplete or inaccurate documentation of performance. State has convened reconstituted boards in response to 16 grievances. 
The members of a reconstituted board are to be chosen, to the extent possible, on the same basis as members of the original selection board, and, to the extent applicable, are to observe the precepts and procedures for the original board. Reconstituted boards are governed by a set of Standard Operating Procedures established by State in October 2011. Reconstituted boards review, in addition to the employee under consideration, the files of the four individuals immediately above the cut line for promotion as designated in the final board report for the competition group and the files of three individuals immediately below the cut line. The reconstituted board rank orders the files under review from one to eight. If the employee for whom the board was reconstituted is ranked by the reconstituted board among the top four files, he or she will be considered ranked for promotion. Prompted by concerns identified by the OIG and Foreign Service Grievance Board in 2010, State took a number of actions to strengthen procedures governing selection boards and reconstituted boards. For example, in response to concerns identified by the OIG, State revised procedures governing the improper introduction of information about candidates and recusal requests. State also updated standard operating procedures for reconstituted boards in response to concerns raised by the OIG and Foreign Service Grievance Board. In addition, State initiated two other practices to strengthen safeguards over the promotion process. State developed a requirement that selection board members sign an oath in response to the OIG’s concerns about the improper introduction of information about candidates during board deliberations. The OIG reported that it became aware of selection board members allegedly improperly removing documents from, or attempting to introduce information not already contained in, a candidate’s record. 
In response, State implemented a requirement that each board member sign an oath to protect the confidentiality of board materials and report any improper introduction of information about a candidate. The oath also addresses board members’ adherence to the procedural precepts and promotion criteria. A copy of each signed oath should be filed in the final board report. State revised its procedures governing candidate and board member recusal requests in response to OIG concerns about them. The OIG found the procedural precepts to be ambiguous regarding the allowable involvement by a selection board member who has voluntarily recused him- or herself from consideration of an individual candidate in other board deliberations, and also found the bases for recusal requests to be too limited. In response, State developed revised language covering selection board recusal requests, which was incorporated in the 2011 procedural precepts. The revised language broadened the circumstances under which an individual under review may request a board member’s recusal. The revised precepts also describe the steps a board member should take to recuse him- or herself and make clear that, while this member will be excused from further consideration of the particular individual, the member will continue to participate in the other activities of the board. State updated its procedures for reconstituted boards in response to OIG and Foreign Service Grievance Board concerns about the operations of these boards. The OIG reported there was no regulation in place establishing the conditions that cause a reconstituted board to be formed, its membership, purpose, or the outcome of its recommendations. 
In addition, the Foreign Service Grievance Board found serious deficiencies and irregularities in the operation of six reconstituted boards, including destruction of underlying board records; inability of board members to confirm that the results reported in the final reports accurately reflected the board’s decisions; evidence that the boards failed to incorporate the safeguards followed by regular selection boards; and lack of evidence that HR staff prepared board reports with sufficient attention to detail. In response, State negotiated and published updated standard operating procedures for reconstituted boards in October 2011. The updated procedures require documentation of steps associated with reconstituted boards. For example, the procedures call for HR to retain an official folder on each reconstituted board, which should include, among other items, documentation of the notification to employees of the names of board members; final board score sheets that are signed or initialed by all board members; signed oaths; recusal forms, if applicable; and the final board report, signed by all members or their proxy. State said it would place renewed emphasis on ensuring that all board members sign board results in response to an OIG concern regarding board result certification. The OIG reported that several former board members asserted that HR officials submitted to the Director General rank-ordered lists of candidates for promotion without those board members certifying the lists, or with results that differed from the members’ recollections. State responded that, while a certification requirement, by signature and initials, of board results was already in place prior to the OIG’s report, it would re-emphasize the need to ensure that this procedure is followed prior to remitting any board list to the Director General. 
An HR official noted that the use of proxy signatures for board members’ certification of results was considered acceptable, so long as the proxy was signed by another board member and not by an HR official. State reported it discontinued annotating promotion lists in response to the OIG’s concern about this practice. The OIG reported that an existing practice of annotating candidate promotion lists—such as by computer, pen, or pencil—could be used to influence the board in favor of certain candidates, such as those who were nearly promoted in prior promotion cycles. According to HR officials, some HR staff had previously annotated promotion lists by noting employees who had the previous year received Meritorious Service Increases, which are given to some employees rank-ordered by selection boards but not promoted due to the limited number of promotion opportunities. State reported that it ended this practice. State said it would try to increase the number of nonspecialists on specialist boards—which evaluate Foreign Service personnel who provide technical, support, or administrative services—in response to the OIG’s concern about these boards’ composition. The OIG reported that, since specialist board members are drawn from a smaller universe than generalist board members, there is a greater possibility these members will personally or by reputation know the candidates being reviewed. State responded that it endorsed the OIG’s recommendation to include at least two nonspecialists on each specialist board. However, State noted that while it would seek to do so in the future, it needed to retain some flexibility to make exceptions in cases where two nonspecialists were not available. The OIG reported there was no consolidated procedural manual for training new HR employees. State developed such a manual and distributed copies to staff. The manual includes information on procedures relating to the promotion process. 
In addition to State’s actions taken in response to others’ identified concerns about the promotion process, we found that State initiated two additional documentation practices to strengthen promotion process safeguards. The first is to have selection board members, in addition to signing final board results, initial each page of the promotable, mid-rank, and low-rank lists in official board reports, thereby attesting to the accurate rank order of all candidates the selection board evaluated. The second is to include selection board and reconstituted board member recusal memos in each final board report. We found that selection boards, performance standards boards, and reconstituted boards complied with many updated procedures in the 2011 and 2012 Foreign Service promotion cycles; however, some selection boards and reconstituted boards had documentation gaps for certain internal controls. Our review of board files, related grievances filed since October 2011, and responses to an online data collection tool sent to 2011 and 2012 board members revealed limited concerns about the operations of some boards. Our review of 77 selection boards, performance standards boards, and reconstituted boards for the 2011 and 2012 promotion cycles found that board members and HR staff complied with many updated internal controls. For example, all 41 selection board reports we reviewed included a memo certifying final results signed by board members or by proxy, and documentation indicating that a public member served on the board. In addition, all 32 reconstituted board reports we reviewed included a statement describing the board’s purpose, a notification to the employee of the board’s composition, documentation indicating a public member served on the board, final board score sheets, and a final board report signed by all members or by proxy. However, we found that some board reports, which constitute the master record of proceedings, had a number of documentation gaps. 
As shown in figure 2, there were several instances of missing oaths and incomplete documentation of recusals among the 41 selection boards we reviewed. For example, we found that 2012 selection board reports did not include 45 of 122 required signed oaths from members, or nearly 40 percent of the required total. Subsequent to our file review, State officials provided a portion of these missing oaths and other missing documents from ancillary records. We also checked for discrepancies between boards’ rank-ordered promotion lists and official promotion announcements and found a total of 74 names recommended for promotion in 2011 and 2012 selection board reports that did not appear on corresponding promotion announcements. State officials explained that these individuals were not included on promotion lists due to requirements outlined in the FAM relating to the (1) permanent removal of names from promotion lists due to personnel actions such as retirement, and (2) temporary removal of names from promotion lists due to outcomes of the vetting process described earlier. State provided documentation to account for each removed name. Compliance with procedures was better, but still not complete, for performance standards boards and reconstituted boards. For example, some reconstituted boards lacked final board score sheets that were signed or initialed by all members, and some lacked signed oaths from all board members. Compliance for performance standards boards and reconstituted boards is shown in figures 3 and 4. Our review of board files, related grievances, and responses to an online data collection tool sent to board members revealed limited concerns about the operations of some boards. Our review of selection boards’ observations about and recommendations for improving the promotion process revealed no concerns relating to the boards’ ability to adhere to core and procedural precepts. 
Our grievance file review revealed one allegation of bias concerning a board member, which was not sustained. Our online tool revealed a limited number of procedural concerns pertaining to a few specific boards. The Director General requests that all selection boards provide observations about the promotion process and recommendations for improving it. Our review of the 41 selection boards’ observations and recommendations revealed no reported concerns relating to the boards’ ability to adhere to the core precepts and procedural precepts. However, we found that a number of boards provided observations and recommendations for improving the promotion process in several areas. For example, more than half of the 41 boards made observations and recommendations concerning the following: completeness, accuracy, or accessibility of the official performance folder; promotion criteria, policies, or related practices; and technological issues affecting board operations. In addition, more than a third of the 41 boards made observations and recommendations concerning the following: uncertainty over how to interpret some performance appraisal information; and sufficiency or quality of promotion process guidance and training. State’s HR staff review and respond to board-identified issues each year and discuss proposed solutions with the Director General. In both 2011 and 2012, State issued worldwide cables with additional guidance to employees, raters, and review panel members to address key board-identified issues. For example, with regard to observations and recommendations about promotion criteria, policies, and related practices, the guidance stressed that candidates (and raters) need to demonstrate the extent of work experience within their “cone” and whether they are serving in “stretch” positions above their current grade level. 
In addition, in response to board feedback, State has developed a process, which was implemented in 2013, whereby employees can self-certify the accuracy of their eligibility for review and related performance information. We reviewed all grievances related to 2011 and 2012 board actions and found that, with one exception, none alleged that any board or board member violated core integrity and fairness precepts such as the intentional introduction of extraneous material into the proceedings or overt bias toward an individual. The one exception was a case in which an employee alleged that a board member held a personal bias toward him and should have recused himself from considering the individual’s file. According to State officials, this grievance was denied at the agency level in April 2013. Grievances generally focused on a complaint that a board misapplied a given precept in arriving at a low-ranking decision. For example, some grievants alleged that a board relied on comments made in the Area for Improvement section of the EER to support a low-ranking determination without corroborating evidence of poor performance elsewhere in the EER, as is required by the procedural precepts. We noted that HR officials often agreed with the grievant in low-ranking complaints and provided the requested relief of expunging the low-ranking statement from the employee’s performance folder, along with related modifications to their “scorecard.” Our online data collection tool revealed a limited number of procedural concerns relating to the operations of three specific boards. Our online tool was designed to provide board members with an opportunity to identify whether they observed any actions, behaviors, or concerns that could have compromised their board’s integrity and fairness. Our online tool was sent to 293 of 298 members who served on the 2011 and 2012 selection boards, 2011 and 2012 performance standards boards, and reconstituted boards since October 2011. 
We received 206 completed forms. From this total, two responses identified a total of four concerns with the operation of a board in 2011 or 2012. One response claimed that a board member had refused to follow precept instructions to consider candidate service in Afghanistan, Iraq, and Pakistan in a favorable light. The same response noted that the board did not follow proper recusal procedures in all cases. The second response claimed that an “HR official” had inappropriately instructed a board member. The same response noted that the board did not follow proper recusal procedures in all cases. We obtained permission from one respondent to provide the respondent’s two concerns to State’s HR staff and the OIG for further review and follow-up as appropriate. State’s Foreign Service promotion process is conducted within the context of an up-or-out system and the practice of identifying a set percentage of staff each year for possible separation from the Service. Within an organizational culture that emphasizes performance and career advancement, safeguards to ensure the fairness and integrity of the promotion process are of particular importance. While we found that State had responded to previously identified concerns about its Foreign Service promotion process and taken a number of actions to strengthen internal controls over the process, documentation supporting the full implementation of these controls was sometimes missing. For example, we found that many selection board member oaths were missing from 2012 selection board reports and some boards did not include documentation of recusal requests. In the absence of a fully documented system of controls, there is a risk that intentional or unintentional failures to implement safeguards, by board members or HR staff, will go undetected and uncorrected. A failure to implement safeguards, in turn, increases the risk that promotion results could be intentionally or inadvertently compromised. 
To improve and better document State’s compliance with key safeguards governing the Foreign Service promotion process, we recommend that the Secretary of State instruct the Director General of the Foreign Service and Director of the Human Resources Office of Performance Evaluation to take steps to ensure that selection board, performance standards board, and reconstituted board reports are complete and fully document compliance with internal controls, including but not limited to signed oaths and recusal memos. We provided a draft of this report to State for its review and comment. State provided written comments, which are reprinted in appendix II. State concurred with our recommendation to ensure board reports are complete and fully documented. In particular, State noted that, during the course of our review, it examined areas we had brought to the department’s attention, and made adjustments in procedures for filing signed oaths, recusal memos, and board reports. State added that it would continue to improve record-keeping in this regard. State also provided technical comments, which we have incorporated throughout this report as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretary of State. In addition, the report is available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8980 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. 
This report examines (1) the Department of State’s (State) process for ranking and promoting Foreign Service personnel, (2) procedural changes State has made to its Foreign Service promotion process in response to identified concerns, and (3) the extent to which updated procedures were consistently followed in 2011 and 2012 and whether any notable concerns about the promotion process remain. To review State’s process for ranking and promoting Foreign Service personnel, we reviewed relevant laws, regulations, and procedures governing State’s promotion and grievance processes, including the Foreign Service Act of 1980, the Foreign Affairs Manual, the Procedural Precepts and Core Precepts for the 2012 Foreign Service Selection Boards, and the training and information materials provided to 2012 selection board members. To understand how these procedures are implemented in practice, we interviewed State officials within the Bureau of Human Resources (HR), including officials from the offices of performance evaluation and grievances. We also interviewed the president and several other officials from the American Foreign Service Association (AFSA), the exclusive bargaining agent for Foreign Service personnel, to understand AFSA’s role in the promotion process. We interviewed four public, or non-State, members of selection boards, which evaluate and rank order candidates for promotion. We reviewed State data on the 2011 and 2012 Foreign Service promotion cycles, and the number, resolution, and status of grievances filed by candidates who were “low ranked” by selection boards in 2011 and 2012. We discussed with State officials how these promotion process and grievance-related data were collected and checked for accuracy. State HR officials told us the promotion data were compiled by HR’s Office of Resource Management and Organizational Analysis. 
According to HR officials, HR staff members manually enter these data into a system referred to as the Board Maintenance Application. HR Office of Resource Management and Organizational Analysis staff members work with HR performance evaluation staff to verify the number of promotions, as well as the rankings of those promoted, and the number of those recommended for promotion but not promoted. These results are published each spring. For grievance data, the director of State’s grievance staff told us grievance staff obtained the names of every person low ranked or referred to a performance standards board in 2011 and 2012 from HR performance evaluation staff, then cross-checked those names against the names of individuals who had filed grievances. Grievance staff then manually searched the grievance files to determine whether the individuals’ grievances involved low rankings, by year. We determined these data were sufficiently reliable for our purposes. To review the procedural changes State has made to its Foreign Service promotion process in response to identified concerns, we focused on concerns identified since March 2010, when the State Office of Inspector General (OIG) issued its Report of Inspection, “Review of the Integrity and Fairness of the Foreign Service Selection Board Process.” We also focused on State procedural changes made since March 2010. In addition to the OIG’s report, we reviewed the record of proceedings for the Foreign Service Grievance Board’s case 2008-051 from July 2010, which addressed concerns with State’s procedures governing reconstituted boards, which are convened if it is determined a promotion candidate was not properly reviewed. We also reviewed Foreign Service personnel grievance cases related to the promotion process, from 2011 through February 2013. 
In particular, we reviewed those cases the grievance office had categorized as one of the following grievance types: promotion, low-ranking, performance standards boards (which review low-ranked candidates for possible separation), or separation. We selected these categories after reviewing a spreadsheet State provided that listed all filed grievances by category, because we determined these categories were most applicable to our review of the promotion process, compared with other categories such as discipline, financial, and leave restoration. We also reviewed all Foreign Service Grievance Board filings related to the same universe of filed grievance cases. In addition, we reviewed selection board recommendations to the Director General from the 2011 and 2012 promotion cycles, which we discuss further below. We also reviewed responses to our online data collection tool, discussed below, that was sent to 2011 and 2012 selection board, performance standards board, and reconstituted board members. To learn about State's procedural changes developed in response to the OIG's recommendations, we reviewed State and OIG documents showing actions State took to comply with those recommendations. We also interviewed State HR officials and officials from the OIG. To examine State actions taken in response to selection board member recommendations to the Director General, we reviewed HR summary memos and cables explaining suggested actions, and interviewed HR officials to discuss the status of these actions. We further discuss these actions in our report's third objective. To review the extent to which updated procedures were consistently followed in 2011 and 2012, and whether any notable concerns about the process remain, we reviewed selection board and performance standards board records for 2011 and 2012 and reconstituted board records from October 2011 through April 2013.
According to State officials, the official reports, referred to as board reports, from these boards are the only records retained; the remaining records are destroyed as a standard practice. We reviewed the three types of board reports within the following timeframes: All 41 selection board reports from 2011 and 2012, the 2 years following the Inspector General's report, to enable us to assess the extent to which State had implemented and consistently followed changes made after that report. To ensure we reviewed the complete universe of selection board reports, we compared the list of board reports provided against State cables announcing the results of the promotion process for 2011 and 2012, which also identified all boards convened in 2011 and 2012. This comparison identified no discrepancies. All four performance standards board reports from 2011 and 2012. There were two performance standards board reports each year for 2011 and 2012, contained within a single folder for each year. We received both reports for each year. All 32 board reports for reconstituted boards, which are convened in response to grievances pertaining to selection board promotion or low-ranking determinations, covering boards convened from October 2011, when State issued new Standard Operating Procedures for these boards, through April 2013. To ensure we reviewed the complete universe of reconstituted board files within this timeframe, we compared the files provided by State against an inventory list provided by State; identified several discrepancies and resolved them with HR officials; and presented a master list of reconstituted boards to State officials, who confirmed the list reflected all reconstituted boards relating to promotions within our requested timeframe. Near the conclusion of our engagement, we reviewed those reconstituted boards that had taken place following our initial file review.
To help conduct our review of the three types of boards, we developed a data collection instrument with data categories reflecting the information required and routinely captured in each type of report. To determine the data included in these reports, we reviewed applicable precepts and standard operating procedures outlining required information, reviewed a sample of the board reports, and discussed the reports with HR officials. The analysts discussed the data categories with two research methodologists and a senior manager and reached agreement on them before coding the reports. For all reports, two analysts reviewed each report jointly. Another independent party reviewed the results of this process after the fact. To summarize and organize the selection boards’ written recommendations or observations to the Director General, two analysts read and entered this information into the data collection instrument. In 41 selection board reports, the team identified 306 separate recommendations or observations. To analyze this written information, the two analysts developed a set of summary statements and higher-level categories to be used for reporting purposes. The summary statements provided a detailed explanation of the nature of the board recommendations or observations, including examples to illustrate what types of recommendations or observations would be coded under these statements. The higher-level categories served as abbreviated headings or titles of these more detailed statements. These statements and categories were based on an inductive exercise involving an in-depth reading and comparison of the board recommendations. The two analysts then tested these statements on an initial set of five board reports, by coding the text in them jointly. The statements and categories were developed iteratively, with modifications made as appropriate. 
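The category-coding approach described above can be sketched in code. This is purely illustrative: the analysts coded each recommendation by reading and judgment, not by keyword matching, and the category names, keywords, and recommendation text below are all hypothetical.

```python
from collections import Counter

# Hypothetical categories standing in for the team's higher-level headings,
# each with keywords standing in for the more detailed summary statements.
CATEGORIES = {
    "precept_clarity": ["precept", "criteria"],
    "file_quality": ["evaluation file", "documentation"],
    "board_logistics": ["schedule", "software"],
}

def code_report(recommendations):
    """Code a report "yes"/"no" per category: "yes" if any recommendation
    in the report falls under that category (keyword matching here merely
    stands in for the analysts' judgment)."""
    return {
        cat: "yes" if any(kw in rec.lower()
                          for rec in recommendations for kw in kws)
        else "no"
        for cat, kws in CATEGORIES.items()
    }

# Two invented board reports, each a list of recommendation texts.
reports = [
    ["Clarify the core precepts used in ranking.",
     "Require complete documentation in each evaluation file."],
    ["Upgrade the ranking software used by boards."],
]

# Tally, for each category, how many reports were coded "yes".
tally = Counter(
    cat for report in reports
    for cat, code in code_report(report).items() if code == "yes"
)
print(dict(tally))  # each category appears once across these two reports
```

Because a report is coded "yes" for every category it touches, a single recommendation can contribute to several categories, mirroring the rule that a recommendation addressing more than one category is coded into all of them.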
The text was coded "yes" or "no." If a segment of text was coded as "yes," it indicated that the particular board had made one or more recommendations that fell into this category. If a recommendation addressed more than one of our categories, we coded it into all applicable categories. The analysts coded the remainder of the reports independently. Once coding concluded, the analysts met to discuss codes and reconcile disagreements as needed. The two analysts were able to reconcile all disagreements. A third-party reviewer examined the team's work and provided several suggestions, informed by discussions with team members, on revisions to some of the categories to more clearly capture how the team defined and interpreted them. Final tallies of the analysis were obtained by counting, for each statement, the number of "yes" and "no" responses; these tallies reflect the number of times each category of recommendation occurred in the 41 reports. To determine whether State was following its procedures for promoting candidates recommended for promotion by selection boards, we reconciled selection board report lists of rank-ordered candidates for promotion against State official promotion announcements. We first identified discrepancies between the two lists by reviewing selection board rank-ordered promotion lists against the State promotion announcements. We provided a copy of our list of discrepancies to HR officials, who provided lists explaining the reasons why certain candidates were removed from promotion lists. Through this process, State accounted for all missing names. To corroborate State's explanation of why names were removed from promotion lists, we requested documentation from State attesting to the reasons given for these removals for a selected number of these individuals.
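The list reconciliations described in this appendix amount to straightforward set comparisons. The sketch below illustrates the idea with invented names; the actual reviews were performed manually against State's records.

```python
# Hypothetical rank-ordered promotion list from a board report and the
# names in the official promotion announcement (all names invented).
board_list = ["Adams", "Baker", "Chen", "Diaz"]
announced = {"Adams", "Chen", "Diaz"}

# Candidates recommended but not announced; each such name requires a
# documented reason for removal (e.g., a pending investigation).
missing = [name for name in board_list if name not in announced]
print(missing)

# The recusal cross-check works the same way in both directions.
master_list = {"Evans", "Ford"}   # State's master list of recusal requests
in_reports = {"Ford", "Gray"}     # recusals documented in board reports
print(sorted(in_reports - master_list))  # in reports but not the master list
print(sorted(master_list - in_reports))  # in the master list but not reports
```

Preserving the board list as an ordered sequence (rather than a set) matters because the cut line is applied to the rank order, so the position of each discrepancy can be relevant, not just its presence.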
State provided us with this documentation, and thereby fully accounted for all discrepancies between selection board report lists of rank-ordered candidates for promotion and State official promotion announcements. Similarly, we took steps to compare recusals we found documented in selection board reports against State's master list of recusal requests. We cross-referenced State's master list of recusal requests against the recusals documented in board reports. Through this process, we identified discrepancies, namely that four board reports had recusal information that was not reflected in State's master list and that nine recusal requests included in State's master list were not documented in the board reports. We discussed these discrepancies with State officials and provided them with specific information on each one. State subsequently presented copies of the nine recusal requests that were not initially documented in selection board reports. To supplement our promotion process file review, we distributed an online tool to allow selection board, performance standards board, and reconstituted board members an opportunity to anonymously comment on any actions, behaviors, or other concerns relating to the board on which they served that they believed could have compromised the board's integrity and fairness. The online tool's intended use was to gather information from board members and, to the extent possible, follow up on any concerns or allegations; it was not intended to produce frequencies or tabulations based on the responses we received, or to report comprehensively on the attitudes of board members toward the promotion process. We determined this type of online tool was appropriate in this case because of the prior allegations of improper behavior related to the process. Before distributing the online tool, we shared it with three selection board members and incorporated their comments, as appropriate.
We also shared the online tool with State and AFSA. We received comments from both State and AFSA; incorporated some of their suggestions, as appropriate; and explained, where applicable, why we did not incorporate certain other suggestions. We requested that State provide us with e-mail addresses for board members from the following board types and within the following timeframes: selection boards in 2011 and 2012, performance standards boards in 2011 and 2012, and reconstituted boards from October 2011 to February 2013, as we distributed the online tool in March 2013. Through our own cross-referencing of State-provided lists of board members against our lists of boards and board members, we discovered that certain board member e-mail addresses were missing and requested that State provide addresses for these names. State provided e-mail addresses for all but several of these names; we obtained some of the missing addresses through our own research and others from AFSA. Overall, we identified a universe of 298 individuals who served on the three types of boards within our specified timeframes. We were unable to contact five of these members for various reasons, including one retired board member for whom State had no e-mail address, and several others due to missing or invalid e-mail addresses. We sent an e-mail with a link to the online tool to 293 board members on March 11, 2013. Responses were accepted through April 7, 2013. We received 206 completed forms. We cannot report a response rate because it is possible that respondents submitted multiple forms or that individuals responded who were not board members in 2011 or 2012. Board members could respond anonymously, and some respondents did not provide contact information. We reviewed written responses to identify any obvious or apparent duplicate or multiple entries, and identified one such entry.
We reviewed all 206 completed forms and identified four responses indicating a concern or problem with the operation of a particular board. Two of these four responses constituted the one apparent duplicate entry, and pertained to a selection board convened in 2009, which is outside the scope of this engagement. We nonetheless provided this concern to State's HR staff and the OIG for further review and follow-up as appropriate. The remaining two responses, each of which identified two separate concerns, fell within our engagement's scope and are discussed in the body of the report. We obtained permission from one respondent to provide the respondent's two concerns to State's HR staff and the OIG for further review and follow-up as appropriate. We conducted this performance audit from July 2012 to July 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following is GAO's comment on the letter from the Department of State. 1. This statement is incorrect. Although Figure 4 did note that State was able to locate some of the missing documentation in ancillary files, it did not note that this was due to the need to compartmentalize sensitive information. In addition to the contact named above, Timothy J. DiNapoli (Director), Anthony Moran (Assistant Director), Joe Carney, Martin De Alteriis, Karen Deans, Etana Finkler, Ernie Jackson, Jill Lacey, Mike ten Kate, and Ramon Rodriguez made key contributions to this report.
State’s Foreign Service promotion process follows an up-or-out principle, under which failure to gain promotion to higher rank within a specified time leads to mandatory retirement for personnel in certain occupational categories. State’s OIG and the Foreign Service Grievance Board identified procedural concerns relating to the process in 2010. GAO was asked to review the Foreign Service promotion process. This report examines (1) State’s process for ranking and promoting Foreign Service personnel, (2) procedural changes State has made to its Foreign Service promotion process in response to identified concerns, and (3) the extent to which updated procedures were consistently followed in 2011 and 2012 and whether any notable concerns about the promotion process remain. GAO reviewed laws and procedures; analyzed selection, performance standards, and reconstituted board files as well as grievance case files for the 2011 and 2012 promotion cycles; interviewed State officials; and contacted 2011 and 2012 board members to offer them an opportunity to comment on the process. The Department of State's (State) Foreign Service promotion process includes convening several types of boards to evaluate candidates for promotion and identify other candidates for possible separation from the Service. State has a separate process to address related grievances. Selection boards review all candidates and sort them into one of three categories: promotable, mid-ranked, and low-ranked. The selection boards produce rank-ordered lists of those candidates recommended for promotion, and a "cut line" is subsequently determined based on the number of available promotion slots. Before announcing promotions, State vets all recommended candidates to determine whether there are outstanding issues, such as a pending investigation, that can lead to their removal from the promotion list. 
Subsequently, State convenes performance standards boards to assess low-ranked candidates for possible separation from the Service. There are several mechanisms to address grievances relating to the promotion process. For example, State may initiate reconstituted boards to reassess candidates if a board failed to follow the procedures or if the underlying performance information contained omissions or inaccuracies. Employees not satisfied with grievance outcomes can file an appeal with the Foreign Service Grievance Board. In response to concerns identified by the Office of Inspector General (OIG) and Foreign Service Grievance Board in 2010, State has taken a number of actions to strengthen its Foreign Service promotion process internal controls. For example, in response to concerns about improper introduction of information about candidates, State instituted a requirement that board members sign an oath to adhere to the promotion criteria and protect the confidentiality of board materials. State also revised its procedures governing recusal requests, thereby broadening the provisions under which a candidate can request an individual board member's recusal from reviewing their file. In addition, State updated its reconstituted board procedures, outlining a set of required documents, such as signed board member score sheets, to be included in each board's official record. In addition to actions taken in response to others' identified concerns, State initiated other practices to strengthen promotion process safeguards, such as including selection board and reconstituted board member recusal memos in the final board report. GAO found that Foreign Service selection boards, performance standards boards, and reconstituted boards complied with many of State's updated procedures in the 2011 and 2012 Foreign Service promotion cycles, but some board reports had documentation gaps for certain internal controls. 
For example, all 41 selection board reports we reviewed included a signed memo certifying final results. However, only 29 of 41 selection boards had signed oaths from all board members, and 45 of 122 required oaths were missing from 2012 selection board reports. In addition, some board reports lacked documentation of some recusal requests. The absence of a fully documented system of controls increases the risk that intentional or unintentional failures to implement safeguards, by board members or State Human Resources staff, may go undetected and uncorrected. Such a failure to implement safeguards, in turn, increases the risk that the integrity of promotion results could be intentionally or inadvertently compromised. GAO recommends that State take actions to ensure full implementation of promotion process internal controls. State concurred with GAO's recommendation.
In managing federal lands, the Forest Service and BLM often contract for services such as road maintenance, forest thinning, and other activities. They also frequently contract to sell forest resources such as timber or firewood. Traditionally, these contracts have been executed separately—service contracts have generally been funded with appropriated funds from the agencies' budgets, while timber has been sold through contracts with private purchasers. The Omnibus Consolidated and Emergency Supplemental Appropriations Act for 1999 authorized the Forest Service to combine these contracting mechanisms by entering into "stewardship end result contracts," under which the agency could use the value of forest products sold to offset the cost of the contracted services. Under such goods-for-services contracts, the Forest Service could, for example, pay for thinning operations by using the proceeds from any commercial timber sold as part of the project. In addition to authorizing contracts, the act authorized the use of agreements to carry out stewardship projects. According to Forest Service and BLM guidance, the decision on whether to use contracts or agreements should be based on the principal purpose of the award, including its intended primary beneficiary. Contracts. The primary beneficiary of a contract is the federal government. Contracts are used for the purchase of goods and services for the direct benefit of the government or for the sale of government property such as timber. A contract is a mutually binding legal relationship obligating the seller to furnish supplies or services and the buyer to pay for them. Agency guidance directs that contracts rather than agreements be used for projects that are highly complex or financially risky. Agreements. Agreements are typically used to transfer a thing of value to a state or local government, or other recipient, to carry out a public purpose.
According to the agencies, agreements are often used for projects that are for the mutual interest and benefit of the government and a cooperating organization—often a nonprofit organization or a state or local government. Under such agreements, both the government and the cooperating organization share the costs of the project, with the cooperator contributing funding, personnel, or equipment. A variety of agreements, including those entered into under the Wyden Amendment, may be used to implement stewardship contracting projects. Under the Wyden Amendment, the Forest Service and BLM may enter into cooperative agreements with landowners for the protection, restoration, and enhancement of fish and wildlife habitat and other resources on public or private land, as long as the agreement benefits the fish, wildlife, and other resources on national forest and BLM lands within the watershed. Additional contracting authorities were also included in the legislation; the full list of authorities follows. (Stewardship contracting authority was initially granted only to the Forest Service; in 2003 it was extended to BLM.) Goods for services allows the agency to use the value of commercial products, such as timber, to offset the cost of services received, such as thinning, stream improvement, and other activities. Designation by description or prescription allows the agency to conduct a timber harvest by providing the contractor with a description of the desired end result of the harvest. For example, the agency might require that all ponderosa pine less than 10 inches in diameter be harvested. Ordinarily, cutting any standing tree before an agency employee has marked or otherwise designated it for cutting is prohibited. Multiyear contracting allows the agency to enter into stewardship contracts of up to 10 years in length. (Standard service contracts are limited to 5 years, although timber sale contracts of up to 10 years were already authorized for the Forest Service.) 
Retention of receipts allows the agency to retain receipts generated from the sale of commercial products sold through stewardship contracts, rather than returning the funds to the Department of the Treasury’s general fund. The receipts are available for expenditure, without further appropriation, on other stewardship contracting projects. Exception to advertising exempts the agency from the requirement under the National Forest Management Act that all sales of timber having an appraised value of $10,000 or more be advertised. Supervision of marking and harvesting of timber sales exempts the agency from the requirement that only federal agency employees supervise the harvesting of trees on agency-managed lands. This authority has allowed the agencies to use certain state agencies to assist in stewardship contracting. Best-value contracting requires the agency to consider other factors— such as past performance or work quality—in addition to price when making stewardship contract award decisions. The 1999 law authorized 28 stewardship contracts by the Forest Service; the authority of the Forest Service to enter into these contracts was to end on September 30, 2002. Contracts were to “achieve land management goals for the national forests that meet local and rural community needs.” The goals listed in the legislation included, but were not limited to, maintaining or obliterating roads and trails to restore or maintain water quality; noncommercially cutting or removing trees or other activities to promote healthy forest stands, reduce fire hazards, or achieve other noncommercial objectives; and restoring and maintaining wildlife and fish habitat. The law also required that the Forest Service establish a multiparty monitoring and evaluation process to assess each stewardship contract. Subsequent laws modified the requirements of the initial stewardship contracting authority. 
For example, the Consolidated Appropriations Act of 2000 changed the requirement from 28 stewardship contracts to 28 stewardship projects, allowing for the possibility that individual projects might involve more than one contract. Subsequent legislation in the following 2 years increased the number of authorized projects and changed the end date of the demonstration period from 2002 to 2004. Most recently, the Consolidated Appropriations Resolution of 2003 extended the authority to enter into stewardship contracts to 2013, extended stewardship contracting authority to BLM, removed the restriction on the number of projects that could be implemented under this authority, removed the emphasis on noncommercial activities among the land management goals listed, and replaced the requirement for multiparty monitoring and evaluation of each project with a requirement to monitor and evaluate the overall use of stewardship contracting. Stewardship contracting projects are subject to environmental and resource management laws—such as the National Environmental Policy Act, the Endangered Species Act, and others—that also apply to nonstewardship projects. Responsibility for administering stewardship contracting authority at the Forest Service lies within two agency offices: the Forest and Rangeland Management Group and the Acquisition Management Group. Each of the nine Forest Service regions has designated a stewardship contracting coordinator to facilitate stewardship contracting activities. These nine regions oversee 155 national forests; the forests, in turn, oversee more than 600 ranger districts. Within BLM, authority for administering stewardship contracts resides within its Division of Forests and Woodlands. Each of BLM’s 12 state offices also has a stewardship contracting coordinator. The state offices oversee the activities of field-level units, including 144 district and field offices that carry out the on-the-ground activities. 
References to “field units” in this report include the Forest Service’s national forests and ranger districts and BLM’s district and field offices. Both agencies generally consider stewardship contracting to be a tool, rather than a program, because it has no associated budget or official accomplishment targets. Instead, the agencies must use existing appropriations to plan and administer their stewardship contracting activities. The Forest Service primarily relies on its fuel reduction and vegetation and watershed management funds to carry out stewardship contracting activities; BLM primarily relies on its forestry and fuel reduction funds. When the agencies use agreements to carry out stewardship projects, the partner organizations typically contribute resources such as funding, volunteer labor, or equipment. The agencies awarded increasing numbers of stewardship contracts during fiscal years 2003 through 2007; however, details about their overall use of stewardship contracting are incomplete because the agencies did not begin to collect nationwide data until recently, and even these data are not complete or consistent across agencies. As a result, certain data are available only for more recent years or are not tracked at all, limiting the agencies’ ability to evaluate their implementation of stewardship contracting and provide information on its use to Congress and other interested parties. From fiscal years 2003 through 2007, the number of stewardship contracts that the agencies awarded increased each year. For the Forest Service, the number of contract awards increased from 36 in fiscal year 2003 to 121 in fiscal year 2007, for a total of 352; for BLM, the number increased from 2 to 51 during the same period, for a total of 183 contracts awarded through fiscal year 2007. For other aspects of stewardship projects, however, reliable data were available only for more limited periods of time. 
For example, complete and comparable data on the volume of timber sold (i.e., sold for cash or exchanged for services) were available only for fiscal years 2005 through 2007. During that period, Forest Service stewardship projects sold about 130 million cubic feet of timber; BLM projects sold about 8 million cubic feet. During the same 3-year fiscal period, Forest Service projects treated about 172,500 acres; BLM did not maintain data on acres treated through stewardship contracts. And during fiscal years 2006 and 2007, the Forest Service sold at least $8.2 million worth of timber through stewardship contracts, while BLM sold about $5.9 million. During the same 2-year fiscal period, BLM procured services valued at about $10.5 million through stewardship contracts; comparable data were not available for the Forest Service. The agencies' stewardship projects generally involved removing timber or other vegetation to reduce hazardous fuels or to otherwise improve forest health; the projects also encompassed various activities that benefited communities or met other restoration objectives, such as controlling disease or improving wildlife habitat. Neither the Forest Service nor BLM maintains data that provide a complete national view of stewardship projects. The agencies did not begin maintaining nationwide data on stewardship contracting projects until recently—primarily because of difficulties in adapting their systems to account for all aspects of stewardship projects. The agencies have adopted ways of collecting and reporting data specific to their respective needs and current capabilities, but the agencies must assemble data from various automated and manual sources to capture a complete picture of their stewardship contracting projects and accomplishments. Further, neither agency has a system that separately tracks data on stewardship agreements.
The Forest Service has modified its existing Timber Sale Accounting (TSA) system to incorporate information on stewardship projects, including the collection and distribution of revenues stemming from stewardship contracts. But the Forest Service did not begin consistently distinguishing stewardship contracts (and their associated service credits) from conventional timber sale contracts in TSA until the beginning of fiscal year 2007. This approach tracks actual dollar values within TSA but has been challenging because the barterlike aspect of stewardship contracting makes such transactions difficult to account for in a traditional accounting system like TSA. TSA was designed to account only for the value of timber sold and the cash received for it, and it was difficult for the Forest Service to adapt the system to account for the value of services received in exchange for timber. Additionally, when entering data, regions vary in whether they assign one number for an entire contract or a number for each task order within a contract. Other nonmonetary information about stewardship projects, such as the number of acres treated, is collected by the national stewardship contracting coordinator through a variety of other sources, including direct contact with regional and forest staff. Information on the value of services over $3,000 purchased as part of certain stewardship projects is maintained in the Federal Procurement Data System—Next Generation (FPDS-NG). However, the system contains information on only some stewardship contracts—those in which the value of services exceeds the value of the timber. Further, these contracts are not consistently distinguished from other types of contracts (i.e., standard procurement contracts) in this system, so complete information specific to stewardship projects cannot be extracted. The Forest Service does not maintain national data on stewardship activities conducted through agreements rather than contracts.
The Forest Service has not yet determined how to modify its systems to incorporate data from agreements under which, as with contracts, forest products may be exchanged for services. The expectation is that stewardship agreements will go through the same accounting measures as contracts do, but it is unclear how forests are to keep track of the services performed under stewardship agreements. This is made more complicated by the fact that partnership agreements are no longer the simple instruments they have traditionally been. Now, for example, timber might be harvested under stewardship agreements, whereas it was traditionally harvested under contracts. In fact, lacking data on agreements, Forest Service officials were not certain whether timber has yet been sold under an agreement or by what means it would be tracked in agency databases if it were. In contrast to the Forest Service’s approach, BLM developed a dedicated stewardship contracting tracking system that BLM staff began using during calendar year 2005, but not all data in this system are validated, and the system does not interface with any other BLM system. Prior to the availability of this tracking system, staff in the field offices generated and maintained their own spreadsheets to track the stewardship project data they found useful. When the agencywide tracking system was developed, according to the system manager, the agency did not impose standards to guide the range and format of data entries or ensure consistency of data elements, such as contract award dates or the format of numerical values. The system contains data on the value of timber sold and services purchased; these data are reconciled manually with BLM’s accounting system rather than being directly tied to the system to allow automated reconciliation. 
Other information about stewardship contracts, such as the volume of products harvested, is collected by BLM’s stewardship contracting data manager through a variety of other sources, including direct contact with field staff. Also, unlike the Forest Service, which does not track agreements in its system, BLM includes agreements in its data system but cannot readily distinguish them from contracts. However, BLM has an effort under way to upgrade its system to improve data consistency and bring the system into compliance with accounting standards; the upgrade is expected to be completed in October 2008. Once completed, this upgraded system is intended to allow BLM to standardize data definitions, as well as to aggregate multiple contracts associated with a single project, in order to better track costs and accomplishments. It is unclear, however, whether the upgraded system will be able to accurately account for the values of products and services procured through stewardship agreements. The lack of complete data hampers the agencies’ ability to evaluate their use of stewardship contracting and to provide details on its use to Congress and other interested parties, including the public. Without such data, for example, the agencies cannot compare the costs and accomplishments of stewardship contracting projects with those of other projects that have similar goals, nor can the agencies accurately track year-to-year trends in the costs and accomplishments associated with stewardship contracting. Likewise, without a complete picture of the agencies’ use of stewardship contracting, Congress cannot fully assess the merits of this tool or its role in the agencies’ larger land management efforts. The agencies’ inability to fully account for the values of products sold and services procured through agreements further clouds the picture of stewardship projects and potentially hampers congressional oversight. 
As we have previously reported, barterlike transactions are not reflected in the budget because no federal government cash flows are involved. As a result, congressional budget decision makers do not have an opportunity to consider whether the value of the exchanged property should be reallocated to other competing resource needs. From fiscal years 2003 through 2007, the Forest Service and BLM awarded a total of 535 stewardship contracts. The Forest Service, the first to receive the stewardship contracting authority, awarded 352 contracts, or over 65 percent of the total for the period; BLM awarded 183 contracts. While the Forest Service’s contract awards generally increased each year throughout the 5-year fiscal period, BLM’s followed a more inconsistent pattern, as shown in figure 1. Our count of contracts awarded includes both contracts and task orders because one stewardship project may encompass multiple contracts, and one large contract or “contract action” (e.g., a task order), may be used for several projects. Because the agencies’ tracking systems maintain data by contract or task order, the number of stewardship contracts may not match up with historical information on the number of projects. For BLM, our count also includes the four cooperative agreements that the agency had entered into with nonfederal partners between fiscal years 2003 and 2007, although these make up only a small portion of the total. Although field units in all Forest Service regions and BLM state offices have used stewardship contracts, the extent of their use varied widely among regions and state offices. For example, while almost 70 percent (16 of 23) of the national forests in the Forest Service’s Pacific Northwest Region had awarded stewardship contracts at the time of our review, less than half (17 of 37) of the forests in the Southern Region had used this tool. Data below the state office level were not available for BLM. 
Figures 2 and 3 show the distribution of contract awards by Forest Service regions and BLM state offices through the end of fiscal year 2007, as well as the extent to which these have been completed. The number of contracts alone is not necessarily an accurate indicator of stewardship activity; the duration of the contracts must also be considered. If some locations use multiple-year instead of single-year contracts, the number of contracts may decrease, even though the overall use of stewardship contracting is increasing. This also holds true for completion rates: Locations that use longer-term contracts for projects, such as BLM’s Oregon/Washington State Office, for example, may show lower completion rates despite making substantial progress on the projects. Of the total 535 contracts awarded by the two agencies, the Forest Service has awarded two 10-year contracts, in Arizona and southern Oregon, while BLM has awarded 30 10-year contracts: 25 in Oregon, 3 in Wyoming, and 2 in California. The types of long-term contracts used by the two agencies differ, however. Whereas each of the Forest Service’s long-term contracts is with a single contractor, the BLM contracts are umbrella contracts within which individual task orders are issued, sometimes to different contractors, to accomplish specific tasks. Under this type of contract, BLM issues task orders to meet specific needs as they arise. The Forest Service reported treating or planning to treat, through stewardship contracts, about 227,000 acres from fiscal years 2003 through 2007. The Pacific Northwest Region reported treating the most acres, while the Alaska Region reported the fewest (with 0 acres accomplished during those years). BLM does not maintain data on stewardship project treatment acreage separately from its other activities, so overall figures for BLM’s acres treated through stewardship contracts were not available. 
The Forest Service sold (i.e., sold for cash or exchanged for services) an increasing amount of timber as part of its stewardship projects from fiscal years 2005 through 2007. The Forest Service’s standard unit of measure for wood products is 100 cubic feet, or ccf. Thus, 100 cubic feet of wood would be measured as 1 ccf. In 2005, stewardship projects sold almost 200,000 ccf of timber; by 2007, that amount had grown to about 650,000 ccf. The timber sold during this period represented 8.5 percent of the total timber volume the Forest Service sold during those years. BLM’s figures are much smaller and declined from year to year: In 2005, BLM stewardship projects sold about 38,000 ccf of timber; by 2007 that amount had shrunk to about 17,000 ccf, altogether representing about 7.4 percent of the agency’s total timber volume sold during those 3 years. Table 1 compares the volume of timber sold through stewardship contracting as a percentage of the total timber volume sold under each agency’s conventional timber program. A BLM official said that a number of factors could have influenced the decline in the percentage of stewardship timber volume relative to total timber volume over the period. Likely the most important factor is that during this period, stewardship projects increasingly produced lower-value forest materials—including small trees, limbs, and brush, often referred to as woody biomass—rather than commercial timber, a trend the official attributed to a poor timber market. Additionally, he said, BLM has stopped assigning specific targets for field units to achieve on the use of stewardship projects, which may have led to some field units’ reducing their use of the tool. In addition, this official noted that some states have focused on issuing smaller contracts to try to build a contractor base. 
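The volume comparisons above reduce to simple unit-and-share arithmetic. A minimal sketch, using the approximate figures reported in the text (illustrative only, not agency code):

```python
def ccf_to_cubic_feet(ccf: float) -> float:
    """Convert hundred cubic feet (ccf) to cubic feet: 1 ccf = 100 cubic feet."""
    return ccf * 100

def share_pct(stewardship_ccf: float, total_ccf: float) -> float:
    """Stewardship timber volume as a percentage of total volume sold."""
    return 100 * stewardship_ccf / total_ccf

# Approximate report figures: Forest Service stewardship volume grew from
# about 200,000 ccf in fiscal year 2005 to about 650,000 ccf in 2007,
# i.e., 65 million cubic feet in 2007.
fs_2007_cubic_feet = ccf_to_cubic_feet(650_000)
```

The percentages in table 1 are computed the same way, as stewardship volume divided by total volume sold under each agency's timber program.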
The Forest Service reports that through stewardship contracts, products worth at least $8.2 million were sold (i.e., sold for cash or exchanged for services) during fiscal years 2006 and 2007—representing about 2 percent of the agency’s total timber value sold (including timber sold through traditional timber sales) during those years. This includes timber large enough to be milled into lumber as well as other products, such as firewood and wood for posts and poles. The Forest Service began collecting these data only in fiscal year 2006, when it developed an accrual accounting method to report the value of forest products sold through stewardship contracts. The $8.2 million figure likely understates the actual value of products sold through stewardship contracting, according to Forest Service officials, because stewardship contracts were not always properly distinguished from conventional timber contracts in the agency’s systems. During the same 2 fiscal years, BLM estimated that the agency sold, through stewardship contracts, products valued at about $5.9 million, representing about 7 percent of BLM’s overall timber value sold during that period. As for data on the value of contractor services received under stewardship contracts, no Forest Service data were available on a fiscal year basis. Although service values specific to stewardship contracting have been captured in TSA since the beginning of fiscal year 2007, the values are cumulative, by contract, and so cannot be identified by a specific fiscal year. Service values prior to that time are recorded in FPDS-NG, but only for certain stewardship contracts—those in which the value of the services exceeds the value of the timber. Further, the system does not distinguish these contracts from other contracts (e.g., standard procurement contracts), so the system cannot generate data specific to stewardship contracts. 
For BLM, the value of services purchased under stewardship contracts during fiscal years 2006 and 2007 totaled about $10.5 million. Both agencies maintain data on the amount of receipts retained from stewardship contracts once the contracts have been closed. The stewardship contracting authority allows the agencies to retain for use on future stewardship projects any money received under a contract or agreement. Although the agencies are not required to return these receipts to the Department of the Treasury’s (Treasury) general fund, the agencies report their net amounts to the Treasury. In fiscal year 2005, both agencies reported that they had no net retained receipts from stewardship contracting. The Forest Service reported about $3.6 million in retained receipts in fiscal year 2006 and about $1.2 million in fiscal year 2007, with the Pacific Northwest and Southern Regions generating the most receipts. BLM reported about $31,000 in retained receipts in fiscal year 2006 and about $107,000 in fiscal year 2007, with the California State Office generating the most receipts. Although the agencies report their retained receipts, they do not track how the receipts are subsequently spent. The Forest Service’s TSA system tracks the amount of receipts collected and retained at the closure of each contract, but it does not track the subsequent expenditure of the receipts. And as we reported in 2007, the Forest Service’s elimination of project-level tracking makes it impossible to determine which specific accounting codes (including the one that designates retained receipts) were used to fund a particular project. BLM tracks the amount of stewardship receipts collected and retained using its Collections and Billing System and, like the Forest Service, reports the amounts annually to the Treasury, but it too does not track the expenditure of retained receipts by project. 
The most common objective of stewardship projects, according to information we gathered during our site visits and agency officials’ statements, is to reduce potentially hazardous fuels by removing timber and other vegetation. Removing timber and vegetation can also promote forest health, another important objective. The agencies generally reduce fuel using either mechanical treatments, in which equipment—such as chain saws, chippers, bulldozers, or mowers—is used to cut vegetation, or prescribed burning, in which fires are deliberately set by land managers to restore or maintain desired vegetation conditions. Figure 4 depicts commercial thinning projects—in which the trees removed are large enough to have some commercial value—on national forest land using a delimber (left) and a grapple skidder (right). Although many projects were designed to protect areas in the wildland-urban interface (WUI)—that is, the area where structures and other human development meet or intermingle with undeveloped wildland—other projects included activities such as improving wildlife or fish habitat, reducing exotic and invasive plant species, and studying heritage fruit trees. In fiscal year 2007, for example, the Forest Service reported treating over 34,000 acres of WUI land, restoring 87 miles of streams, decommissioning 29 miles of road, and improving 35 miles of road for the use of passenger cars. BLM does not gather equivalent information at the field level, but its projects also included a variety of activities intended to reduce fuels, create wildlife habitat, restore streamside habitat, or control invasive plants—in one case, using goats to curtail the spread of the blackberry. We visited two stewardship projects in Idaho where both BLM and the Forest Service worked to improve and protect fish habitat. 
BLM installed culverts and improved roads to protect fish habitat, while the Forest Service restored a stream channel to create habitat for native fish species, including the endangered bull trout, by placing timber products generated from the stewardship contract in the stream to provide protective cover for the fish. This project area is shown in figure 5. During our site visits, we encountered other wildlife improvement projects benefiting, among other species, elk, wild turkeys, and the red-cockaded woodpecker. For example, we visited a project in South Carolina in which the Forest Service created and preserved nesting habitat for the red-cockaded woodpecker, which is a “keystone species” for longleaf pine forests, according to the Forest Service. The birds excavate cavities in longleaf pine trees, typically in mature trees suffering from a fungus that softens the tree’s interior. At one project we visited, the Forest Service was harvesting less desirable trees and using the resulting receipts to pay a contractor to cut the remaining stands into a clustered pattern of longleaf pines with nesting cavities. This project area is shown in figure 6. The Forest Service and BLM have similar processes for planning stewardship projects—including processes for identifying suitable projects, preparing and approving project proposals, and soliciting and evaluating bids. Both agencies involved stakeholders in planning and monitoring stewardship projects, although the extent of this involvement varied. The agencies differ in their approaches to implementing the projects, however, both in the contract types they generally use and in their efforts to involve small, local businesses. The Forest Service and BLM have similar processes for planning stewardship projects, including identifying suitable projects, preparing stewardship contract packages, and soliciting and evaluating bids for contracts; they also have similar processes for monitoring the projects. 
In both agencies, it is generally the field unit staff (e.g., foresters) who initiate stewardship projects, often working with stakeholder groups, and prepare the contract packages. In the Forest Service, the field unit staff generally work as part of an interdisciplinary team—made up of specialists from various disciplines such as engineering, fish biology, and wildlife biology— to identify stewardship projects that need to be done. In BLM, similarly, field unit staff work with other specialists to identify projects that align with the field unit’s resource plan. Projects were typically identified in areas that needed restoration—such as thinning overgrown stands of trees, installing culverts, or obliterating roads—and had enough timber value to cover at least a portion of the cost of the restoration services. For example, one Forest Service project in Montana was considered a good candidate for a stewardship project because it would accomplish needed fuel reduction work in a WUI and was located in an area with sufficient timber value to cover the cost of the service work. The forest did not have sufficient funding for the total volume of fuel reduction work needed in the area. As another example, a BLM project in Oregon was originally planned as several small, individual service contracts for thinning an overstocked pine plantation—taking out all but the biggest trees. But demand had increased for small-diameter wood (for use as posts and poles) and for wood chips that could be sold as biomass. When BLM officials realized they could sell material that had previously been nonmerchantable, they decided to accomplish the thinning through a stewardship contract. Officials in both agencies noted that the impetus for planning their first stewardship projects sometimes came from headquarters or from regional or state offices, which directed field units to implement a certain number of projects each year. 
Headquarters and regional or state office officials said they had done so because field units might otherwise be reluctant to experiment with this unfamiliar tool. In fact, when BLM received the stewardship authority, it provided its field units with a “budgeting carrot” to encourage use of the authority, providing units with extra funding for the accomplishment of stewardship projects. In some cases, project proposals were not initiated by the agency, but instead were brought to the agency by community groups or organizations. For example, the Forest Service’s Crooked River project—in Idaho County, Idaho—was brought forward by the nearby community of Elk City, which is surrounded by forest and was concerned about fire risk. The project included watershed improvement activities in addition to hazardous fuel reduction and timber harvesting activities. As another example, in Michigan, a Forest Service stewardship project grew out of a request from a Native American tribe that wanted to obtain pine logs with which to construct a traditional ceremonial roundhouse. When the tribe asked the Forest Service for help acquiring logs for its roundhouse, the Forest Service agreed to develop a stewardship project in which two stands of trees were reserved from acreage already being thinned as part of a larger project. In exchange for about 150 pine logs from those stands, the tribe performed service work, including thinning and removing aspen and balsam fir, making road improvements, and installing a culvert to improve water quality. This project was done through a contract that the Forest Service negotiated with the tribe, as allowed under the Tribal Forest Protection Act of 2004. Figure 7 shows the roundhouse under construction and the culvert installed as part of the service work conducted in exchange for the roundhouse timber. The agencies’ processes for preparing and approving project proposals are similar as well. 
In both agencies, field unit staff develop a written project proposal, which contains information on the project’s purpose, scope, and acreage, and the type, volume, and value of products and services involved. The proposals also contain information on the type and extent of outreach and collaboration with stakeholders such as community members, environmental groups, and industry representatives. Officials of both agencies said they generally hold public outreach meetings, which they advertise via newspaper and radio ads, Web site notices, or other means. At these meetings, agency personnel discuss project ideas and goals and inform the public about stewardship contracting’s requirements. In the Forest Service, project proposals are submitted to the forest supervisor and then to the regional forester for review and approval. In BLM, proposals are submitted to the state stewardship coordinator for review and then to the state director for approval. Both agencies use established methods to estimate the value of the timber and the value of the services. Agency staff estimate the timber value using the standard appraisal system that they use as part of ordinary timber sale contracts. This system employs a transaction evidence appraisal method, which involves “cruising” the timber to estimate its volume and then using evidence from recent timber sales in the area to estimate its value. To determine the value of the services, both agencies prepare a government estimate, which is developed by resource specialists (e.g., engineers, silviculturists, and fuel specialists). As part of this process, the agencies conduct market surveys by reviewing online contractor information and examining historical contract award information for the state. Processes for soliciting bids on stewardship contracts are similar as well. 
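The transaction evidence method described above can be sketched roughly as valuing the cruised volume at the average price of recent comparable sales. This simplification is ours for illustration; actual appraisals adjust for species, quality, logging costs, and other factors, and the function name is hypothetical:

```python
# Rough illustration of a transaction evidence appraisal: value the
# cruised volume at the average price of recent comparable sales.
# This simplification is ours, not the agencies' appraisal system.
def appraise_timber(cruised_volume_ccf: float, recent_prices_per_ccf: list) -> float:
    """Estimate timber value from cruised volume and recent sale prices ($/ccf)."""
    avg_price = sum(recent_prices_per_ccf) / len(recent_prices_per_ccf)
    return cruised_volume_ccf * avg_price
```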
For contracts that primarily involve the sale of timber, the agencies issue a prospectus, with bid forms, and have a sample contract available at the field unit for review by prospective bidders. For contracts that predominantly feature the acquisition of services, the agencies advertise the contract in FedBizOpps—the Web-based database of federal contracting opportunities—and then issue solicitations for bid. The agencies also hold “show me” trips or preproposal or prebid meetings to discuss project specifications with potential bidders, and may amend project requirements based on the comments received at these meetings. Once they receive bids, both agencies use the best-value process for evaluating the technical proposals submitted by potential bidders for stewardship projects. This process entails having a group—composed of individuals with the requisite skills—evaluate each technical proposal and assess its strength in each evaluation category. The categories typically include factors such as experience, past performance, strength of the technical approach, type of equipment available, planned use of local workers, and planned use of forest products by local mills or companies. In some cases, the evaluation group assigns numerical scores to the various factors; in other cases, the group uses adjectival ratings (e.g., exceptional, acceptable, marginal, and unacceptable). As an illustration of the factors considered during this process, one Forest Service project in Idaho entailed emplacing more than 100 in-stream structures (e.g., large boulders and trees) to improve bull trout habitat over about 4 miles of stream. A “big plus” in the winning bidder’s technical proposal, according to Forest Service officials, was the plan to have a hydrologist design an on-the-ground survey before the contract package was put together, a step that officials believed would eliminate the need for many contract modifications as the work progressed. 
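A hypothetical sketch of the scoring step just described follows. The factor names, the 0-100 scale, and the equal weighting of price against the combined technical factors are our illustrative assumptions, not an agency formula:

```python
# Hypothetical best-value scoring; factors, scale, and weights are
# illustrative assumptions, not an agency formula.
TECH_FACTORS = ("experience", "past_performance", "technical_approach",
                "equipment", "local_workforce", "local_product_use")

def best_value_score(tech_scores, price_score, price_weight=0.5):
    """Average the technical factor scores (each 0-100) and combine them
    with a normalized price score (0-100) at the given price weight."""
    tech = sum(tech_scores[f] for f in TECH_FACTORS) / len(TECH_FACTORS)
    return (1 - price_weight) * tech + price_weight * price_score

# When technical proposals score similarly, price drives the outcome:
bidder_a = best_value_score({f: 82 for f in TECH_FACTORS}, price_score=60)
bidder_b = best_value_score({f: 80 for f in TECH_FACTORS}, price_score=70)
```

With a 50 percent price weight, bidder B's lower price outweighs bidder A's slightly stronger technical scores, mirroring officials' observation that price can become the de facto deciding factor as proposals converge.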
Typically, price is evaluated separately from the other factors. According to the Forest Service’s national stewardship contracting coordinator, forests have different opinions about whether price is equal to or of lesser or greater importance than other factors in determining best value, and forests vary in the weight they assign price. He believes that price should be 50 percent of the determination and that all the other factors should make up the other 50 percent. However, in several locations, agency officials said that over time, as contractors gained experience in completing technical proposals, the differences between proposals had become more and more narrow, until ultimately, price became the de facto deciding factor in contract award. The extent to which stakeholders have been involved in planning stewardship projects varied. For some projects, public interest has been keen, and the agencies have collaborated with large and diverse groups, which in some cases predate the stewardship authority. For BLM’s Weaverville community forest project in California, a diverse group of individuals—representing the community, industry, and environmental groups—was involved in project planning. Similarly, for the White Mountain stewardship project on the Apache-Sitgreaves National Forests in Arizona, a large and diverse group was involved in planning the project; the group includes representatives of environmental and wildlife advocacy groups, state and local governments, industries, a university, and communities. On the Lakeview Sustained Yield Unit in southern Oregon, the Forest Service and BLM have worked with a group that dates back to the 1950s, when it was a community group that oversaw and reported on the unit’s production. Today, this group—which includes representatives of several environmental groups as well as industry—works with the agencies to sustain and restore a healthy forest ecosystem that can accommodate human and natural disturbances. 
Fear of wildland fire is often the impetus for collaboration. Communities that have experienced large fires are often interested in fuel reduction, whether accomplished through stewardship projects or other means, and increasing numbers of communities around the country are identifying areas needing fuel reduction to reduce the risk of fire. In one area in Montana that had experienced a large fire in the past, for example, the Forest Service held public meetings about a proposed stewardship project involving fuel reduction and notified interested parties about the meetings through newspaper notices and telephone calls. The Forest Service collaborated with the local rural fire department as well as with community members and environmental groups. According to the district ranger, the public was interested in this project for its fuel reduction benefits, and a prominent environmental group was also in favor of the project. The White Mountain project in Arizona likewise benefited from increased collaboration in the wake of the nearly 500,000-acre Rodeo-Chediski fire. Not surprisingly, individuals who collaborate on stewardship projects are often the same ones who have been involved in developing community wildfire protection plans. In California, for example, BLM worked with community fire safe councils to plan stewardship projects in the WUI. Although public interest in some projects is intense, for other projects, agency officials said there was little public interest, despite agency efforts to involve community members. For example, a Forest Service official in Colorado said that while the region gets a lot of community interest in some projects, most of the region’s stewardship projects are “standard, run of the mill” projects (e.g., small thinning projects) that are not of interest to the community. With these types of projects, this official said, community members sometimes come to meetings but typically have little further involvement. 
As for monitoring, both the Forest Service and BLM systematically involve stakeholders in programmatic monitoring, but stakeholder involvement in project-level monitoring varies. As noted earlier, the 2003 stewardship authority replaced the requirement for multiparty monitoring and evaluation of each project with a requirement to monitor and evaluate the overall use of stewardship contracting. Accordingly, the Forest Service and BLM jointly contracted with the Pinchot Institute for Conservation to conduct “multiparty programmatic monitoring” of stewardship contracting—that is, nationwide monitoring of the overall use of the stewardship authority. The institute conducts this monitoring primarily through subcontracts with four regional partnership organizations that survey agency staff and project stakeholders (e.g., contractors and community members) about the extent to which local communities were involved in developing stewardship projects. The institute worked with the Forest Service and BLM to develop the survey instrument. In fiscal years 2006 and 2007, the institute’s regional partners conducted telephone surveys with individuals involved in a sample of Forest Service and BLM stewardship projects. The fiscal year 2007 programmatic monitoring survey included 58 Forest Service stewardship projects and 38 BLM projects. For each of these projects, three individuals were identified for interviews: the agency project manager and two randomly selected external participants, such as community members or contractors. For the Forest Service projects, more than 70 percent of the 67 external (nonagency) survey respondents believed that the development of the stewardship project in which they were involved was “very collaborative” (39 percent) or “somewhat collaborative” (33 percent). 
Only 6 percent characterized the development as “not at all collaborative.” Similarly, for BLM projects, more than 80 percent of the 37 external survey respondents believed that the development of the stewardship project in which they were involved was “very collaborative” (24 percent) or “somewhat collaborative” (57 percent). Only 5 percent characterized the development as “not at all collaborative.” (For both agencies, the remainder of the respondents said they did not know.) Although both agencies also monitor the effects of individual projects over time, the extent to which the agencies involve stakeholders in project-level monitoring activities varies. In some locations the agencies have undertaken extensive and innovative approaches to involving stakeholders in project-level monitoring. For example, at one project in southern Colorado, BLM works with graduate students from the University of Kansas to establish and monitor treatment plots to measure the project’s effect on soils, vegetation, tree stand diversity and health, and wildlife use. And at a project in northern California, the Forest Service included in the stewardship contract, as a service item, a requirement that the contractor compile and submit data on the use of machinery—such as harvesting and hauling equipment—on the project. These data are then used by a nonprofit organization to study the carbon offsets on projects. As another example, at the Forest Service’s White Mountain stewardship project, a stewardship board monitors the project’s social, ecological, and economic effects. In other locations, the agencies have not involved stakeholder groups substantially in project-level monitoring. In one region, officials noted that most of the field units manage collaboration as part of the environmental assessment activities required by the National Environmental Policy Act; in other areas, officials noted that there has not been much local interest in engaging in multiparty monitoring. 
Both the Forest Service handbook and BLM guidance state that stewardship receipts may be used to defray the direct costs of the local collaborative process—for example, by paying for meeting rooms, facilitation, and travel for stakeholders involved in the monitoring process. Also, the Forest Service handbook allows stewardship receipts to be used to pay for the development of monitoring protocols and items to be monitored, as agreed on within a collaborative group and recommended to the line officer. Both agencies also allow the use of stewardship receipts for project-level monitoring in certain circumstances, such as where there is interest and support from local collaborative partners. However, Forest Service guidance specifies that stewardship receipts may not be used for environmental monitoring—that is, monitoring of a project’s effects on air, soil, or water quality—at the project level. Some collaborators see the prohibition on using stewardship receipts for project-level environmental monitoring as a shortcoming. For example, in a letter to the Forest Service’s Washington Office, one group of stakeholders expressed its dissatisfaction with the restriction on the use of receipts for project-level monitoring. The group was concerned that the lack of funding would hobble its efforts to collect information demonstrating the effective implementation and results of projects, thereby preventing the demonstration of ecological and economic benefits to the watershed. The Forest Service responded that it is required by its land and resource management plans to conduct environmental monitoring of its activities and that it will continue to do so with funds other than stewardship receipts, thereby allowing stewardship receipts to go toward accomplishing work on the ground. 
Other stakeholders we met with noted their concern that without rigorous project-level monitoring, it will likely be difficult to assess the effects of individual stewardship projects and of stewardship contracting authority as a whole. Both agencies use contracts with a mix of timber sale and acquisition provisions, although the Forest Service typically uses contracts emphasizing timber to be sold, while BLM typically uses contracts emphasizing services to be procured. The Forest Service generally uses one of two basic types of stewardship contracts: integrated resource timber contracts (IRTC) and integrated resource service contracts (IRSC). The selection of the contract type to be used depends on the type and merchantability of the product to be sold (generally timber). IRTCs are generally used when the estimated value of the timber to be sold under the contract exceeds the estimated value of the services to be performed. IRSCs are generally used when the ratio is inverted—when the estimated value of the services exceeds that of the timber. In both cases, the difference in value is balanced with an appropriate cash payment. Several field unit staff said they would prefer to have a single stewardship contract, for ease of use, rather than separate ones (i.e., IRTCs and IRSCs), but Forest Service officials in the agency’s acquisition management and forest management programs explained that the contracts are different because they are governed by different legal requirements. On the timber side, the Forest Service’s authority to sell timber is governed by, among other laws, the National Forest Management Act of 1976, as amended. Acquisitions of goods and services are governed primarily by the Federal Acquisition Regulation (FAR). Administering contracts under these two different authorities requires different training, experience, and workforces. 
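The contract-type selection described above amounts to a simple comparison of estimated values, with a cash payment balancing the difference. The sketch below is purely illustrative: the function name and dollar figures are hypothetical, and actual agency determinations involve appraisals and contract terms not modeled here.

```python
# Illustrative sketch of the IRTC/IRSC selection rule described in the text.
# Hypothetical function and figures; not an agency procedure.

def select_contract(timber_value: float, service_value: float):
    """Pick the contract type and compute the balancing cash payment.

    IRTC when the estimated timber value exceeds the estimated service value
    (the purchaser pays the government the difference); IRSC when the service
    value exceeds the timber value (the government covers the difference,
    typically with appropriated funds).
    """
    if timber_value > service_value:
        return "IRTC", timber_value - service_value  # owed to the government
    return "IRSC", service_value - timber_value      # covered by the government

# Hypothetical example: $250,000 of timber offsetting $180,000 of service work
contract_type, payment = select_contract(250_000, 180_000)
print(contract_type, payment)  # IRTC 70000
```

In this hypothetical case the timber value dominates, so an IRTC would be used and the $70,000 excess would flow to the government as receipts; reversing the figures would yield an IRSC instead.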
On the timber side, for example, contracting officers are familiar with the requirements for planning and administering timber sales, but generally lack the certification required to authorize them to obligate government funds for the acquisition of services. And on the acquisition side, contracting officers may be familiar with procurement requirements but generally do not have much experience with timber sales, and the Forest Service has not had the resources to train them. Having a single contract form would require staff expertise and training in both areas. Accordingly, the Forest Service maintains two separate contract types, each with its own rules and provisions. Forest Service officials also told us that the agency designed the IRTC so as to minimize the differences between timber sale contracts and stewardship contracts, because purchasers were familiar with timber sale contracts, and the Forest Service expected those same purchasers to play a large role in stewardship contracting. The Forest Service predominantly uses IRTCs rather than IRSCs. Particularly in timber-rich parts of the country, such as parts of the Northeast and the Northwest, most stewardship projects are done under IRTCs because timber is the main source of revenue to pay for the service work. When it uses IRTCs, the Forest Service often has receipts remaining when the contract has been closed. In Montana, for example, a Forest Service official said that generating retained receipts is a “key aspect” of stewardship contracting because the Forest Service can retain these receipts and use them to accomplish subsequent stewardship work. The Forest Service generally uses retained receipts to pay for services acquired through another stewardship contract—usually a service contract—within the same forest, and typically within the same ranger district, from which the receipts originated. 
Although retained receipts could be used in another forest or ranger district, the community stakeholders that helped plan and monitor a stewardship project might object to retained receipts being directed elsewhere, according to a Forest Service official. The Forest Service tends to use IRSCs in parts of the country that have low-value timber. In the Rocky Mountain and the Intermountain regions, for example, many of the stewardship contracts are IRSCs (primarily for fuel reduction), because the timber is typically low in value. Accordingly, the regions require substantial appropriated dollars to supplement the value of the timber and pay for the necessary services such as hazardous fuel reduction. In contrast to the Forest Service, BLM predominantly uses acquisition contracts (i.e., service contracts) to carry out its stewardship projects. BLM officials typically refer to their stewardship contracts as “service contracts with embedded products.” Because the value of the services acquired typically exceeds the value of the product sold (as with Forest Service IRSCs), BLM generally uses fuel reduction or forest management funds to pay for a portion of the service work. In many cases, these contracts take the form of indefinite delivery/indefinite quantity (IDIQ) contracts—umbrella contracts under which the agency can issue numerous task orders. Although BLM also has a stewardship contract type that can be used for projects that primarily involve the removal of timber, BLM has generally avoided using stewardship contracting to carry out timber sales. According to several BLM field officials, for example, the general direction from the Washington Office has been that if a project “looks like a timber sale and feels like a timber sale,” then it should be offered as a regular timber sale, using standard timber sale procedures. This is particularly the case for the heavily timbered lands in western Oregon. 
In fact, agency guidance allows timber-type stewardship contracts to be used only when the total volume of timber to be sold is less than 250,000 board feet. This restriction, according to a BLM official, effectively prevents the use of timber-type stewardship contracts on even small stewardship projects; instead, a project must either be altered to include services with a value in excess of the timber value or simply offered as a regular timber sale. Also unlike the Forest Service, BLM has typically not retained receipts at the end of stewardship projects. Of course, in many areas of the West, BLM-managed land lacks valuable timber that could generate such receipts. In Arizona, for example, BLM-managed lands primarily have piñon pine and juniper, which have little market value and are typically used for firewood. As one BLM official explained, stewardship contracting is a restorative activity that may not have a commercial value. If it does, that value is used to offset part of the cost of the service work. When retained receipts are generated by BLM projects, typically through projects conducted in California, Oregon, and Washington, their distribution is decided by the BLM state office. Generally, according to BLM officials, the receipts would be directed first toward the project that generated them (if needed), then to the same local area, then to the state, then to other states that need them. In some locations, where the timber has a high enough value, BLM officials said they want to begin generating more retained receipts. This would allow them to use those receipts on stewardship projects in other areas where the timber values are lower. Stewardship agreements are another vehicle occasionally used by the agencies. As discussed earlier, the Forest Service had entered into 12 agreements between fiscal years 2003 and 2007; BLM had entered into 4 agreements. 
The agreements in place are typically cost-share agreements or participating agreements, in which both the agency and the partner derive a mutual benefit. Most of these agreements are for 10 years, and while some are small, others cover an entire region. The Forest Service’s Pacific Northwest Region, for example, issued two regionwide agreements—one with the National Wild Turkey Federation and one with the Rocky Mountain Elk Foundation. These regional agreements are essentially “umbrella agreements” that are similar to the IDIQ contracts BLM uses. That is, the regional agreements establish the framework within which a number of projects can be completed through supplemental agreements, similar to the task orders issued under an IDIQ contract. Agency officials provided several reasons why they sometimes prefer to use agreements rather than contracts to carry out stewardship activities. First, they find agreements to be simpler and more flexible than stewardship contracts. That is, whereas contracts can be a hundred pages or more in length, agreements are generally much shorter—perhaps a dozen pages. Also, changes can be made to agreements more quickly and simply—the partners agree on what changes are needed, write up the changes, and initial them. Second, agreements need not contain all of the many clauses required by contracts (e.g., clauses associated with the calculation of timber rates or the costs of constructing roads). And third, unlike contracts, agreements sometimes bring in matching funds from partnership organizations, thereby allowing the agency to “get more bang for its buck,” as one official said. That is, when partners contribute resources such as volunteer labor, equipment, or funding, work can be accomplished at less cost to the agency. In one stewardship agreement, for example, the Forest Service and a partner (a nonprofit organization) agreed to share the cost (about $114,000) of a stewardship project designed to reduce hazardous fuels. 
The Forest Service’s share was 56 percent; the partner’s was 44 percent. The partner’s contribution included services and supplies, as well as $36,000 from a grant it had received from another nonprofit organization. Agency officials cautioned, though, that agreements do have drawbacks. For example, in some cases a partner organization may not have the skills or experience to perform all the work, so agency staff may need to spend considerable time overseeing the work to ensure it is done properly. Also, according to agency officials, potentially interested contractors may feel unfairly excluded if a project is awarded to a partner organization through an agreement, rather than being offered for competitive bid. Finally, some agency officials expressed uncertainty about the options available to them in case a partner did not comply with the provisions of an agreement. One type of agreement that has not been widely used in stewardship projects is the Wyden Amendment agreement, which allows the agency to conduct restoration work on private lands, as long as the work achieves public land management goals. According to Forest Service officials, only one national forest—the Siuslaw, in Oregon—has used a Wyden Amendment agreement to include the treatment of private lands in a stewardship project. Headquarters officials did not know why forests had not made greater use of the Wyden Amendment, but they surmised that forest officials were either unaware of the Wyden authority or had placed a higher priority on getting work done on federal land than on private land. This is a decision, according to headquarters officials, that forest supervisors must make in determining the best use of limited agency resources. In addition to using different contract types, the agencies differ in how they approach the objective of involving small local businesses in stewardship contracts. 
Whereas the Forest Service invites full and open competition on most of its stewardship contracts, both IRTCs and IRSCs, BLM generally sets aside its stewardship contracts for small businesses. Forest Service officials explained that they invite full and open competition on all IRTCs because, like traditional timber sale contracts, IRTCs are exempt from the requirements governing small business set-asides. However, the large timber companies that typically bid on IRTCs often subcontract with small local businesses to do the service work with which the timber contractors are less familiar. In this way, IRTCs help stimulate the local economy, albeit somewhat indirectly. The Forest Service’s IRSCs, on the other hand, are subject to the requirements governing small business set-asides. According to the FAR, acquisitions of services within a specified range of value (generally from $3,000 to $100,000) shall be “reserved exclusively for small business concerns and shall be set aside for small business unless the contracting officer determines there is not a reasonable expectation of obtaining offers from two or more responsible small business concerns that are competitive in terms of market prices, quality, and delivery.” Nevertheless, Forest Service officials believe that most of the agency’s IRSCs are not set aside for small business, largely because of the dearth of small logging companies in many parts of the country. However, the Forest Service does not maintain data on the number of IRSCs that are set aside for small business, and so cannot gauge its success in involving small businesses. BLM, in contrast, generally sets aside all of its stewardship contracts (which are typically service-oriented contracts) for small businesses, according to agency officials. 
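The FAR language quoted above amounts to a two-part test: the acquisition falls in the stated value range, and the contracting officer reasonably expects competitive offers from at least two responsible small businesses. The sketch below renders that test only as an illustration; the function is hypothetical and omits the many other factors an actual FAR set-aside determination involves.

```python
# Illustrative sketch of the set-aside test quoted from the FAR in the text.
# Hypothetical function; real determinations involve many additional factors.

def should_set_aside(estimated_value: float, responsible_small_bidders: int) -> bool:
    """Return True if the acquisition would be reserved for small business.

    Combines the quoted value range (generally $3,000 to $100,000) with the
    expectation of offers from two or more responsible small businesses.
    """
    in_range = 3_000 <= estimated_value <= 100_000
    expects_two_offers = responsible_small_bidders >= 2
    return in_range and expects_two_offers

print(should_set_aside(50_000, 3))  # True: in range, two or more expected bidders
print(should_set_aside(50_000, 1))  # False: fewer than two expected bidders
```

On this reading, a $50,000 service acquisition with only one expected small-business offer would not be set aside, which mirrors the officials' explanation that a dearth of small logging companies keeps many IRSCs from being set aside.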
A contracting officer from BLM’s Oregon office said that there have consistently been at least two responsible small business firms that were interested in BLM stewardship projects and from which BLM could expect to receive reasonable prices. Accordingly, BLM projects have been set aside for small businesses. Although BLM’s stewardship contracting guidance makes an exception to the set-aside policy in cases where “non-traditional entities (e.g., local governments, nongovernmental entities, and nonprofit organizations) have expressed an interest” in a project, BLM contracting officials told us they set aside all stewardship contracts regardless, because, in their words, “the FAR trumps BLM guidance.” The agencies cited as key benefits of stewardship contracting the ability to accomplish more work on the ground and to build collaborative partnerships. The primary challenges cited by the agencies are (1) overcoming internal and external resistance to using stewardship contracting, (2) dealing with market uncertainties, and (3) understanding and dealing with the ramifications of using long-term multiyear contracts. The agencies have numerous efforts under way to overcome some of the challenges they face, including conducting training courses and workshops and supporting innovative efforts by entrepreneurs and researchers, but they have not developed strategies to guide the nationwide use of long-term multiyear stewardship contracts and to inform offices’ decisions about the use of such instruments. Agency officials frequently cited the ability to get more work done on the ground as a measure of stewardship contracting’s success. “Stewardship contracting is the most valuable tool the Congress has given us in 30 years,” said a Forest Service official in southern Oregon. 
In particular, according to agency officials, the ability to use product value to offset service costs has enabled them to accomplish work that otherwise would not get done, given current funding constraints. A Forest Service district ranger in Montana, for example, said that stewardship contracting enabled the district to perform nearly $1 million of service work for which the district did not have appropriated funds—an amount equivalent to about 40 percent of the district’s entire annual budget. This work included removing 49 stream crossings—roads that crossed streams and thus contributed to stream sedimentation—to help meet state water quality goals. Similarly, a Forest Service official in Wisconsin noted that through stewardship contracting, the forest unit could accomplish some work—including planting large trees and grinding stumps—that it would not have been able to afford to do otherwise. As another example, stewardship contracting is expected to play a big part in helping the Forest Service deal with the problem of trees killed by the mountain pine beetle in Colorado. The Forest Service plans to use stewardship contracting for the removal of dead and dying trees, using the value of the trees to offset a portion of the associated costs. Several environmental group representatives we spoke with likewise praised stewardship contracting for helping the agencies accomplish more needed work. Forest Service officials also stated that stewardship contracting is financially advantageous in other ways. First, although the agencies often use monies from the Knutson-Vandenberg (K-V) fund to conduct reforestation, using the stewardship authority to conduct these activities allows field units to avoid the overhead charges that the Washington Office assesses on the use of K-V (and other) funds. Additionally, retained receipts are subject to fewer limitations on use than K-V funds. 
Field officials also stated that stewardship contracting enhances their productivity because the revenues stay within the field unit rather than being returned to the Treasury. As a Forest Service official in Wisconsin said, “Anything we get under stewardship contracting is better than a traditional timber sale because the revenues stay here rather than going to the Treasury.” Agency officials also pointed out the savings from implementing one contract for a particular project rather than two or more. According to a forester in a BLM field unit in California, for example, the net cost per acre is reduced because BLM staff spend less time developing, advertising, and implementing a single stewardship contract than they would on multiple traditional contracts. Similarly, a Forest Service official noted that prior to receiving the stewardship contracting authority, the Forest Service had to go through a two-step process: first conducting a timber sale to remove merchantable timber and then issuing a separate service contract to remove the remaining material. The official stated that by law, the Forest Service could not mix the two steps. Stewardship contracting has relieved the Forest Service of that burden by having a single contractor do all the work, thereby saving the agency the time and associated cost of preparing two separate contracts. Agencies also cited the collaborative partnerships they have built through stewardship contracting. These collaborative partnerships have resulted in community support for stewardship projects and allowed the agencies to move forward with projects without the litigation costs and delays that have often confronted typical timber sales and even some hazardous fuel reduction projects. A Forest Service official in Montana, for example, said that the community has become very supportive of stewardship contracting, as have local environmental groups. Another official added that this support is in itself a big success. 
At first, according to this official, some environmental groups refused to accept stewardship contracting, saying that it was just an excuse to cut more timber. But now, she said, she is hearing less opposition. Similarly, a headquarters Forest Service official said that when stewardship contracting first started, many—including the Forest Service and environmental groups—had concerns that stewardship projects would just be disguised timber sales. But after the Forest Service reached out to stakeholders, including environmental groups, these concerns diminished over time, and stakeholders began to see the value of stewardship contracts in performing needed work. Several agency officials also credited the collaborative process with building community support for forest restoration projects and allowing the projects to go forward without protest. For example, according to the national stewardship contracting coordinator, comments from national forest officials across the country indicate that the use of stewardship contracting and the collaboration associated with its use have led to fewer appeals and less litigation at the project level. Several field unit officials reported similar impressions. On one forest in California, for example, 3 of the forest’s 22 stewardship projects have been appealed, but none has been litigated. Another field unit, similarly, reported having few or no appeals or litigation associated with its stewardship projects. Many stakeholders agreed. For example, one member of a project-monitoring group told us he had forestalled litigation by his environmental organization on several occasions because of the trust that the monitoring group had developed with the agency. In fact, some community groups have produced guides to help businesses understand the stewardship contracting process. Nevertheless, collaboration has its drawbacks, according to agency officials and others. 
One drawback is the time it takes to build and sustain a truly collaborative group. For example, members of the monitoring board for the Forest Service’s White Mountain project, in Arizona, said they worked together for years to develop mutual trust and respect and to build consensus. Similarly, members of BLM’s community forest project in Weaverville, California, said they worked for years before developing a level of trust that allowed the work to proceed without protest. Ongoing collaboration takes time as well. Officials of the Forest Service’s Southern Region noted that, in one state, a forest working with community groups was on its third iteration of an environmental assessment for a stewardship project, having redone the assessment to accommodate the group’s wishes. In several locations, officials raised the question of how much community collaboration should be expected, especially when projects or communities are small. Another drawback, according to Forest Service officials, is that collaboration can dilute the effectiveness of a project. Forest Service officials at several project locations noted that community involvement ended up watering down the impact of the stewardship projects because the Forest Service limited the amount of work it did at the request of stakeholders. On one hand, these officials said, the Forest Service was being responsive to community desires in altering its projects, but on the other hand, the projects may not have been as effective as they could have been because they were not appropriately designed. For example, on one project in southern Utah, Forest Service officials and the contractor thought that the compromise reached with an environmental group prevented the project from accomplishing its objective. The project was designed to protect the trees between a wilderness area and a popular campground by thinning them to discourage damage by pine beetles. 
After the environmental group appealed the project, the Forest Service agreed to remove fewer trees. As a result, according to these officials, the area will remain susceptible to pine beetles, which officials believe will kill all the trees. Officials added that although the Forest Service achieved some political goodwill by compromising with the environmental group, it accomplished little in terms of resource management. We have previously reported on the advantages and disadvantages of collaboration. From the outset of stewardship contracting, both agencies encountered resistance to using stewardship contracting, from both inside and outside the agencies. Within the agencies, unfamiliarity with stewardship contracts made some officials reluctant to use them. One Forest Service official said, for example, that he was familiar with timber contracts but not with all the nuances of acquisition contracts. In general, timber staff were not familiar with acquisition procedures and regulations, while acquisition staff were similarly unfamiliar with selling timber—making both types of staff reluctant to use this new tool. As one Forest Service official explained, the challenge is to get the timber staff and the acquisition staff working together—to bridge the gap between the two different cultures. Officials’ unfamiliarity with the use of the new tool was compounded by the lack of a centrally located source of expertise to which agency staff could turn for assistance or advice. Officials of both agencies remarked on the importance of sharing lessons learned among their respective units. Agency officials noted that the sharing of these lessons need not come in the form of guidance or direction from the Washington Office, however. 
For example, since BLM’s Oregon State Office was designated the “center of excellence” for stewardship contracting, the contracting officers in that office said they have learned many valuable lessons, and staff in other offices have begun turning to these contracting officers for advice and assistance on stewardship contracting issues. Turnover in stewardship coordinator positions, particularly at the national level, has also hampered understanding because institutional knowledge is especially important for helping field staff use new or complex programs or tools. At BLM, the constant turnover in the stewardship coordinator position at headquarters—with four different staff successively filling that position between July 2007 and July 2008—has made that office ill equipped to deal with questions from the field. Turnover in the Forest Service’s headquarters stewardship coordinator position also occasionally hampered field officials’ attempts to gain insights and assistance as they used the tool, albeit to a lesser degree. In other cases, some field units were reluctant to use stewardship contracts because the units were located in areas with high timber values and healthy markets and had sufficient K-V funds (which are generated through timber sales) to carry out needed service work. A Forest Service official in Wisconsin, for example, said that the timber economy has been stable in Wisconsin, giving his ranger district little incentive to use stewardship contracting. Outside the agencies, resistance has come primarily from contractors and local community officials. As with the agency officials, the learning curve for bidders not acquainted with stewardship contracting was steep. For example, preparing the required technical proposals describing how the contractor would perform the service work was intimidating and time consuming; one contractor likened it to preparing a résumé. 
Stewardship contracts also called for contractors to do work that they may not have done before, which made some contractors uncomfortable. Several officials told us that contractors were uncertain how to bid on some aspects of service work and, in some cases, did not have the set of skills or the equipment needed to perform it. In Wisconsin, for example, the contractor on a stewardship project to curb the spread of oak wilt said he was leery of bidding on the project at first, as he had no experience or equipment with which to pull stumps—a task crucial to the control of the disease. He ultimately bid on—and won—the contract after agreeing to subcontract the stump-pulling work to a road contractor with whom he had previously worked. Although many contractors overcame their reluctance and bid on projects, bidders’ lack of experience with subcontracting led to higher prices, according to a Forest Service official, because bidders felt greater risk in bidding on unfamiliar work and priced their bids accordingly. In many cases, county commissioners and other local officials were opposed to stewardship contracting projects because receipts from stewardship projects were not factored into the calculation of timber receipts (and other qualifying receipts) from which the counties received a share. For years, many counties across the country depended heavily on their share (typically 25 percent) of timber sale and other qualifying receipts, but these receipts dwindled substantially with the decline in federal timber sales in the late 1980s. The Secure Rural Schools and Community Self-Determination Act of 2000 was enacted, in part, to address the decline in federal payments by stabilizing payments to counties that depended on revenues from timber sales on Forest Service and certain BLM lands. 
Under the act, each county could continue to receive a portion of the revenues generated from these lands or could choose instead to receive annual payments equal to the average of the three highest annual revenue payments to the county from fiscal year 1986 through fiscal year 1999. Payments under the act ceased in December 2007, but the act was reauthorized in October 2008, with payments to continue through fiscal year 2011. During our review (before the October 2008 reauthorization), agency officials told us that, regardless of the option that counties had chosen under the Secure Rural Schools Act, county commissioners and other local officials had expressed concerns about stewardship contracting’s effect on county revenues—whether immediate or potential. That is, counties that had elected to continue receiving 25 percent of timber and other qualifying receipts were concerned because stewardship receipts were not included in the calculation of timber receipts and thus were perceived to have an immediate detrimental effect on county revenues. And counties that had elected to receive an average of prior-year receipts were also concerned because they thought that if the Secure Rural Schools Act were reauthorized, the formula for calculating payments to counties might be changed to include years in which stewardship projects were conducted in the counties. If so, the counties’ portion of receipts would be diminished, again in part because stewardship receipts would not be included in the total from which the counties’ share would be calculated. According to a Forest Service official in Montana, there was not a county commissioner in the state who was not concerned about the county’s share of receipts diminishing as a result of stewardship contracting. In the Great Lakes forests, similarly, some counties were in favor of the concept of stewardship contracting, according to Forest Service officials, but also wanted to maximize receipts from timber. 
In Wisconsin, for example, the Forest Service was planning a stewardship project that the community favored because of concerns about fire risk, but a Forest Service official noted that the county has not embraced stewardship contracting because of its effect on payments to the county. Forest Service officials were worried about how they were going to get the county’s support. Although the agencies are not legally required to obtain the approval of local officials to conduct stewardship projects, these concerns have made some agency staff more cautious about using the tool more widely. In Oregon, BLM officials noted that resistance from county officials has caused BLM to take a conservative approach to developing stewardship contracting in certain areas. For example, one BLM district will not approve a project unless the project has the written support of the county commissioners. Counties’ concerns about nine proposed stewardship projects in the district were conveyed to the district manager in a letter from the Association of O&C Counties. According to the letter, the association’s board of directors had decided to support the nine projects but expressed deep concern about stewardship projects in general because they generate no receipts to be shared with counties. Market uncertainties posed another set of challenges. Market prices for timber have been volatile, especially lately, with the slump in the housing market; this has made it very difficult to get bids on timber sales in some areas, and this difficulty has spilled over into stewardship projects as well. One stewardship contract on the Superior National Forest in Minnesota, for example, was offered three times, each time at lower timber prices, before it was awarded; another stewardship contract on the same forest was offered twice before being awarded. 
Forest Service officials expressed hope that once the housing market turns around, the value of timber will increase and make timber sales—and stewardship projects—more attractive to loggers.

Markets for other materials removed through stewardship contracting—primarily biomass and small-diameter trees—are uneven as well. In some areas, particularly near pulp or paper mills, the market for biomass and small-diameter wood is strong. This was the case in several eastern forests we visited and, according to agency officials, is also true of parts of California and Oregon. In other areas, however, facilities that can accept and process biomass are scarce, and markets for the material are correspondingly weak. In Montana, for example, officials said there is little market for the small-diameter wood and biomass generated from stewardship projects, so these materials are typically burned. Similarly, on many BLM lands, where the value of the wood is low and the distance to biomass markets long, BLM may find it more cost-effective to burn the wood than to use it. In such cases, the paucity of markets for small-diameter materials keeps the cost of the service work high, because contractors cannot defray their costs by selling the resulting materials. We have previously reported that the high costs of harvesting and transporting woody biomass, combined with uncertainties about supply, have hindered market development.

Finally, although long-term contracts offer certain benefits to the agencies, field units can find it challenging to provide sufficient funds to award and implement such contracts, particularly while funding other agency activities. Agency officials have touted long-term contracts as providing contractors with some assurance of a long-term supply of materials, thus encouraging investment in equipment or facilities that can economically use small-diameter wood and biomass—products that often have had little or no commercial value.
According to the Forest Service handbook, for example, “The use of multi-year contracts is encouraged to provide incentives to potential contractors to invest in long-term landscape improvement projects.” BLM, similarly, stated in a fiscal year 2007 stewardship contracting review that long-term contracts would encourage business development for biomass utilization. Without a long-term contract, an investor can find it difficult to secure the financing necessary to retool or build facilities that can process small-diameter wood or biomass. A contractor in Oregon noted, for example, that constant supply (i.e., through a long-term contract) is the key to encouraging investment in equipment. He explained that a single machine can cost more than $1 million, and a contractor will not invest—nor will a bank lend—such a large amount without a reasonable assurance that there will be sufficient ongoing demand for the machine.

Contractors still face risk when entering into a long-term contract with the government, however, because unforeseen budget shortfalls could prevent an agency from funding the contract. Without some additional protection against risk, contractors may be reluctant to make sizable investments in equipment or infrastructure for fear that the government will cancel the contract, thus making the investment unprofitable. Contractors may therefore decline to bid on long-term contracts unless the contracts include a cancellation ceiling—that is, an amount the government will pay the contractor if it cancels the contract. The FAR authorizes such ceilings to protect the contractor’s investment, with the amount of the ceiling to be agreed on by the government and the contractor before the contract is signed.
To ensure that this money is available if needed—and to prevent agencies from making financial commitments beyond the funding Congress has provided—the FAR generally requires that, should an agency include a cancellation ceiling in a contract, it obligate the entire amount of the ceiling at the inception of the contract. Depending on the size of the contractor’s potential investment, however, this ceiling could be millions of dollars—far exceeding the budget of an individual field unit. Rather than develop a contract requiring a cancellation ceiling beyond its available resources, a field unit would have little recourse but to settle for a contract with a much lower ceiling—one it could afford—thereby forgoing its hope of attracting significant investment in equipment or infrastructure. In fact, two Forest Service units have had to make this choice.

For the only long-term multiyear contract the agencies have had experience with to date, at the White Mountain stewardship project in Arizona, the cancellation ceiling had little to do with the contractor’s actual investment; instead, it simply represented an amount the Forest Service thought it could afford and the contractor agreed was reasonable. In 2004, the Forest Service hired a consulting firm to develop an estimate of the potential cancellation liability associated with a multiyear contract for the White Mountain project. The contracting officer for the project said he was shocked by the resulting estimates, which ranged from nearly $3 million to more than $7.5 million. Accordingly, the cancellation ceiling was set at $500,000. The contracting officer said that this lower amount did not reflect the liability estimates for any of the three scenarios the consulting firm examined, because none of those scenarios materialized.
Instead, the contractor used existing equipment, but the contracting officer told us that he believed it was appropriate to have a cancellation ceiling anyway, to compensate the contractor for his risk in case of cancellation. Similarly, for a 10-year contract the Forest Service is preparing to address fire risk on the Front Range of Colorado, a Rocky Mountain Region official told us that an amount in the range of $6 million to $10 million would be needed to attract large infrastructure such as a wood pellet plant—an amount far beyond the region’s funding capability. Instead, the contract announcement will include a cancellation ceiling of $500,000—the amount the region thought it could afford. According to this official, the inability to fund a substantial cancellation ceiling (e.g., $6 million to $10 million for construction of a pellet plant) changed the initial premise of the contract. That is, while the long-term contract was initially envisioned as a way to attract investment in industry or infrastructure to expand the use of material resulting from the project, it is now intended simply to treat the forests as cost-effectively as possible within the existing infrastructure. Other units were also contemplating the use of long-term contracts at the time of our review but were likewise concerned about the potential cancellation ceiling. For example, Forest Service officials in California were considering the use of a long-term multiyear contract but were concerned about how they would fund the cancellation ceiling.

Some contractors may be willing to bid on contracts without cancellation ceilings if there is no substantial investment involved. In July 2008, the Forest Service issued a 10-year contract for the Lakeview stewardship project in southern Oregon. This contract includes a minimum dollar guarantee over the 10-year performance period.
According to a Forest Service official, the Forest Service did not include a cancellation ceiling in the Lakeview contract because it was not seeking investment in infrastructure; that infrastructure is already in place. The Lakeview contract was issued in accordance with the terms of a November 2007 memorandum of understanding (MOU)—signed by the Forest Service, BLM, the State of Oregon, a county, and several cities and nongovernmental organizations—that provides a framework within which the signatory parties agree to work together to accomplish forest restoration projects, including fuel reduction projects. The MOU states that the Forest Service and BLM will each offer a minimum number of acres to be treated each year. BLM also plans to issue a long-term contract under the terms of the MOU, but the BLM contract will probably be an IDIQ contract, according to BLM officials.

Other agencies also have the authority, under the FAR and agency-specific regulations, to use multiyear contracts, although these contracts typically may not exceed 5 years. Although the FAR requires all agencies to obligate sufficient funds to cover any potential cancellation costs of a multiyear contract, additional requirements apply to certain agencies. For example, according to the Department of Defense’s acquisition regulation, if a contract contains a cancellation ceiling in excess of $100 million but the budget for that contract does not include proposed funding for the costs of contract cancellation up to that ceiling, then the head of the agency must provide written notification to the congressional defense committees and to the Office of Management and Budget before awarding the contract. This written notification must include, among other things, the extent to which cancellation costs are not included in the budget for the contract and an assessment of the financial risks of not budgeting for the potential costs of contract cancellation.
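The budgetary bind created by the up-front obligation requirement can be sketched in a few lines. This is an illustrative calculation only: the $2.5 million unit budget below is a hypothetical figure, while the $500,000 and $7.5 million amounts come from the White Mountain example discussed earlier.

```python
def funds_remaining_after_obligation(unit_budget, cancellation_ceiling):
    """Funds left for other work once the full cancellation ceiling is
    obligated at contract inception, as the FAR generally requires.
    Returns None if the unit cannot afford the ceiling at all."""
    if cancellation_ceiling > unit_budget:
        return None  # ceiling exceeds the unit's available funding
    return unit_budget - cancellation_ceiling

# Hypothetical field unit with $2.5 million available for the project area
budget = 2_500_000

# The $500,000 ceiling actually used at White Mountain is affordable...
print(funds_remaining_after_obligation(budget, 500_000))    # 2000000

# ...but the consultants' $7.5 million upper-bound estimate is not.
print(funds_remaining_after_obligation(budget, 7_500_000))  # None
```

The arithmetic makes the trade-off plain: every dollar obligated against the ceiling is a dollar unavailable for treatments or other programs that year, which is why units have gravitated toward ceilings they can afford rather than ceilings large enough to attract infrastructure investment.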
Experience to date with the White Mountain project highlights another potential challenge related to the use of long-term contracts: the difficulty of balancing the need to devote substantial resources to the long-term project, in order to furnish a sufficient and predictable supply of materials to the contractor, against the need to fund the unit’s other programs and activities—all within a limited budget. With this project, the forest committed to funding contractor treatments on at least 5,000 acres annually, in order to ensure the contractor a sufficient supply of material. Although per-acre costs were initially high, at the time the contract was developed, the forest expected that within a few years these costs would decrease as growth occurred in the small-diameter wood and biomass industry—allowing the contractor to defray a greater portion of his costs as he found markets for the material. Instead, for the 29 task orders issued between September 2004 and September 2007, these costs have not dropped significantly, as shown in figure 8. To live up to its commitment, the Forest Service has continued to fund 5,000 acres of treatment annually—but at a much greater cost than expected, a cost that has taken a substantial toll on the forest’s other programs. These other programs—such as range management, wildlife, hazardous fuels, and vegetation and watershed management—have suffered because the forest has directed considerable funding toward the White Mountain project, leaving little available to carry out other projects that need to be done. In fact, in 2005, the forest received instruction from the region to direct 100 percent of its hazardous fuels and timber dollars toward the White Mountain project, along with 50 percent of its vegetation and watershed management dollars and 40 percent of its wildlife dollars.
A forest budget official was particularly concerned about the effect on the range management program, whose funding she estimated was half of what it would have been had it grown at the same rate as it did for other forests in the region. Another forest official expressed concern about the fuel reduction work that was not being completed on the forest because the funds for that program were being monopolized by the White Mountain project. This official noted that the ranger districts not included in the White Mountain project area were at a particular disadvantage because they experienced no direct benefit from the project, whereas other districts had at least a portion of their lands (those that fell within the project area) being treated. This project has had a similar effect on other forests within the region, according to forest and regional officials. As the region has redirected funds toward the White Mountain project, these other forests have become resentful of the disproportionate amount of funding the project has received. The Apache-Sitgreaves forest has “reached a crossroads,” one official said, in terms of the White Mountain project’s viability; if the per-acre costs remain high, the forest will have to decide whether to continue funding the project, particularly in light of the effect it is having on other programs in the forest.

The agencies have numerous efforts under way to overcome some of the challenges they face, including conducting training courses and workshops to help overcome resistance to the use of an unfamiliar tool and supporting innovative efforts to find cost-effective uses for small-diameter materials. However, they do not have a strategy in place for the use of long-term contracts, in terms of where such contracts should be used and how they should be funded.
As a result, field offices must make decisions about whether to enter into long-term contracts without fully understanding the inherent risks and trade-offs, thereby potentially jeopardizing the stability of their other programs or, on the other hand, forgoing an opportunity to achieve cost-effective restoration.

To overcome resistance to stewardship contracting, the agencies have provided—jointly and individually—training for their staff and for contractors. For example, the two agencies jointly formed a cadre of officials that developed a training program covering both acquisition and timber contracting and addressing both IRTCs and IRSCs. According to a Forest Service official, the staff who work on acquisition contracts and the staff who work on timber sale contracts usually do not work together, so it was refreshing to have both types of staff involved in discussions and learning about stewardship contracting. The Forest Service has posted the training materials on its Web site. The Forest Service also plans to address difficulties between the timber and acquisition sides by establishing centers of excellence, since acquisition staff often have not been actively involved in stewardship contracts. Although an acquisition official said that the agency had planned to train some timber staff to act as acquisition contracting officers, with the requisite certification to obligate government funds, it has since abandoned that idea because of the expense and time it would take to provide the initial and recurrent training. The agency’s new plan, according to this official, is to establish several centers of excellence, staffed with individuals who can provide advice and assistance on acquisition issues and can act as contracting officers when necessary.
BLM already has in place such an arrangement: In 2007, the Washington Office designated the Oregon State Office as the agency’s center of excellence for contracting. In this capacity, the Oregon office handles contracts for all of BLM’s western states, including Alaska, that are valued at more than $100,000. The Oregon contracting officers put together stewardship contracting packages and review the final contracts; they also attend preproposal meetings and provide advice to contractors and BLM personnel. Nevertheless, the operational side of stewardship contracting (e.g., outreach, collaboration, design, and contract administration) is still performed by the field offices.

To help overcome resistance by contractors, agency officials said they provide training to contractors and work with them one on one to help them understand stewardship contracting. For example, procurement technical assistance centers are located in several states; these centers work with contractors (at no charge) to help them understand contract formats and requirements. The biggest hurdle, according to a BLM official, is getting contractors to feel comfortable with preparing technical proposals. Forest Service officials also said they try to incorporate into stewardship contracts service work that is familiar to contractors, to encourage them to gain experience with this new tool. Some officials noted that “starting small” with stewardship projects can be a strategy to improve contractors’ chances for success. An official of one forest, for example, said that keeping stewardship projects small—in acres and in value—has been a good way for both the forest and the contractors to gain experience. And on another forest, a contracting official said that bundling similar types of work in a contract has been a successful strategy. This strategy can benefit smaller companies that lack the equipment or financial resources to bid on a large project or a project that includes dissimilar tasks.
As for county commissioners’ concerns about the loss of county timber receipts, agency officials said they try to involve county officials in stewardship project planning efforts and talk to officials about the local benefits of stewardship projects. In Wisconsin, for example, Forest Service officials said they have talked with county officials about the benefits of stewardship contracting, such as the stable employment that stewardship contracting would bring to the counties despite the counties’ not receiving any portion of the stewardship receipts. In some cases, county officials have been willing to support stewardship contracting. In Minnesota, for example, a Forest Service official described county officials as “cautious but willing” to support stewardship contracting because of the potential for an increase in employment and the associated multiplier effect of people spending their salaries in the local area.

Also, the agencies are working to find cost-effective uses for small-diameter materials and biomass. To stimulate the market for small-diameter wood and biomass, and thereby reduce the amount that contractors must be paid to remove this material, the agencies are working with various contractors, entrepreneurs, universities, and other organizations to develop cost-effective and sometimes innovative uses for these materials. In some cases, agency officials have worked with stewardship contractors to find new markets for these materials, including nearby facilities that use wood chips for heat or power plants that can burn the materials—alone or mixed with coal. In another case, one national forest is working with an entrepreneur and a nearby university on the development of a process known as torrefaction, in which wood chips are slowly heated until the wood reaches a near-charcoal state, making it easier to store, transport, and use in certain applications.
In some cases, the agencies have provided grants to spur investment in research or development of innovative uses for biomass. The Forest Products Laboratory, for example, provided a $250,000 biomass grant to support the construction of a pressure treatment facility that will treat material processed from the White Mountain stewardship project in Arizona. This facility uses a chemical product to preserve material for exterior use.

Although numerous agency officials cited the potential of long-term multiyear stewardship contracts to help stimulate markets for wood products, neither agency has developed strategies for funding the associated cancellation ceiling—one of the two primary challenges associated with multiyear contracts. As noted earlier, the purpose of obligating the cancellation ceiling at the inception of the contract is to prevent agencies from making financial commitments beyond the funding Congress has provided. Yet rather than identifying strategies for funding these cancellation ceilings, several Forest Service officials told us they believe their agency should be exempt from having to obligate these funds at the outset of the contract. One official said that the cancellation ceiling is unnecessary altogether, because contracts already contain a standard clause allowing the contractor to be reimbursed if the government cancels the contract for its convenience. Other Forest Service officials disagreed that the standard clause offers sufficient protection, stating that the contractor needs the protection afforded by the cancellation ceiling—but added that requiring the agency to obligate the funds at the inception of the contract needlessly ties up agency funds that could be used to conduct additional work. These officials believe that the agency should not have to obligate the funds unless and until it cancels the contract.
The Forest Service has sought legislative relief from the up-front funding requirement, but no such legislation had been enacted as of October 2008. In the two instances in which a cancellation ceiling has been established for a long-term multiyear contract—for the White Mountain contract in Arizona and the proposed Front Range contract in Colorado—agency staff derived cancellation ceilings that reflected not the amount needed to attract significant investment in infrastructure, but rather the amount each unit believed it could afford. Without a national strategy on the use of long-term multiyear contracts, including a clear agency position on the need for appropriate cancellation ceilings and guidance on how to fund them, agency units may continue to establish such “affordable” levels—potentially driving away interested investors who are concerned that they do not have sufficient contractual protection—or may forgo the use of long-term contracts entirely.

Forest Service officials also held different opinions about whether an agency could stimulate infrastructure investment while avoiding the cancellation ceiling entirely by using options contracts, which do not require an up-front cancellation ceiling. A May 2007 opinion from the Department of Agriculture’s Office of General Counsel held that it is unnecessary to use multiyear contracts at all; the opinion suggested that the agency use options contracts instead. However, others in the agency said that options contracts do not afford contractors enough assurance of a long-term supply and, as such, do not assist them in obtaining loans for equipment or plant construction—a fundamental objective of using long-term multiyear contracts. Accordingly, options contracts may be best suited for areas with existing infrastructure (e.g., lumber mills or pulp and paper mills).
The other challenge associated with long-term contracts is maintaining, over the life of the project, funding sufficient to provide the contractor with a steady supply of material while at the same time funding other important activities. The implications of this challenge are also highlighted by the previously discussed experience of the White Mountain project. This is not to say the project has been unsuccessful; to the contrary, numerous agency officials as well as environmental and other stakeholders praised the quality of the work and the ecological results. Nevertheless, as a result of funding its commitment on the White Mountain project, the forest has struggled to adequately finance its other programs—a cautionary lesson for other agency units contemplating long-term stewardship contracts of their own. Certainly, other units may decide that the need for a particular long-term project is so great that they are willing to reduce funding for various other programs in order to pay for it. However, the agencies have not developed strategies for the use and funding of long-term multiyear contracts. Without such a strategy—based on a systematic analysis of lessons learned from long-term projects already undertaken and accompanied by guidance on selecting and implementing such projects—individual units may make choices about using long-term contracts without fully understanding their implications.

Halfway through its currently authorized 10-year life span, stewardship contracting has shown promise in helping the Forest Service and BLM accomplish their land management objectives. The agencies have taken advantage of the ability to trade goods for services to defray the cost of needed thinning and other service work, and they have worked closely with community groups to design projects that meet community needs. One element of stewardship contracting has not been widely explored, however: the authority to enter into 10-year contracts.
Although we frequently heard that this authority is essential in helping develop markets for timber, woody biomass, and other materials (by allowing the agencies to provide potential contractors and industry operators more certainty of supply), the suitability of long-term multiyear stewardship contracts to encourage investment in infrastructure has yet to be demonstrated. And the experience of the one forest that has implemented a long-term multiyear contract shows the potential pitfalls of this tool, as the forest has had to scale back its other programs in order to adequately fund the long-term project. The stakes are further raised by the need for a potentially sizable up-front obligation of funds to protect both the contractor’s and the government’s interests. Although two additional long-term multiyear contracts are in process and officials in several field units said they are contemplating the use of such contracts, the agencies have not developed national strategies that describe the role of long-term multiyear contracts and lay out the agencies’ positions on issues such as when such contracts are appropriate, how many should be in place, where they should be located, and how they will be funded. Without such a strategy, the agencies may fail to capitalize fully on the potential of stewardship contracting. For example, field units may have little choice but to settle for affordable cancellation ceilings, rather than ceilings sufficient to encourage substantial investment in industry or infrastructure to use the products from the stewardship project.

Regardless of the contracting mechanisms used or their duration, the agencies must maintain complete and reliable data if they are to effectively evaluate their use of stewardship contracting and provide details on its use to Congress and the public. Currently, the agencies’ data reside in myriad automated and manual systems that are often not linked.
Further, the agencies do not systematically capture nationwide information specific to agreements, nor does the Forest Service capture data on the number of contracts that are set aside for small businesses. Not only do such data deficiencies keep the agencies from assessing the true costs and accomplishments associated with stewardship contracting—especially in comparison with other tools that might achieve the same goals—but they also prevent Congress and the public from making informed judgments about the value of this land management tool, which will become increasingly critical as expiration of the stewardship authority draws closer and Congress evaluates its renewal.

We are making three recommendations to improve the agencies’ use of stewardship contracting. To ensure that the commitment of federal funds under long-term contracts is appropriately targeted, especially given the potential trade-offs involved, we recommend that the Secretaries of Agriculture and the Interior develop strategies for the use of long-term multiyear contracts that address, on a nationwide basis, the criteria agency officials can use to evaluate whether, in any given case, such a contract would be an appropriate mechanism to assist the agency in meeting its land management objectives. The strategy should address options for funding such contracts in a manner that considers trade-offs with respect to other land management activities and should be based on a systematic analysis of lessons learned from long-term projects already undertaken.
Additionally, to ensure ease of reporting and accurate accounting of activities undertaken through stewardship contracts and agreements, we recommend that the Secretaries, as part of their efforts to improve their stewardship contracting databases, implement improvements that will increase data interfaces among the various systems that contain stewardship data and will ensure accuracy and completeness in the data maintained. As part of these same efforts, the Secretaries should also implement improvements that will accurately account for products sold and services received under stewardship agreements.

We provided the Departments of Agriculture and the Interior with a draft of this report for review and comment. The Forest Service and the Department of the Interior generally agreed with the findings and recommendations in the report. The Forest Service’s and Interior’s written comments are reproduced in appendixes II and III, respectively.

We are sending copies of this report to interested congressional committees, the Secretaries of Agriculture and the Interior, the Chief of the Forest Service, the Director of the Bureau of Land Management, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Public Affairs and Congressional Relations may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

Our objectives were to determine (1) the extent to which, and for what purposes, federal agencies are using stewardship contracting; (2) what processes the agencies use in planning, implementing, and monitoring stewardship projects to manage resources; and (3) what successes and challenges the agencies have experienced in using stewardship contracting.
Our review was limited to the Forest Service and the Bureau of Land Management (BLM), the two agencies with stewardship contracting authority. To identify the extent and nature of the agencies’ use of stewardship contracting authority, we obtained data from Forest Service and BLM officials on the number of such projects, as well as other project information such as project acreage, timber volume and value, and the value of contracted services. We also obtained data on retained receipts from the agencies’ financial accounting systems. Neither the Forest Service nor BLM maintains comprehensive national data on all aspects of stewardship contracting; in some cases, reliable agency data were available for only certain years, and in other cases the agencies could provide only estimates. Further, because the agencies have adopted ways of collecting and reporting data independent of one another, equivalent data were not always available for both agencies during the same time period. We assessed the reliability of the data by conducting interviews with headquarters, regional, and state office officials who enter data into the systems, maintain them, and prepare reports using system data. We also obtained information on the standards, procedures, and internal controls in place for collecting, reporting, and verifying data, in order to assess their accuracy and completeness. In some cases, data are maintained in systems whose reliability GAO has previously assessed; in such cases, we relied on these earlier assessments in evaluating system reliability. For example, certain Forest Service data on acreage treated under stewardship contracts are reported through the National Fire Plan Operations and Reporting System; we reviewed a 2007 GAO report that assessed this system and determined that it is sufficiently reliable for our purposes. 
Similarly, both agencies track some financial data on stewardship contracting through their departmental accounting systems—the Department of Agriculture’s Foundation Financial Information System and BLM’s Federal Financial System. We reviewed previous work done by GAO and an independent auditor and determined that the data produced from these systems are sufficiently reliable for our purposes. We did not perform any electronic testing of data. Ultimately, we determined that the various sources of agency data provided sufficiently reliable data for certain years, as well as for broad trends across years, but did not provide data sufficiently reliable to allow comparisons between the agencies in all areas, as noted in the body of the report. Finally, we interviewed Forest Service and BLM officials about their progress in designing or modifying systems to improve their data and better track information associated with stewardship contracting projects.

Because neither agency maintains nationwide data that describe the objectives or characteristics of individual stewardship projects, we obtained information on project objectives and characteristics by interviewing headquarters and field officials and conducting site visits to projects in seven of the nine Forest Service regions and 7 of the 11 western states in which BLM has state offices. We reviewed a nonprobability sample of 26 Forest Service projects and 9 BLM projects, either by visiting them in person or by discussing them with agency officials. We selected projects to represent variety in geographic location, type of restoration work, size (in acreage as well as in value), and stage of implementation, as well as in the stewardship contracting authorities used. Table 2 shows the locations of the Forest Service projects we reviewed and the projects’ objectives; table 3 shows similar information for the BLM projects we reviewed.
To assess agency processes for planning, implementing, and monitoring stewardship projects, we reviewed national guidance issued by each agency, including guidance for such processes as conducting timber appraisals, advertising and awarding contracts, and establishing and maintaining monitoring processes. We also reviewed federal contracting requirements, including those contained in the Federal Acquisition Regulation. In addition, we interviewed national program officials with each agency, as well as officials of the Pinchot Institute for Conservation, the agencies’ contractor for the multiparty monitoring and evaluation effort, to obtain information and opinions on agency processes for conducting stewardship projects. During our site visits, we selectively reviewed projects’ contracting and financial files to obtain information on the planning, contracting, and monitoring processes each agency uses, and interviewed Forest Service and BLM project officials at each location, including regional stewardship coordinators, project managers, timber sale contracting officers, acquisition contracting officers, and others. At several sites we also met with the contractors performing the stewardship activities in order to obtain their perspectives on the projects, including the agency processes they observed for advertising, awarding, and overseeing the projects. And finally, at some locations we met with stakeholders, such as community groups, researchers, local citizens, and representatives of timber industry and environmental groups, in order to obtain their perspectives on the use of stewardship contracting. 
To identify the successes and challenges the agencies have experienced using stewardship contracting, we interviewed agency officials, contractors, and stakeholders at many projects we visited to obtain their views on the successes and challenges associated with stewardship contracting, including the factors they believe contributed to these successes and challenges, and the measures taken to overcome the challenges. We also reviewed selected project contracting and financial files and stakeholder documents to assess the extent to which projects offered examples of successes or challenges faced by the agency units in using stewardship contracting. Finally, we reviewed national program guidance and spoke with national program officials in each agency to identify actions the agencies had taken to overcome the challenges we or others had identified. We conducted this performance audit from August 2007 through October 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Steve Gaty, Assistant Director; Sandra Davis; and Pam Tumler made key contributions to this report. Mark Braza, Nancy Crothers, Carol Henn, Rich Johnson, Ty Mitchell, and Bill Woods also made important contributions to this report.
The Department of Agriculture's Forest Service and the Department of the Interior's Bureau of Land Management (BLM) have stewardship contracting authority, which allows the agencies to trade goods--such as timber--for services (e.g., thinning forests or rangelands) that the agencies would otherwise pay for with appropriated dollars, and to enter into stewardship contracts lasting up to 10 years. The authority is set to expire in 2013. GAO was asked to determine, among other things, (1) the extent to which the agencies are using stewardship contracting and (2) what successes and challenges the agencies have experienced in using it. In doing so, GAO assessed agency data, reviewed project files, and visited projects in numerous locations. From fiscal years 2003 through 2007, the Forest Service and BLM awarded a combined total of 535 stewardship contracts, with the number increasing each year--from 38 in fiscal year 2003 to 172 in fiscal year 2007. However, for certain aspects of stewardship contracting, such as the acres involved or the value of the services exchanged for goods, reliable data were not available for the full 5-year fiscal period because neither agency has had a comprehensive database of its stewardship contracting activity since 2003. The agencies did not begin to maintain nationwide stewardship data until recently, primarily because of difficulties in adapting their systems to account for all aspects of stewardship contracting. Further, these data are not complete, and reside in myriad systems, not all of which interface with one another. These deficiencies keep the agencies and Congress from accurately assessing the costs and value of stewardship contracting. The agencies credit stewardship contracting with allowing them to accomplish more work--by allowing them to trade goods for services, thereby extending their budgets for thinning and other services--and spurring collaboration with members of the community and environmental groups. 
But stewardship contracting has its challenges too, including some resistance to its use (e.g., by contractors unfamiliar with it) and a paucity of markets for the small trees typically removed in stewardship projects. Also, although agency officials view long-term multiyear contracts as crucial to market development, these contracts can involve financial challenges. These contracts are attractive because they offer contractors and industry operators some certainty of supply, enabling them to obtain loans for equipment or processing facilities, which can then spur demand for materials resulting from stewardship projects. But such contracts can require a substantial up-front obligation of funds--to protect the contractor's investment if the government later cancels the contract--that may exceed the budget of a field unit (e.g., a national forest). Also, funding the annual work specified in the contract can force a unit to scale back its other programs if the value of the timber removed is not sufficient to pay for that work. Yet neither agency has developed a strategy for using such contracts, a step that could help field units determine which projects are appropriate for these long-term contracts and how they would be funded.
From its inception, the process for selecting Medicare claims administration contractors was stipulated by Congress and differed from the process for awarding most other federal contracts in that, among other things, the Medicare contractors were not selected through a competitive process. Before Medicare was enacted in 1965, providers were concerned that the program would give the government too much control over health care. To increase providers’ acceptance of Medicare, Congress ensured that health insurers like Blue Cross and Blue Shield would play a key role in administering Medicare, as they already had experience as payers for health care services to physicians and hospitals. Medicare’s authorizing legislation required that the claims administration contracts be awarded to carriers and fiscal intermediaries—now referred to as legacy contractors. By law, CMS was required to select carriers from among health insurers or similar companies and to choose fiscal intermediaries from organizations that were first nominated by associations representing providers, without the application of competitive procedures. In addition, CMS could not terminate these contracts unless the contractors were first provided with an opportunity for a public hearing, whereas the contractors themselves, unlike other federal contractors, were permitted to terminate their contracts. The contractors were paid based on their allowable costs and generally did not have financial incentives that were aligned with quality performance. The MMA (Pub. L. No. 108-173, Title II, 117 Stat. 2066, 2167 (2003)) required CMS to use competitive procedures to select contractors and to recompete the contracts at least once every 5 years. CMS implemented the MMA contracting reform requirements by shifting claims administration tasks from 51 legacy contracts to new entities called Medicare Administrative Contractors (MACs).
Originally, CMS selected 15 MACs to process both Part A and B Medicare claims (known as A/B MACs) and 4 MACs to process durable medical equipment (DME) claims (known as DME MACs). CMS also selected 4 A/B MACs to process claims for home health care and hospice services. CMS began awarding the MAC contracts in 2006; however, bid protests and consolidation of some of the MAC jurisdictions delayed full operation of some of the MACs. By 2009, most of the legacy contracts had been transitioned to MACs, and by December 2013, CMS completed that transition. Under the FAR, agencies may generally select from two broad categories of contract types: fixed-price and cost-reimbursement. When implementing contracting reform, CMS chose to structure the MAC contracts as cost-plus-award-fee contracts, a type of cost-reimbursement contract. This type of contract allows CMS to provide a financial incentive—known as an award fee—to contractors if they achieve certain performance goals. In addition to reimbursement for allowable costs and a contract base fee (which is fixed at the inception of the contract), a MAC can earn the award fee, which is intended to incentivize superior performance. In 2010, we reviewed three MACs that had undergone award fee plan reviews and found that all three received a portion of the award fee for which they were eligible, but none received the full award fee. In the new contracting environment, MACs are responsible for a variety of claims administration functions, most of which were previously performed by the legacy contractors. MACs are responsible for processing and paying claims, handling the first level of appeal (often referred to as redeterminations of denied claims), and conducting medical review of claims—which is done before or after payment to ensure that payment is made only for services that meet all Medicare requirements for coverage, coding, and medical necessity.
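The cost-plus-award-fee arrangement described above reduces to simple arithmetic: allowable costs plus a fixed base fee plus whatever portion of the available award fee the contractor earns. The sketch below illustrates that structure; the linear scaling of the award fee by a performance score is an illustrative assumption, not CMS’s actual award-fee determination process, and all dollar figures are hypothetical.

```python
def mac_contract_payment(allowable_costs, base_fee, award_fee_pool, performance_score):
    """Illustrative total payment under a cost-plus-award-fee contract.

    allowable_costs:   reimbursed actual costs of performing the work
    base_fee:          fixed at contract inception
    award_fee_pool:    maximum award fee available for the period
    performance_score: fraction of the award fee earned (0.0 to 1.0);
                       tying the fee linearly to a single score is an
                       assumption made for illustration only.
    """
    earned_award_fee = award_fee_pool * performance_score
    return allowable_costs + base_fee + earned_award_fee

# A contractor meeting only part of its performance goals earns only
# part of the available award fee, consistent with GAO's 2010 finding
# that none of the three reviewed MACs received the full fee.
payment = mac_contract_payment(10_000_000, 200_000, 500_000, 0.7)
```

The incentive lies entirely in the last term: costs and base fee are recovered regardless, so only the award fee varies with performance.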
In addition, the MACs serve as providers’ primary contact with Medicare, including enrolling providers, conducting outreach and education, responding to inquiries, and auditing provider cost reports. CMS is moving toward further consolidation of MAC contracts in hopes that consolidation will further improve CMS’s procurement and administration processes. Since the original implementation, CMS chose to consolidate the 15 A/B MACs into 10 jurisdictions and is in the process of that consolidation. Currently, there are 5 consolidated A/B MACs that are fully operational, 7 A/B MACs that will eventually be consolidated into 5 jurisdictions, and 4 DME MACs that are fully operational. While CMS has relied on contractors to conduct claims administration functions since Medicare’s inception and has worked to consolidate these contracts, the agency has been granted additional statutory authority in recent years to award new types of contracts to conduct specialized tasks within the Medicare program. From 1965 to 1996, the legacy contractors were not only responsible for paying claims but also for tasks related to program integrity, such as working with law enforcement on cases of suspected fraud. However, the Health Insurance Portability and Accountability Act of 1996 established the Medicare Integrity Program, authorizing CMS to award separate contracts for program integrity activities such as investigating suspected fraud. These contracts are now handled by Zone Program Integrity Contractors and are generally aligned with the same jurisdictions as the MACs. In 2003, the MMA directed CMS to develop a demonstration project testing the use of contractors to conduct recovery audits in Medicare. These contractors, known as recovery auditors, conduct data analysis and review claims that have been paid to identify improper payments.
While other contractors that review claims are given a set amount of funding to conduct reviews, recovery auditors are paid contingency fees on claims they have identified as improper. To increase efforts to identify and recoup improper payments, Congress passed the Tax Relief and Health Care Act of 2006, which, among other things, required CMS to implement a permanent and national recovery audit contractor program. Unlike Medicare FFS, in which contractors process and pay claims, in Medicare Part C, CMS contracts with private organizations, known as Medicare Advantage organizations (MAOs), to offer MA health plans and provide covered health care services to enrolled beneficiaries. CMS pays MAOs a predetermined, fixed monthly payment for each Medicare beneficiary enrolled in one of the MAO’s health plans. MA plans must provide coverage for all services covered under Medicare FFS, except hospice care, and may also provide additional coverage not available under Medicare FFS. MA plans, with some exceptions, must generally allow all Medicare beneficiaries who reside within the service area in which the plan is offered to enroll in the plan. In addition, MA plans must meet all federal requirements for participation, including maintaining and monitoring a network of appropriate providers under contract; having benefit cost-sharing amounts that are actuarially equivalent to or lower than Medicare FFS cost-sharing amounts; and developing marketing materials that are consistent with federal guidelines. Medicare beneficiaries can generally elect to enroll in an MA plan if one is offered in their community. As of February 1, 2014, approximately 15.3 million beneficiaries—nearly 30 percent of all Medicare beneficiaries—were enrolled in MA plans, an all-time high. Those plans were offered through 571 contracts between MAOs and CMS.
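The capitated MA payment described above contrasts with FFS claims payment: CMS pays a fixed monthly amount per enrollee regardless of services used. A minimal sketch follows; the single flat adjustment factor is an illustrative stand-in for CMS’s much more detailed health-status and demographic adjustment methodology, and the figures are hypothetical.

```python
def monthly_mao_payment(base_rate, adjustment_factor, enrollees):
    """Illustrative aggregate monthly capitated payment to an MAO.

    base_rate:         predetermined fixed monthly amount per beneficiary
    adjustment_factor: stand-in for health-status/demographic adjusters
                       (the actual methodology applies per-beneficiary
                       risk factors, not one flat multiplier)
    enrollees:         number of beneficiaries enrolled in the MAO's plans
    """
    return base_rate * adjustment_factor * enrollees

# Payment is independent of the services each enrollee actually uses,
# which is what shifts utilization risk from Medicare to the MAO.
total = monthly_mao_payment(800.0, 1.05, 10_000)
```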
Substantial changes in the law regarding contract requirements and other parameters of the program—including payment rates—have contributed to fluctuations in the number of contracts and enrolled beneficiaries over the years. Under authority provided by the Social Security Amendments of 1972 (Pub. L. No. 92-603, § 226, 86 Stat. 1329, 1396 (1972)), CMS first began contracting with private plans to provide care to enrolled beneficiaries in 1973. The law required plans to provide benefits covered under Medicare FFS and to meet certain other standards. Plans were generally paid on the basis of their costs during these early years of contracting. By 1979, the government had 33 contracts with organizations offering private plans. A decade after the Social Security Amendments of 1972, the Tax Equity and Fiscal Responsibility Act of 1982 authorized the first full-risk plans that were paid a fixed monthly amount per beneficiary, set at 95 percent of the expected spending for beneficiaries in Medicare FFS. The payment for each beneficiary was adjusted by demographic and other factors. The accuracy of this adjustment was criticized by us and other researchers. The demographic payment adjusters resulted in excess payments to those plans that enrolled healthier beneficiaries with below-average health care costs. This, in part, encouraged continued growth in Medicare private plans, and by May 1, 1997, 4.6 million beneficiaries—nearly 12 percent of all Medicare beneficiaries—were enrolled in private plans under 280 contracts. At the same time, concerns were raised that basing plan payment rates on local Medicare FFS spending—the methodology used to geographically adjust payment rates—resulted in no or low plan participation in some areas, particularly rural areas. The Balanced Budget Act of 1997 (BBA) formally established private plans as Part C of Medicare and introduced additional changes to the program.
These changes to the program included new types of plans that could be offered, the standards applied to the contracts, beneficiary enrollment rules, and payment rules. In an effort to refine the payment methodology, the BBA required CMS to use health status measures to adjust payments to plans, added a payment methodology establishing a minimum amount or floor rate, and limited rate updates in higher payment counties; these and other refinements reduced some of the payment differences between high and low spending areas. Following these changes, and coinciding with broad-based dissatisfaction with managed care practices more generally, organizations offering Medicare private plans reversed what had been a rapid expansion in the mid-1990s and began a period of plan withdrawals and declining beneficiary enrollment. For example, from 1999 through 2003, the number of Medicare contracts with private plans fell from 309 to 154. During the same period, private plan enrollment fell from about 6.3 million to 4.6 million beneficiaries. Subsequent legislation providing new methods of adjusting payments to account for health status, among other things, did little to entice plans back into the program. Private plan participation in Medicare began to rebound after passage of the MMA. The law made the program more attractive to plans by establishing minimum payments of 100 percent of Medicare FFS spending and pegging the minimum increase to the Medicare national per capita growth rate, providing substantial annual increases over those authorized under the BBA. As of December 2009, enrollment had grown to about 10.9 million beneficiaries. The Patient Protection and Affordable Care Act (PPACA) included several changes to the MA program, such as bringing payments to plans closer to Medicare FFS and rewarding plans for quality. Since March 2010, enrollment in MA plans has grown from 11.0 million to 15.3 million—an increase of about 39 percent.
While contract requirements for MAOs and parameters of the program are largely derived from statute, CMS has responsibility to implement the program and ensure compliance with these requirements. The agency’s responsibilities include, among other things, making monthly payments to MA plans, implementing health status adjustments to the payments, establishing processes for enrolling and disenrolling beneficiaries, reviewing marketing materials, providing for independent review of coverage appeals, conducting audits, and enforcing compliance. The audits typically involve a combination of desk reviews of documents submitted by MA plans and, at CMS’s discretion, site visits. To ensure compliance, CMS may take a variety of enforcement actions, ranging from informal contacts offering technical assistance to civil money penalties or plan suspension for egregious or sustained noncompliance. Whereas MA offers beneficiaries an alternative way to access their Part A and B benefits, Part D is structured to provide benefits only through private organizations under contract to Medicare. Under the Part D program, which began providing benefits on January 1, 2006, CMS contracts with private organizations called plan sponsors. Part D plan sponsors offer outpatient prescription drug coverage either through stand-alone prescription drug plans for those in original FFS Medicare, or through MA prescription drug plans for beneficiaries enrolled in MA. Through the Part D contracts, plan sponsors offer prescription drug plans which may have different beneficiary cost-sharing arrangements (such as copayments and deductibles) and charge different monthly premiums. Medicare pays plan sponsors a monthly amount per enrollee independent of each enrollee’s drug use, thereby creating an incentive for the plan sponsor to manage spending.
Payments to prescription drug plan sponsors are adjusted according to the risk factors—including diagnoses and demographic factors—of beneficiaries enrolled in a sponsor’s plans. However, sponsors still have an incentive to control spending to ensure it remains below the adjusted monthly payments received from CMS and payments received from enrolled beneficiaries. Sponsors can lower drug spending by applying various utilization management restrictions to drugs on their formularies. The Part D program also relies on sponsors to generate prescription drug savings, in part, through their ability to negotiate price concessions, such as rebates and discounts, with entities such as drug manufacturers, pharmacy benefit managers, and pharmacies. Prior to 2011, enrollees exceeding an initial coverage limit were responsible for paying the full cost of covered drugs until they reached an out-of-pocket maximum. Beginning in 2011, PPACA established the Medicare Coverage Gap Discount Program to assist beneficiaries who do not receive Part D’s low-income subsidy with their drug costs when they reach the coverage gap. See GAO, Medicare Part D Coverage Gap: Discount Program Effects and Brand-Name Price Trends, GAO-12-914 (Washington, D.C.: Sept. 28, 2012). Most beneficiaries in stand-alone prescription drug plans in 2012 were enrolled in actuarially equivalent or enhanced benefit plans. While CMS contracts with plan sponsors to offer the Part D benefit, the agency has an oversight role. As with MA, CMS is responsible for ensuring that the payments it makes to plan sponsors are accurate. Given that final payments to plan sponsors are based, in part, on the price concessions that plan sponsors have negotiated, CMS is responsible for ensuring that data plan sponsors submit on price concessions are accurate. CMS is also responsible for ensuring that plan sponsors submit accurate information to the Medicare Plan Finder interactive website, which helps beneficiaries compare different plans and identify the plan that best meets their needs.
CMS oversees the complaints and grievances processes and may rely on complaints and grievances data to undertake compliance actions against specific plan sponsors. CMS also oversees Part D sponsors’ fraud and abuse programs, which include compliance plans that must include measures to detect, correct, and prevent fraud, waste, and abuse. See GAO, Medicare Part D Prescription Drug Coverage: Federal Oversight of Reported Price Concessions Data, GAO-08-1074R (Washington, D.C.: Sept. 30, 2008). Medicare actuaries have attributed lower-than-projected expenditures in Part D to a combination of factors, including lower-than-projected Part D enrollment, slower growth of drug prices in recent years, greater use of generic drugs, and higher-than-expected rebates from pharmaceutical manufacturers to the prescription drug plans. Chairman Pitts, Ranking Member Pallone, and Members of the Subcommittee, this completes our prepared statement. We would be pleased to respond to any questions that you may have at this time. If you have any questions about matters discussed in this testimony, please contact Kathleen M. King at (202) 512-7114 or [email protected] or James Cosgrove at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other key contributors to this report include Lori Achman, Sheila K. Avruch, George Bogart, Christine Brudevold, Christine Davis, Christie Enders, and Gregory Giusto. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since the enactment of Medicare in 1965, contractors have played a vital role in the administration of the program. The original FFS program was designed so that the federal government contracted with health insurers or similar private organizations experienced in handling physician and hospital claims to process and pay Medicare claims rather than having the federal government do so. CMS now also contracts with private organizations that provide covered services under the MA program and the Part D prescription drug program. This statement provides an overview of the manner in which CMS has contracted with private organizations to administer benefits in (1) original FFS Medicare, (2) MA, and (3) the Part D prescription drug program. It is based primarily on products that GAO has issued regarding CMS contracting with claims administration contractors to administer the FFS program, and with other private organizations as part of MA and the Part D prescription drug benefit programs. These products were issued from November 1989 through January 2014 using a variety of methodologies, including reviews of relevant laws, policies, and procedures; data analysis; and interviews with contractors, stakeholders, and CMS officials. We have supplemented information from our prior products with publicly-available data on Medicare private plan contracts and enrollment, CMS-issued guidance for Medicare private plans, and a review of relevant literature. GAO has made numerous recommendations to CMS in these previous products and is not making any new recommendations at this time. The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) reformed the way the Centers for Medicare & Medicaid Services (CMS), the agency that administers Medicare, contracts with claims administration contractors. 
From its inception, the process for selecting Medicare fee-for-service (FFS) claims administration contractors was stipulated by Congress and differed from the process for awarding most other federal contracts in that, among other things, the Medicare contracts were not awarded through a competitive process. The MMA repealed limitations on the types of contractors CMS could use and required that CMS use competitive procedures to select new contracting entities to process medical claims and provide incentives for contractors to provide quality services. CMS has implemented the MMA contracting reform requirements by shifting and consolidating all claims administration tasks to new entities called Medicare Administrative Contractors. CMS is currently in the process of further consolidating these contracts. The agency also uses other contractors to review claims to ensure payments are proper and to investigate potential fraud. CMS contracts with private organizations to administer benefits under Medicare Advantage (MA), but has an important administrative and oversight role. MA is the private plan alternative to FFS and differs from FFS in that CMS contracts with private entities, known as Medicare Advantage organizations (MAOs), to provide covered health care services to beneficiaries who enroll. MAOs are paid a predetermined monthly amount for each beneficiary enrolled in one of their health plans and must provide coverage for all FFS services (except hospice care), but may also provide additional coverage. The government first began contracting with private plans in 1973. Several laws since then have changed how the MAOs are paid and the types of plans that can participate. While contract requirements for MAOs and parameters of the program are largely derived from statute, CMS has responsibility to implement the program and ensure compliance with these requirements.
CMS also contracts with private organizations, called plan sponsors, to provide the outpatient prescription drug benefit under Part D. Through the Part D contracts, plan sponsors offer prescription drug plans which may have different beneficiary cost-sharing arrangements (such as copayments and deductibles) and charge different monthly premiums. The Part D program relies on sponsors to generate prescription drug savings through negotiating price concessions with entities such as drug manufacturers, pharmacy benefit managers, and pharmacies, and managing beneficiary use. While CMS contracts with plan sponsors to provide the Part D benefit, the agency has oversight responsibilities. For instance, CMS is responsible for making accurate payments to plan sponsors and ensuring the accuracy of information submitted by plan sponsors to the beneficiary-focused Medicare Plan Finder website. Medicare actuaries have attributed lower-than-projected expenditures in Part D to a combination of factors, including lower-than-projected Part D enrollment, slower growth of drug prices in recent years, greater use of generic drugs, and higher-than-expected rebates from pharmaceutical manufacturers to the prescription drug plans.
Inhalation therapy consists of drugs, including bronchodilators such as albuterol sulfate, taken through a nebulizer to alleviate severe respiratory problems. In the Medicare population, this therapy is primarily used to treat chronic obstructive pulmonary disease, which includes diseases such as asthma, emphysema, and chronic bronchitis. Once beneficiaries begin receiving inhalation therapy, they are likely to receive it for the remainder of their lives. Inhalation therapy drugs are covered by Medicare because the nebulizer, which is covered as DME, is only useful in conjunction with the drugs. Under the DME benefit, Medicare payment for nebulizers covers the cost to suppliers of purchasing the equipment, delivering it to the beneficiary, and ensuring that the beneficiary knows how to use and care for the equipment. Medicare regulations specify that DME suppliers must document that they or another qualified party provided the beneficiary with the necessary information and instructions on using the equipment, but suppliers do not have to provide that education themselves. DME suppliers receive no additional payment if they provide the patient education; however, physicians can bill Medicare if they or their staff provide the patient training. MMA changed Medicare’s payment method beginning in 2005 for most drugs covered under Part B, including inhalation therapy drugs, from one based on the average wholesale price (AWP) to one based primarily on the average sales price (ASP) plus 6 percent. This new payment method is expected to result in payment rates that are closer to drug acquisition costs. The change was in response to substantial Medicare overpayments for outpatient drugs. For example, in a 2001 report, we found that the widely available acquisition prices for the two most common inhalation therapy drugs were 15 and 22 percent of AWP, while payment was 95 percent of AWP.
Although most Medicare-covered outpatient drugs are provided in a physician’s office, inhalation therapy drugs are different. A physician prescribes the drugs, but beneficiaries receive the drugs from inhalation therapy drug suppliers, such as homecare companies and mail-order and retail pharmacies. The four largest suppliers are for-profit homecare companies that accounted for almost 41 percent of Medicare inhalation therapy payments in 2003. In addition to supplying the drugs, most companies also provide beneficiaries with a nebulizer and other related supplies. Under the AWP-based payment system, suppliers received drug payments that were substantially higher than their acquisition costs. Suppliers indicated that they used these excess payments to offer services that benefited both beneficiaries and their physicians, such as shipping the drugs overnight, making monthly phone calls to remind beneficiaries to refill their prescriptions, and operating 24-hour hotlines to respond to beneficiary questions. Several inhalation therapy suppliers and two physician organizations we spoke with indicated that suppliers also used excess payments to market their services to physicians to gain market share. Currently, Medicare pays a dispensing fee of $5 monthly per patient for inhalation therapy drugs. In August 2004, CMS published a proposed rule in which the agency noted that it believed a dispensing fee is appropriate to cover a supplier’s costs in delivering inhalation therapy drugs to patients, although it did not propose a specific dollar amount for 2005. CMS solicited comments on the services and costs associated with providing inhalation therapy drugs and an appropriate amount for such a dispensing fee. In addition, CMS proposed to allow suppliers to dispense a 90-day supply of drugs to Medicare beneficiaries, an increase from the current limit of a 30-day supply. A final rule is scheduled for publication in November 2004. 
We found that 2003 per unit acquisition costs for the three inhalation therapy drugs most frequently billed to Medicare varied widely among the 12 suppliers in our sample (see table 1). For ipratropium bromide, excluding the 3 suppliers with the highest costs and the 3 with the lowest costs, the remaining 6 suppliers in our sample had acquisition costs that ranged from $0.26 to $0.44. For albuterol sulfate, excluding the 3 suppliers with the highest costs and the 3 with the lowest costs, the remaining 6 suppliers had costs that ranged from $0.05 to $0.06. Although costs varied, they were not always lower for large suppliers. For example, the lowest acquisition cost for ipratropium bromide was obtained by one of the small suppliers, and the highest acquisition cost was obtained by one of the large suppliers. Because the three primary drugs used in inhalation therapy are available as generic drugs, purchasers may choose from more than one source to buy these drugs, potentially leading to greater competition and lower prices. Industry representatives we spoke with stated that, typically, inhalation therapy drug suppliers purchase drugs from wholesalers or distributors. We found that three of the four large suppliers in our sample purchased inhalation therapy drugs directly from manufacturers. For these companies, the large volume of drugs that they purchase may have allowed them to receive competitive prices negotiated directly with manufacturers, avoiding any price markups from wholesalers. The other large supplier purchased drugs from a mail-order pharmacy that is also an inhalation therapy drug supplier. Most of the small suppliers in our sample stated that they purchased their drugs from a wholesaler or distributor, and a few indicated they used group purchasing organizations to negotiate prices with manufacturers. 
Two small inhalation therapy suppliers stated they purchased their drugs from both manufacturers and distributors, one noting that it used different sources for different drugs. Under the previous AWP-based payment system, there was a considerable difference between the prices widely available to purchasers and Medicare’s payment for the drugs. Using the lowest and highest per unit acquisition costs reported by our suppliers for 2003, we estimated a difference of $119 to $129 per patient per month between what suppliers received in payment from Medicare at a rate of 95 percent of AWP and the acquisition costs they incurred for a typical monthly supply of albuterol sulfate. For ipratropium bromide, we estimated that the difference between the 2003 payment rate and lowest and highest acquisition costs was $162 to $187 per patient per month for a typical monthly supply. Because patients receiving inhalation therapy may receive more than one inhalation therapy drug, the excess payments to suppliers for many patients would have been larger. Among the suppliers in our sample, there was wide variation in the monthly costs associated with dispensing inhalation therapy drugs. We also found that larger suppliers did not necessarily have lower dispensing costs. Because Medicare payments for drugs greatly exceeded suppliers’ acquisition costs, suppliers indicated they were able to provide services that benefited both beneficiaries and their physicians. For example, while most suppliers stated that they shipped drugs overnight to beneficiaries on an as-needed basis, one supplier reported doing so routinely. We found that providing a 90-day supply of drugs could reduce suppliers’ costs; the cost for dispensing a 90-day supply was less than twice the cost for dispensing a 30-day supply. Total per patient monthly dispensing costs varied widely among the suppliers in our sample. 
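The per patient per month difference described above is the gap between Medicare's per unit payment and a supplier's per unit acquisition cost, multiplied by the number of billing units in a typical monthly supply. The sketch below illustrates the calculation; the payment rate, acquisition cost, and supply size shown are hypothetical placeholders for illustration, not the actual 2003 figures.

```python
def monthly_excess_payment(payment_per_unit, acquisition_cost_per_unit, units_per_month):
    # Difference between Medicare's payment (95 percent of AWP under the
    # pre-MMA system) and the supplier's acquisition cost for a typical
    # monthly supply of one drug.
    return (payment_per_unit - acquisition_cost_per_unit) * units_per_month

# Hypothetical illustration: a $0.48 per unit payment rate, a $0.26 per unit
# acquisition cost, and a 600-unit monthly supply give a $132 monthly excess.
excess = monthly_excess_payment(0.48, 0.26, 600)
```

Because a patient may receive more than one inhalation therapy drug, the total monthly excess for that patient would be the sum of this calculation across each drug dispensed.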
Using 2003 data obtained from 12 inhalation therapy suppliers, we estimated that the cost of dispensing inhalation therapy drugs ranged from $7 to $204 per patient per month. Excluding the 3 suppliers with the highest and the 3 with the lowest dispensing costs, the remaining 6 suppliers in our sample had estimated dispensing costs that ranged from $53 to $116 per patient per month. Large inhalation therapy drug suppliers did not necessarily realize economies for inhalation therapy drug dispensing costs; estimated per patient monthly costs ranged from $53 to $138 for large suppliers and from $7 to $204 for small suppliers. The estimated per patient monthly costs for each individual dispensing cost category varied widely across suppliers, with some suppliers incurring much higher costs than others (see table 2). Examples of substantial costs that suppliers incurred in dispensing inhalation therapy drugs include patient care services, such as pharmacy, packaging and shipping, personal delivery, and medication refill and compliance phone calls, as well as billing and collection costs and bad debt. The wide range of costs associated with dispensing inhalation therapy drugs is due in part to the variation in services offered by suppliers. Because of the difference between the acquisition prices of the drugs and Medicare’s payment for them, suppliers indicated that they were able to incur the costs associated with providing services that benefited both beneficiaries and their physicians. For example, 10 of 12 suppliers in our sample reported that they compounded at least some prescriptions, for which they may have incurred additional costs, including maintenance of a sterile compounding room and increased pharmacist labor. However, the 2 suppliers in our sample that did not compound drugs did not have the lowest pharmacy costs among all suppliers. 
All suppliers in our sample made phone calls to beneficiaries to ask them if they needed medication refills, to coordinate a refill delivery, and to check on the beneficiaries’ compliance with their prescribed drug regimens. Most suppliers made these calls on a monthly basis, but one reported that it did so twice a month. Several suppliers reported that they incurred substantial costs to ship drugs overnight to beneficiaries; most did so on an as-needed basis, although one supplier did so routinely. In addition, several suppliers maintained a 24-hour on-call service for patients to speak to a trained clinician or technician with questions or problems. Inhalation therapy suppliers we spoke with reported that one of their largest costs was the cost of respiratory therapists, who often provide initial patient education and are available as a clinical resource for medication refill and compliance phone calls. Respiratory therapist costs associated with teaching patients about the use and care of a nebulizer are covered as a patient education cost under Medicare’s payment for the equipment. Therefore, in our analysis we excluded respiratory therapist costs for patient education on the use of the nebulizer, but included respiratory therapist costs for medication refill and compliance phone calls. CMS has proposed to allow pharmacy suppliers to dispense Medicare beneficiaries a 90-day, rather than a 30-day, supply of inhalation therapy drugs. We determined that the cost to dispense a 90-day supply of drugs is less than twice the cost to dispense a 30-day supply of drugs (see table 3). This is because certain costs, such as pharmacy, shipping, and billing, are incurred only when the drugs are dispensed; therefore, less frequent dispensing would lower overall costs. 
For example, suppliers would bill Medicare only once for a 90-day supply of drugs, whereas they would have to bill Medicare three times over that same period if they were dispensing a 30-day supply to beneficiaries. Allowing for a 90-day supply of drugs could reduce both Medicare’s and suppliers’ costs because suppliers could dispense, ship, and bill for drugs less frequently and Medicare would process fewer claims. The inhalation therapy suppliers in our sample exhibited a wide range of drug acquisition costs. The suppliers’ costs of dispensing inhalation therapy drugs were quite variable as well. Higher dispensing costs incurred by some suppliers were covered by the excess payments for these drugs under the AWP-based payment system. Our analysis gives a range of the costs suppliers were incurring for dispensing inhalation therapy drugs, a starting point for determining a dispensing fee amount. The appropriate amount of a Medicare dispensing fee must take into account how excess payments for drugs affected dispensing costs. Some costs incurred by suppliers are necessary to dispense inhalation therapy drugs to Medicare beneficiaries, for example, maintaining a licensed pharmacy and billing Medicare. These necessary costs may no longer be covered when Medicare drug payments are closer to acquisition costs with the implementation of the ASP-based payment system. Other costs suppliers incurred may not be necessary to dispense the drugs. We recommend that the Administrator of CMS evaluate the costs of dispensing inhalation therapy drugs and modify the dispensing fee, if warranted, to ensure that the fee appropriately accounts for the costs necessary to dispense the drugs. In commenting on a draft of this report, CMS agreed with our recommendation. 
CMS noted the variation we found in inhalation therapy suppliers’ costs of dispensing these drugs to Medicare beneficiaries and stated it would carefully consider our analysis as it determines an appropriate dispensing fee for 2005. CMS stated that it would work with those concerned with inhalation therapy to understand the variability in dispensing costs. The agency also acknowledged the variation in the acquisition costs of inhalation therapy drugs. CMS noted our finding that acquisition costs were not necessarily related to the size of the supplier and stated it intends to further explore the factors influencing drug acquisition costs. CMS’s written comments appear in appendix II. We received oral comments on a draft of this report from the American Association for Homecare (AAHomecare), which represents homecare companies, including those that provide inhalation therapy drugs. The association agreed with our recommendation. AAHomecare noted that respiratory therapists provide services that are associated with dispensing inhalation therapy drugs, as well as with the use of nebulizers, and, therefore, the exclusion of all costs associated with respiratory therapists from our analysis was not appropriate. We have clarified the discussion of our methodology to indicate that we excluded respiratory therapist costs related to patient education on the use of the nebulizer but we included respiratory therapist costs related to the medication refill and compliance phone calls. AAHomecare also made technical comments, which we incorporated where appropriate. We are sending a copy of this report to the Administrator of CMS and appropriate congressional committees. We will also make copies available to others on request. The report is available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions, please call me at (202) 512-7119 or Nancy A. Edwards at (202) 512-3340. 
Other major contributors to this report include Beth Cameron Feldpush, Joanna L. Hiatt, and Andrea E. Richardson. In conducting this study, we analyzed data from 12 inhalation therapy suppliers. We interviewed officials from the Centers for Medicare & Medicaid Services (CMS), three durable medical equipment (DME) regional carriers, and the Department of Veterans Affairs (VA) to gather comparative information on how VA pays for inhalation therapy. We also interviewed representatives from the American Association for Respiratory Care; American Association for Homecare; American College of Chest Physicians; Emphysema Foundation for Our Right to Survive; and National Association for Medical Direction in Respiratory Care; and two manufacturers and a wholesaler of inhalation therapy drugs. These interviewees helped us identify 20 inhalation therapy suppliers that we interviewed. We conducted a site visit at an inhalation therapy pharmacy and DME supply branch location, and interviewed officials at these facilities. To obtain information on suppliers’ costs of purchasing and providing inhalation therapy drugs to Medicare beneficiaries, we asked the 20 inhalation therapy suppliers we interviewed to report cost data to us on worksheets we provided. We analyzed 2003 cost information from 12 of these suppliers. We assessed the reliability of the cost data in several ways. For publicly traded companies, we compared certain submitted data, such as net revenue and income tax, to data reported in their annual reports filed with the Securities and Exchange Commission. In addition, we calculated the average percentage of total drug acquisition and dispensing costs accounted for by certain cost factors and compared our findings to a similar 2003 industry study. We also compared each supplier’s reported data to statements they made during our interviews. We collected data on personnel costs by service (pharmacy) on one worksheet, and by type of personnel (pharmacist) on another. 
For each supplier, we compared total reported personnel costs on each of these worksheets. Using 2003 Medicare DME claims, we calculated each supplier’s total Medicare inhalation therapy revenue and compared it to the total they reported on the worksheets. Although we initially received data from 13 suppliers, we excluded the data of one small, retail pharmacy supplier, as we considered its data unreliable. This pharmacy did not complete one of the personnel worksheets, and, therefore, we could not compare and verify its reported personnel costs. This pharmacy also reported drug acquisition costs that were inconsistent with other suppliers’ acquisition costs, in some cases over 25 times higher. We determined that the data from the remaining suppliers were reliable for our purposes. Our sample of 12 suppliers represents national, regional, and local homecare and mail-order pharmacies. All suppliers have other service lines in addition to inhalation therapy, such as the provision of DME, infusion drugs, and oxygen. These 12 suppliers accounted for more than 42 percent of 2003 Medicare inhalation therapy payments. Although these suppliers represent companies with a wide range of service volumes and geographic locations, they are not a statistically representative sample of all inhalation therapy suppliers. In our analysis, we excluded certain costs. We excluded sales and marketing costs, as they are not allowed by Medicare, as well as “other” costs that a supplier did not specifically describe. We excluded suppliers’ costs for patient education on the use of the nebulizer because they are covered under Medicare’s payment for the equipment. To analyze suppliers’ costs of purchasing inhalation therapy drugs, we divided total 2003 acquisition costs (net of rebates and discounts) for each drug by the total number of billing units to obtain a per unit acquisition cost for each drug for each supplier. 
We analyzed costs for the 4 largest suppliers, each of which had payments accounting for at least 3 percent of all Medicare inhalation therapy payments in 2003, and all other, or small, suppliers. To identify costs associated with dispensing and delivering inhalation therapy drugs, we analyzed 2003 costs associated with dispensing and delivering these drugs for each of the 12 suppliers. We determined the portion of inhalation therapy costs related to drugs using the percent of inhalation therapy revenue accounted for by inhalation therapy drug revenue. For pharmacy and medication refill and compliance phone calls, we used 100 percent of inhalation therapy costs, as these costs are related only to providing the drugs. For each supplier, we divided inhalation therapy drug dispensing costs by the number of reported inhalation therapy patient-months to determine per patient monthly drug dispensing costs. We also determined per patient drug dispensing costs with 90-day delivery by including only once the costs that would be incurred one time per dispensing and by tripling all other costs. We included pharmacy, packaging and shipping, delivery, medication refill and compliance phone calls, other patient care costs, and billing and collection costs only once in this analysis. We calculated the difference between the 2003 Medicare payment rates and the lowest and highest acquisition costs for albuterol sulfate and ipratropium bromide reported by our suppliers by multiplying both the payment rates and acquisition costs by the number of milligrams in the typical monthly supply of albuterol sulfate or ipratropium bromide and subtracting the cost from the payment. We conducted our work from May through October 2004 in accordance with generally accepted government auditing standards. 
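The unit-cost, per patient monthly dispensing cost, and 90-day dispensing calculations described in our methodology can be sketched as follows. The dollar amounts in the example are hypothetical, for illustration only, and are not figures from the suppliers in our sample.

```python
def per_unit_cost(total_acquisition_cost, billing_units):
    # Total 2003 acquisition cost (net of rebates and discounts)
    # divided by total billing units, per drug, per supplier.
    return total_acquisition_cost / billing_units

def monthly_dispensing_cost(total_dispensing_cost, patient_months):
    # Inhalation therapy drug dispensing costs divided by reported
    # inhalation therapy patient-months.
    return total_dispensing_cost / patient_months

def ninety_day_dispensing_cost(per_dispensing_costs, other_monthly_costs):
    # Costs incurred once per dispensing event (pharmacy, packaging and
    # shipping, delivery, refill and compliance calls, other patient care,
    # and billing and collection) are counted once; all other monthly
    # costs are tripled to cover the 90-day period.
    return per_dispensing_costs + 3 * other_monthly_costs

# Hypothetical figures, for illustration only:
per_disp = 40.0    # incurred each time drugs are dispensed
other = 10.0       # incurred monthly regardless of dispensing frequency
cost_30 = per_disp + other                              # 30-day supply: $50
cost_90 = ninety_day_dispensing_cost(per_disp, other)   # 90-day supply: $70
# $70 is less than twice $50, consistent with our finding that dispensing
# a 90-day supply costs less than twice dispensing a 30-day supply.
```

The design of the 90-day calculation reflects that certain costs, such as pharmacy, shipping, and billing, are triggered only by a dispensing event, so dispensing less frequently lowers overall costs.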
The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) revised the payment formula for most of the outpatient drugs, including inhalation therapy drugs, covered under Medicare part B. Under the revised formula, effective 2005, Medicare's payment is intended to be closer to acquisition costs. The Centers for Medicare & Medicaid Services (CMS), the agency that administers Medicare, also pays suppliers of inhalation therapy drugs a $5 per patient per month dispensing fee. Suppliers have raised concerns that once drug payments are closer to acquisition costs, they will no longer be able to use overpayments on drugs to subsidize dispensing costs, which they state are higher than $5. As directed by MMA, GAO (1) examined suppliers' acquisition costs of inhalation therapy drugs and (2) identified costs to suppliers of dispensing inhalation therapy drugs to Medicare beneficiaries. Using cost data obtained from 12 inhalation therapy suppliers that accounted for more than 42 percent of 2003 Medicare inhalation therapy payments, GAO found that 2003 acquisition costs for the three inhalation therapy drugs representing approximately 98 percent of Medicare inhalation therapy drug expenditures varied widely. For example, per unit acquisition costs for ipratropium bromide, the inhalation therapy drug with the highest Medicare expenditures, ranged from $0.23 to $0.64. Although costs varied, they were not always lower for the 4 largest suppliers. The lowest acquisition cost for ipratropium bromide was obtained by one of the small suppliers, and the highest by one of the large suppliers. GAO estimated that the 2003 Medicare payment rate per patient per month was between $119 and $129 higher than suppliers' acquisition costs for a typical monthly supply of albuterol sulfate and between $162 and $187 higher for a typical monthly supply of ipratropium bromide. 
GAO estimated 2003 per patient monthly dispensing costs of $7 to $204 for the 12 inhalation therapy suppliers, which included patient care costs, such as pharmacy and shipping, and administrative and overhead costs, such as billing. Large suppliers did not necessarily have lower dispensing costs. Because Medicare payments for drugs have been much higher than suppliers' acquisition costs, suppliers indicated they were able to provide services that benefited both beneficiaries and their physicians, a fact that raises questions about the services necessary to dispense inhalation therapy drugs. For example, several suppliers reported that they incur substantial costs to ship drugs overnight to beneficiaries; most did so on an as-needed basis, although one did so routinely. All suppliers in GAO's sample made phone calls to beneficiaries to ask them if they needed medication refills, to coordinate a refill delivery, and to check on the beneficiaries' compliance with their prescribed drug regimens. Most suppliers made these calls on a monthly basis, but one reported that it did so twice a month.
As of July 2000, about 6.2 million people—or approximately 16 percent of Medicare's 39 million beneficiaries—were enrolled in Medicare+Choice plans. These plans receive a fixed monthly payment for each beneficiary, regardless of what an individual enrollee's care actually costs. Higher costs reduce a plan's profits or result in losses, while lower costs can enable it to offer additional benefits that help it to retain existing enrollees and attract new enrollees. Because managed care plans have a financial incentive to provide care efficiently, policymakers have long looked to them to curb unnecessary spending and produce savings for Medicare. Among BBA's major reforms to contain Medicare spending was the creation of Medicare+Choice, which was also designed to increase the plan options available to Medicare beneficiaries. Before the BBA, numerous studies by us, the Physician Payment Review Commission—which has been incorporated into the Medicare Payment Advisory Commission—HCFA, and others demonstrated that the Medicare program spent hundreds of millions more on beneficiaries enrolled in health plans than it would have spent if the same individuals had remained in traditional FFS Medicare. This occurred because Medicare payments were based on the estimated cost of FFS beneficiaries with average health and were not adequately adjusted to reflect the fact that plans tended to enroll beneficiaries in better-than-average health who had lower health care costs—a phenomenon known as favorable selection. Before 1998, base payment rates to plans in each county were set at 95 percent of the estimated FFS cost of the average beneficiary. The wide variation in local FFS expenditures, caused by local differences in both the prices of medical services and in beneficiaries' use of services, led to corresponding variation in these base rates. This variation may have accounted for some of the unevenness in plan availability across the country. 
Other factors, such as the higher concentration of Medicare beneficiaries, may have prompted plans to serve primarily urban areas. Beneficiaries in most rural areas lacked access to plans. Beginning in 1998, the BBA substantially changed the method used to set plan payment rates. The new method involves paying the highest of three alternative rates: a minimum amount, or “floor”; a minimum increase over the previous year's payment rate; or a blend of historical FFS spending in a county and national average costs adjusted for local price levels. Some of the new payment provisions were designed to reduce excess payments, while others were designed for different purposes—such as increasing plan participation in geographic areas that had low payment rates. The BBA aims to reduce the excess in Medicare's health plan payments primarily by holding down per capita payment increases for 5 years and by mandating a new health-based risk adjustment system. In January 2000, HCFA implemented a method for adjusting plan payments based on beneficiary health status, as required by the BBA. The new method, to be phased in over time, will pay plans more for serving Medicare beneficiaries with serious health problems and less for serving relatively healthy ones. The BBA also contains provisions to gradually remove graduate medical education (GME) payments from plan payments and provide for teaching hospitals to receive these payments directly from Medicare. Because GME spending is concentrated in high-payment-rate counties, its removal is expected to slow payment rate growth more in those areas. Another BBA objective is to reduce the geographic disparity in payment rates. A methodological approach known as “blending” will, over time, move all rates closer to the national average by providing for larger payment increases in low-rate counties and smaller payment increases in high-rate counties. 
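The rate-setting method described above, in which a county's plan payment is the highest of the floor, the minimum increase, and the blend, can be sketched as follows. The dollar amounts, update percentage, and blend weight shown are hypothetical placeholders for illustration, not the statutory values.

```python
def county_payment_rate(floor_rate, prior_year_rate, minimum_update,
                        local_ffs_rate, national_rate, blend_weight):
    # BBA method: pay the highest of (1) the floor, (2) a minimum
    # increase over the prior year's rate, and (3) a blend of local
    # FFS spending and a price-adjusted national average.
    minimum_increase = prior_year_rate * (1 + minimum_update)
    blended = blend_weight * local_ffs_rate + (1 - blend_weight) * national_rate
    return max(floor_rate, minimum_increase, blended)

# Hypothetical low-rate rural county: the floor is the highest of the three.
rate = county_payment_rate(floor_rate=380.0, prior_year_rate=350.0,
                           minimum_update=0.02, local_ffs_rate=320.0,
                           national_rate=420.0, blend_weight=0.5)
```

In a county with low historical FFS spending, the floor or the blend tends to bind, raising rates toward the national average; in a high-rate county, the minimum increase tends to bind, slowing rate growth.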
In addition, the BBA established a minimum payment rate, known as a “floor,” to encourage plans to offer services in areas that historically had low payment rates and few participating plans—primarily rural counties. The BBA also eliminated the requirement that no more than 50 percent of a plan's enrollment may consist of Medicare and Medicaid beneficiaries. This means that Medicare plans can now serve areas without first building a commercial base. In 1999, 45 of the 346 plans that participated in 1998 terminated their Medicare contracts and 54 others reduced the number of counties they served. These withdrawal decisions affected about 407,000 enrollees (7 percent of the managed care population) who had to choose a new plan (if one was available in their county) or switch to FFS. About 61,000 of these enrollees, or 1 percent of the total Medicare managed care population, lived in counties in which no other plan was offered. Even if another plan was available, the approximately 450 beneficiaries affected by the withdrawals who had end-stage renal disease (ESRD) had to return to FFS. Medicare prohibits beneficiaries with ESRD from joining a health plan, although they may stay in one if they develop the disease while enrolled. Plan withdrawals can be disruptive and costly for affected beneficiaries. Although many affected beneficiaries can enroll in another plan, this option may require them to switch health care providers and accept different benefit coverage. Those who return to FFS may be able to retain their providers, but typically face out-of-pocket costs that are higher than they incurred as managed care enrollees. For example, most plan enrollees receive some coverage for outpatient prescription drugs, a benefit not offered in the FFS program. Although the BBA guarantees beneficiaries affected by plan withdrawals the right to purchase certain supplemental insurance policies (known as Medigap), none of the guaranteed policies cover prescription drugs. 
Officials from organizations representing plans reported that the BBA changes to the payment rates and the increased administrative burden of the new regulations were largely responsible for the plan withdrawals. According to the officials, Medicare payment rate increases did not keep pace with plans' costs or medical inflation. Our analysis indicated that a combination of market factors may have influenced plans' participation decisions. Plans more frequently withdrew from counties they had entered more recently, where they had attracted fewer enrollees, or where they faced larger competitors. Some plans indicated that they withdrew from areas where they were unsuccessful in establishing sufficient provider networks. The effect of Medicare's payment rates on withdrawals was much less obvious. For example, about 90 percent of high-payment-rate counties experienced a plan withdrawal compared with only 34 percent of low-payment counties. Taken as a whole, these findings suggested that a portion of the withdrawals may have been the result of plans that were less able to compete effectively in certain areas. In November 1999, the Congress passed the Balanced Budget Refinement Act of 1999 (BBRA). The BBRA contains provisions designed to encourage plan participation in Medicare+Choice. Among other changes, the BBRA provides a new entry bonus to plans that begin serving currently unserved areas. It also increases plans' flexibility to vary benefits within a geographic area and reduces some administrative requirements. In addition, the act slows the phase-in of the new risk adjustment methodology, reducing the short-term effect of the new methodology on plan payments. The BBRA also reduces the length of time that a plan has to wait to reenter the program after terminating its Medicare contract. The effect of these provisions on future plan participation is uncertain. 
In 2000, 41 of 309 participating plans terminated their Medicare+Choice contracts and another 58 plans reduced the number of counties they serve. This pattern will continue in 2001, when 65 of 261 plans currently participating in Medicare+Choice have announced they will terminate and another 53 plans will change their service areas. Combined, these plan withdrawals directly affect about 1.3 million Medicare+Choice enrollees. The 2001 withdrawals affect a much larger percentage of enrollees, approximately 15 percent, compared with the 2000 withdrawals that affected about 5 percent of all enrollees. All affected enrollees have to choose a new plan (if a plan accepting new enrollees is available in their county) or switch to FFS. By 2001, almost 75 percent of the counties that had a Medicare+Choice plan in 1999 will have been affected. About 238,000, or approximately 19 percent, of the affected enrollees live in counties in which no other managed care plan is being offered. Some of these beneficiaries may have the option of enrolling in a new private FFS plan, but the remainder will have no alternative to the traditional FFS program. The 1,940 beneficiaries in withdrawing plans who have ESRD must return to FFS. Plan withdrawals in both years disproportionately affect beneficiaries living in small urban, fringe, and rural counties. In 2000, approximately 65 percent of the 328,000 beneficiaries affected by the withdrawals lived in one of these types of counties even though these areas accounted for less than 33 percent of Medicare's managed care enrollees. (See fig. 1.) In contrast, the effects of the 2001 withdrawals will be more widespread and more representative of the distribution of Medicare+Choice enrollees. In both years, beneficiaries living in less densely populated areas were also likelier to be left only with the FFS alternative compared to affected beneficiaries in major urban areas. (See table 1.) 
A small number of plans accounted for a substantial portion of the affected enrollees in both years—the 10 largest withdrawing plans accounting for 45 percent in 2000 and 37 percent in 2001. (See tables 2 and 3.) Whereas the largest plans that withdrew in 2000 were concentrated in small urban, fringe, and rural counties, the largest withdrawing plans in 2001 are more uniformly distributed among these and major urban areas. Also, the withdrawing plans in 2001 tend to have significantly larger enrollments than the withdrawing plans in 2000. Although some plans continue to submit applications to enter the Medicare+Choice program or expand their service areas, the volume of applications has decreased from 30 in 1999 to 10 in 2000. HCFA has already approved many of the applications submitted since July 1998, including one for a private FFS plan called Sterling Option I that initially will serve 1,221 counties in 17 states. The new plan's service area encompasses 940 counties, including many rural counties, previously not served by a Medicare+Choice plan. Since the initial offering, Sterling has added 8 more states to its service area. (See fig. 2.) Beneficiaries who enroll in Sterling will pay a $55 monthly premium (in addition to the Medicare part B premium) in exchange for reduced out-of-pocket costs for many services and extended coverage for hospitalizations, among other benefits. However, Sterling Option I does not offer prescription drug coverage. Plan participation in Medicare managed care increased rapidly after 1993, peaked in 1998, and began declining in 1999. This experience is not unique to Medicare and, in fact, closely tracks plan participation in the Federal Employees Health Benefits Program (FEHBP), another large program offering multiple health plan choices. 
The withdrawals in 2000 followed a pattern that is similar to the pattern of withdrawals in FEHBP, as well as the pattern we found in our prior analysis of the 1999 Medicare plan withdrawals. Nearly all of the plans that terminated their Medicare contracts for 2000 or reduced their service areas were relatively new entrants in their respective markets, had attracted few beneficiaries, or had only a small share of the local Medicare managed care market. The plan withdrawals for 2001 deviate somewhat from this pattern in that some older, more established plans are terminating. However, the service area reductions in 2000 and 2001 are consistent with the 1999 pattern of withdrawals. In both years, other factors—such as plans' inability to establish sufficient provider networks—are often evident. Between 1993 and 1998, the Medicare managed care program grew rapidly and the number of plans more than tripled—from about 110 plans to 350 plans. Since 1998, however, 151 plans have terminated their Medicare contracts or announced that they will, and few new plans have joined the program. Despite the drop in plan participation, enrollment has continued to increase—although at a slower pace—with the result that the total number of Medicare managed care enrollees has remained approximately the same or even increased slightly over the past 2 years. However, the substantial decline in plan participation next year may cause total enrollment to fall. FEHBP experienced a similar rapid rise in the number of participating plans followed by a decline. (See fig. 3.) Between 1994 and 1997, the number of plans participating in FEHBP increased from 369 to 470. Since then, the number of FEHBP plans has declined steadily and may fall to approximately 240 next year. This roughly 50 percent decline in the number of FEHBP plans is similar to the approximately 57 percent decline experienced in Medicare over the same period.
However, the percentage of FEHBP enrollees affected is substantially smaller than the percentage of Medicare+Choice enrollees affected. In 2001, for example, FEHBP plan withdrawals are expected to affect about 1 percent of enrollees, compared to Medicare+Choice withdrawals affecting 15 percent of enrollees. At the same time new plans were joining the Medicare program, many existing plans expanded their geographic service areas. Some plans entered previously unserved rural counties while others entered urban counties with one or more existing Medicare plans. As a result, the percentage of rural beneficiaries with access to Medicare managed care increased from about 10 percent in 1993 to over 31 percent in 1998. Because of recent plan withdrawals, however, the percentage of beneficiaries in rural areas with access to a Medicare managed care plan has fallen to about 21 percent in 2000. Urban beneficiaries, nearly all of whom already had access to at least one plan in 1993, had a wider choice of plan options. In recent years, however, even large urban areas have seen a decline in plan participation. The percentage of beneficiaries living in large urban areas with access to at least one plan has declined from 99 percent in 1999 to 97 percent in 2000 and is expected to fall again in 2001. The vast majority of Medicare+Choice plans that terminated their Medicare contracts in 2000, as opposed to reducing the number of counties they served, were recent entrants into urban areas that already had substantial plan participation. Many terminating plans had few beneficiaries or a relatively small share of the local Medicare managed care enrollment. These factors are the same ones that were associated with the 1999 withdrawals. In 2000, 38 of the 41 terminating plans were either recent entrants, had attracted fewer than 200 enrollees, or had less than a 15 percent share of the local Medicare plan market in each of the counties they served. (See table 4.)
Plans that terminated their participation in FEHBP had similar characteristics: 42 percent of the terminating plans had fewer than 300 enrollees and many of those were recent entrants. The pattern of plan withdrawals is different in 2001 in that some older, larger, and more established plans are also terminating their Medicare contracts. For example, almost 43 percent of terminating plans entered the market before 1996 and 29 percent had total plan enrollments that exceeded 10,000 enrollees. Although the patterns of contract terminations in 2000 and 2001 appear to be somewhat different, the patterns of service area reductions in the 2 years are similar. In both years, plans that withdrew from only a portion of the counties they served tended to pull out of counties that they had more recently entered or where they had relatively low enrollment. In the majority of cases—92 percent in 2000 and 79 percent in 2001—plans withdrew from counties where they had recently entered, where they enrolled fewer than 200 beneficiaries, or where they enrolled fewer than 15 percent of the Medicare managed care enrollees. This pattern was more pronounced in 2000 than in 2001, but the 2001 service area reductions still follow the same general trend. In some cases, plans consolidated into one or more core areas where they were most strongly established. Service area reductions have been more concentrated in rural areas. Even though the floor payment rates enacted in the BBA make payments to plans considerably higher than FFS costs in many rural counties, the challenge of providing managed care in rural areas may be a significant contributing factor. The sparseness of both beneficiaries and providers may present difficulties for plans.
Without sufficient beneficiary populations, plans say they cannot enroll enough individuals to spread risk and cover fixed operating costs. In addition, plans may have difficulty obtaining discounts and negotiating contracts with physicians and hospitals when an area has few competing providers. Humana Health Plan of Texas, the plan with the single largest number of affected enrollees in 2000, illustrates the consolidation behavior exhibited by a number of the plans that reduced their service areas. Humana started serving Medicare beneficiaries in areas around Corpus Christi, Texas, in 1986 and added San Antonio in 1988. It more recently expanded its service area by adding a total of 23 counties in 1995, 1997, and 1999. (See fig. 4.) In 2000, the plan withdrew from 16 of the counties, both urban and rural, it entered in 1997 and 1999, as well as a few it entered in 1995. The plan remained in the central counties encompassing San Antonio and Houston, both urban areas where the plan had by far its largest concentration of enrollment, and the Corpus Christi area. Humana remained in San Antonio despite the fact that the county's monthly payment rate for 2000 was, on average, $26 lower than payments in the four urban counties it dropped. The 10 counties it retained in 2000 accounted for 70 percent of its Medicare managed care enrollees in Texas. In 2001, Humana will consolidate even further, serving only the San Antonio and Corpus Christi areas. This time, the 2001 monthly payment rate in San Antonio is, on average, $147 lower than in the six counties the plan is dropping. Humana recently stated that it incurred pre-tax losses exceeding $26 million during 1999 in the counties it will leave in 2001. However, only a fraction of these losses may be due to providing Medicare-covered benefits.
The plan is currently offering, at no additional charge, an unlimited generic prescription drug benefit and a brand name benefit up to $1,400 per year, in addition to some coverage for physical exams and vision services, to the beneficiaries in these Texas counties. The Medicare managed care experience in Maryland illustrates both the service area reductions that occurred in 2000 and the trend toward larger, more established plans terminating their contracts in 2001. In 2000, plans withdrew from recently entered rural counties while continuing to serve more heavily populated urban areas. In 2001, these plans are continuing the exodus from Medicare by withdrawing from these urban areas and terminating their contracts. Between 1986 and 1993, only one plan, Freestate Health Plan—sponsored by Blue Cross—operated in Maryland. Its service area included only 6 of Maryland's 24 counties, all located in Maryland's major metropolitan area—the areas surrounding Baltimore and Washington, D.C. Over time, new plans began operation in the state, mostly in the same Baltimore-Washington corridor. (See fig. 5.) One plan, Optimum Choice, began offering service statewide in 1994, followed by 2 more statewide plans in 1996. Between 1997 and 1999, however, these 3 plans reduced their service areas until, by 1999, only Freestate continued to serve Maryland's rural counties. In 2000, Freestate reduced its service area to the Baltimore-Washington area—its historical core service area. Rural Maryland beneficiaries, who had a managed care option between 1994 and 1999, were left with no alternative to traditional FFS Medicare. The difficulty of serving sparsely populated rural areas may have been an important factor in the Maryland plans' withdrawal decisions for 2000. Freestate Health Plan, for example, withdrew from rural Caroline County where it faced no competition and enrolled nearly one in five of the county's beneficiaries despite charging a $75 per month enrollee premium.
However, the plan's relatively large market share in the county amounted to only 895 enrollees. In contrast, the plan's 2 percent market share in urban Montgomery County, an area it continued to serve, resulted in more than 2,000 enrollees. In addition, Medicare payment rates were increasing faster in the rural counties the plan left because of the floor and the blend provisions in the BBA. Freestate has announced it will terminate its contract in Maryland for 2001, leaving Kaiser Health Plan as the only remaining Medicare plan serving the state. Freestate has said that it expects to incur losses of $7.5 million by the end of 2000. Although recent entry, low enrollment, or low market share are characteristics of most withdrawing plans, in some cases plan withdrawals appear to have little to do with these factors. In one case, a merger caused a plan to change operations to avoid anti-trust violations and subsequently resulted in termination of selected contracts. In other cases, plans terminated all operations—Medicare, Medicaid, and commercial—in an area. Finally, some plans have reported that providers in some areas are becoming increasingly resistant to contracting with them, making it more difficult for plans to assemble viable provider networks in certain areas. The following examples illustrate other factors that may have contributed to plan withdrawals. Aetna U.S. Healthcare acquired NYLCare Health Plans in July 1998, and later purchased Prudential Health Care in August 1999. Because Texas officials were concerned that Aetna would have too large a share of the state's market after it acquired Prudential, they agreed to the purchase under the condition that Aetna sell its NYLCare business in the state. However, under special agreement, Aetna was allowed to continue managing the Medicare component. Aetna subsequently terminated this contract. 
Capital Area Community Health Plan of Albany, NY, was affiliated with Kaiser Permanente, which withdrew from all of its operations in the northeast region in 2000. Humana terminated all of its business—commercial and Medicare—in Nevada. United Health Care of Louisiana was one of the first national plans to buy out local plans in Louisiana. Local providers, who preferred dealing with the local plans, resisted contracting with United. The plan eventually withdrew from these areas. Oxford Health Plans of NY had trouble assembling a viable provider network in one of the large counties it served, so it withdrew from that county. Industry representatives have stated that low Medicare payments, resulting from BBA provisions designed to control program spending, are primarily to blame for the recent plan withdrawals. The American Association of Health Plans contends that the BBA created a “fairness gap” by decreasing payments to health plans relative to spending on beneficiaries in the FFS program. However, since the BBA was enacted, the increase in Medicare+Choice payment rates has exceeded the growth in per capita FFS spending. Furthermore, our recent study of plan payments found that Medicare paid plans $5.2 billion (or about 21 percent) more than it would have spent in 1998 if plan enrollees had received standard Medicare-covered services through the traditional FFS program. According to reports that plans submit to HCFA, Medicare's payments are also substantially higher than the average plan's projected costs of providing Medicare-covered benefits. Moreover, although industry representatives have called for higher payment rates, the extent to which rate increases would affect plans' decisions to participate in Medicare is unclear. In 2000 and 2001, withdrawals have not been confined to counties where payment rate increases, or payment rates, were low.
Between 1997—the year the BBA was enacted—and 1999, Medicare+Choice payment rates increased on average by about 4.2 percent. (See fig. 6.) Furthermore, the payment rate increase was applied to 1997 rates that HCFA now estimates were inflated by about 3 percent because of an error in the spending forecast used to set the rates. In contrast, per capita FFS spending fell 1.7 percent during the same period. HCFA estimates that between 1999 and 2001, per capita FFS spending will grow faster than Medicare+Choice payment rates. If these estimates prove accurate, the cumulative increase in Medicare+Choice payment rates between 1997 and 2000 will still exceed the growth in per capita FFS spending, but the gap will be much narrower. By 2001, HCFA's current projections indicate that average spending in the traditional program will have increased 11.9 percent, while plan payment rates will have increased 10.7 percent. Recently, we reported that Medicare+Choice plan payments likely exceed the amount that beneficiaries enrolled in plans would cost in the traditional FFS program. In 1998, aggregate payments exceeded enrollees' estimated FFS costs by about 21 percent—or approximately $5.2 billion. On a per enrollee basis, Medicare paid plans about $1,000 more than the FFS program would have spent to provide Medicare-covered benefits. A portion of the estimated $5.2 billion in annual excess plan payments may diminish over time. Approximately $2 billion of these excess payments resulted from FFS spending forecast errors built into the 1997 county payment rates due to the BBA provisions that based future county rates on the 1997 rates and guaranteed 2 percent minimum annual rate increases. The effect of the 1997 forecast error will largely be mitigated by the BBA provision that slows Medicare+Choice rate increases relative to the growth in FFS spending between 1998 and 2002.
The bulk of the excess payments we estimated for 1998 ($3.2 billion) will persist each year until payments on behalf of individual enrollees better match their expected health care costs. Medicare+Choice plans attracted a disproportionate selection of healthier and less-expensive beneficiaries relative to traditional FFS Medicare (a phenomenon known as favorable selection), while payment rates largely continued to reflect the expected FFS costs of beneficiaries in average health. Consequently, we estimate that the program spent about 13.2 percent more on plan enrollees than if they had received services through the traditional FFS program. This year, HCFA implemented a new methodology to adjust payments for beneficiary health status. However, our results suggest that this new methodology, which will be phased in over several years, may ultimately remove less than half of the excess payments caused by favorable selection. HCFA expects to introduce a more refined methodology in 2004 that may better adjust payments to reflect enrollees' expected health care costs. Medicare+Choice payment rates not only surpass what the FFS program would spend to provide Medicare-covered benefits to plans' enrollees, but data submitted by plans show that rates also generally exceed plans' estimated costs to provide those same benefits. As part of the annual contracting process, each Medicare+Choice plan is required to project its per enrollee cost of providing Medicare-covered benefits. If estimated Medicare payments exceed a plan's projected costs, the plan must use the difference to provide additional benefits during the contract year or contribute to an escrow account and use the funds to provide benefits in future years. To fulfill Medicare's requirement, plans choose to provide additional benefits—such as routine vision care, dental care, and coverage for outpatient prescription drugs—that are not covered in the traditional FFS program.
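The decomposition of the 1998 excess-payment estimate can be restated as simple arithmetic. The sketch below is illustrative only: the dollar figures (in billions) are the rounded estimates quoted in this report, and treating them as exact is our assumption.

```python
# Illustrative arithmetic on the 1998 excess-payment estimate (in $ billions).
# Figures are the report's rounded estimates; the split is approximate.
total_excess = 5.2        # total 1998 plan payments above estimated FFS costs
forecast_error = 2.0      # portion built into rates by the 1997 FFS forecast error
favorable_selection = total_excess - forecast_error
print(round(favorable_selection, 1))  # 3.2 -> the portion that persists until
                                      # payments better match enrollees' costs
```

The forecast-error portion is expected to diminish under the BBA provision that slows rate increases through 2002; the favorable-selection portion persists until risk adjustment improves.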
In their 1999 contract submissions, the average plan—including plans that withdrew in 2000—projected that its costs would be substantially less than its Medicare payment. On average, plans estimated that they could provide Medicare-covered services for about 89 percent of Medicare's payment. Plans indicated that they would provide additional benefits to make up the difference. Most plans' benefit packages exceeded the minimum requirements. Consequently, the average plan in 1999 estimated it would spend about $1,300 per enrollee, an amount equal to about 22.5 percent of its Medicare payment, on benefits that are not covered in the FFS program. Among plans that terminated their contracts or reduced their service areas in 2000, the average annual amount spent on additional benefits was slightly lower—about $1,100, or 21.6 percent of Medicare's payment. (See table 5.) Plans' contract submissions for 2000 exhibited a similar pattern of additional benefits. Plans that will terminate their contracts in 2001 projected that they would spend an average of about $1,200 per enrollee, or 22 percent of their Medicare payment, on additional benefits in 2000. Plans that will reduce their service areas projected they would spend slightly less, about $1,000 or 18 percent of their Medicare payment, on additional benefits. In contrast, spending on additional benefits was estimated at nearly $1,500 per enrollee, or about 25 percent of 2000 Medicare payments, for plans that will remain in the program in 2001. The effect of payment rates on Medicare+Choice plan participation is ambiguous. While changes in payment rates are an important influence on plans' participation decisions, we found that plan withdrawals were not limited to counties with low payment rates. On the one hand, plan withdrawals appear to be more extensive in the 2 years with lower payment rate increases. In both 1999 and 2001, county rates increased by an average of 2 percent.
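As a rough consistency check on the 1999 contract figures above, the per-enrollee payment implied by the report's percentages can be derived. This back-of-envelope sketch is ours; the implied payment level is not stated in the report and depends on the rounded inputs.

```python
# Back-of-envelope derivation from the 1999 averages quoted above.
extra_benefits = 1300        # avg per-enrollee spending on additional benefits ($)
extra_share = 0.225          # ...stated as about 22.5% of Medicare's payment
implied_payment = extra_benefits / extra_share
print(round(implied_payment))  # ~5778 per enrollee per year (derived, not stated)

covered_share = 0.89         # plans' projected cost of Medicare-covered services,
                             # as a share of Medicare's payment
```

Note that the two shares need not sum to 100 percent of the payment, since plans' additional-benefit spending can also be financed by enrollee premiums.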
In 1999, plan withdrawals affected 42 percent of counties that previously had a managed care plan, and in 2001 plan withdrawals will affect 58 percent of such counties. In contrast, a smaller proportion of counties—approximately 37 percent—were affected in 2000 when rates increased by about 4 percent. The extensiveness of plan withdrawals may also be related to the gap between average county rate increases and the change in expected per capita FFS spending. For example, the projected increase in per capita FFS spending is much higher for 2001 than it was for 1999 and withdrawals will be more extensive. Therefore, withdrawals may moderate after 2002 when payment rate increases will mirror expected increases in per capita FFS spending except for adjustments to correct prior spending forecast errors. The relationship across counties between plan participation and payment rates, and rate increases, is not clear. Both high-payment rate and low-payment rate counties are affected by the 2000 and 2001 plan withdrawals, although the relationship between payment rates and withdrawals is somewhat different in the two years. In 2000, approximately 39 percent of the non-floor counties that had at least one plan in 1999—those with payment rates set above the minimum payment of $402—were affected by a plan withdrawal. A slightly higher proportion of counties in the middle payment categories were affected compared to the proportion of affected counties in the highest rate category and the rate category just above the floor. (See table 6.) In 2001, about 80 to 90 percent of counties in the higher payment ranges, but less than two-thirds of the counties in the lower payment ranges, will be affected. (See table 7.) The 2001 withdrawal pattern is similar to the one that occurred in 1999 in that a disproportionate number of high payment rate counties were affected by withdrawals.
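The year-by-year pattern described above can be tabulated as follows. The figures simply restate the report's numbers; the grouping threshold is our own illustrative choice.

```python
# Tabulation of the figures quoted above: average county payment rate
# increase vs. share of previously served counties affected by withdrawals.
withdrawal_years = {
    # year: (avg rate increase, share of counties affected)
    1999: (0.02, 0.42),
    2000: (0.04, 0.37),
    2001: (0.02, 0.58),
}

# The 2 years with the lowest (2 percent) rate increases saw the most
# extensive withdrawals; the grouping cutoff here is arbitrary.
low_increase_years = [y for y, (inc, _) in withdrawal_years.items() if inc <= 0.02]
print(sorted(low_increase_years))  # [1999, 2001]
```

As the surrounding text notes, this association is suggestive rather than conclusive, since withdrawals were not confined to low-rate or low-increase counties.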
In both 2000 and 2001, floor counties that previously had a Medicare+Choice plan will be proportionately less affected by the withdrawals compared to counties that receive payment rates above the floor. However, the difference between floor and nonfloor counties is less pronounced in the 2001 withdrawals. The relationship between payment rate increases and plan participation in a particular county is unclear. In 2000 and 2001, floor counties may have been less affected by the withdrawals because the BBA substantially increased payment rates in those counties, and those rates remain considerably above the average cost of Medicare benefits in the traditional FFS program. Between 1997 and 2001, payment rates in floor counties increased by 27 percent. In contrast, payment rate increases have been more modest in nonfloor counties, around 11 percent. However, the pattern of plan withdrawals in 2000 suggests that even relatively large payment rate increases may not be enough to keep some plans in certain counties. While county payment rates increased by an average of 4 percent in 2000, the BBA's rate “blending” provision increased rates by 10 percent or more in certain counties. Nonetheless, 40 percent of these counties with large increases were affected by plan withdrawals in 2000—about the same as the percentage of affected counties among those that received the lowest (2 percent) rate increase. (See table 8.) Some areas may have too few beneficiaries or providers to support multiple plans, or even a single plan. Moreover, plans that fail to attract a sufficient number of enrollees will not realize their revenue goals even if payments are adequate on a per capita basis. Medicare+Choice is at a crossroads. Because of contract terminations and service area reductions, by January 2001 more than 1.6 million beneficiaries will have had to switch to a different plan or the traditional fee-for-service program since 1999.
Industry representatives contend that payment rate increases are necessary to keep the program viable. However, the Medicare+Choice program has already been expensive for taxpayers. As our work on payment rates shows, the vast majority of plans have been paid more for their Medicare enrollees than the government would have paid had these enrollees remained in the traditional fee-for-service program. Raising payment rates to a level sufficient to retain the plans leaving Medicare would mean increasing the excess that currently exists in payments for plan enrollees relative to their expected fee-for-service costs. In areas of the country where there are few beneficiaries and providers are in short supply, no reasonable payment rate increase is likely to entice plans to participate in Medicare. Thus, a trade-off exists between the significant additional costs that would be needed to keep more plans in the program and the benefits of providing more beneficiaries with options for accessing Medicare-covered services. Such a trade-off raises questions about the equity of providing a greater array of benefits to a fraction of the Medicare beneficiary population. In our view, efforts to protect the viability of Medicare+Choice plans come at the expense of ensuring Medicare's financial sustainability over the long term. In commenting on our report, HCFA stated that our findings confirmed its own analysis of Medicare+Choice plan withdrawals. HCFA noted that the pattern of withdrawals, analyzed at the corporation level instead of at the individual plan level, reinforces our finding that factors besides payment rates likely influenced plans' participation decisions. For example, HCFA said that in 2001, 54 percent of Aetna's Medicare+Choice enrollees and 69 percent of Cigna's enrollees will be affected by plan withdrawals, but less than 2 percent of Pacificare's enrollees and only 0.1 percent of Kaiser's enrollees will be affected.
The agency contends that these differences provide evidence that the withdrawals reflect corporations' strategic business decisions that go beyond Medicare payment adequacy. HCFA also said that it believes the Administration's proposal to provide a prescription drug benefit to all enrollees would both reduce inequities in benefit availability and increase payments to Medicare+Choice plans that cover prescription drugs. (HCFA's comments appear in app. IV.) We also provided representatives of the American Association of Health Plans (AAHP), the BlueCross BlueShield Association (BCBSA), and the Health Insurance Association of America (HIAA) an opportunity to comment on the report. All three groups disagreed with our conclusions and stated that our report did not touch on important issues relevant to plan withdrawals. They also said that withdrawals can be costly for beneficiaries because Medicare+Choice plans typically provide preventive care services and other benefits that are not covered in the traditional FFS program. (AAHP's, BCBSA's, and HIAA's comments appear in apps. V, VI, and VII.) AAHP, BCBSA, and HIAA believe that inadequate Medicare+Choice payment rates are a principal cause of plan withdrawals. BCBSA stated that many plans could not afford to continue providing sufficient additional benefits (beyond those covered in FFS) to attract beneficiaries. All three industry groups stated that it is inappropriate to compare Medicare+Choice payment rate increases with changes in per capita FFS spending (as we did in fig. 6) because plans' costs have been growing faster than per capita FFS spending. HIAA said that FFS spending slowed only as a result of BBA's unprecedented reductions in Medicare reimbursements and that the Congress began correcting these reductions with the enactment of BBRA in 1999. BCBSA commented that the comparison is unfair because the traditional program can control costs in ways that are unavailable to plans. 
In our report, we acknowledge that plans typically provide benefits that are not available in the FFS program. However, we found that Medicare+Choice payments substantially exceeded plans' projected costs (including normal profits) of providing Medicare-covered benefits and that plans contracted with Medicare to use the difference to provide benefits that are not available in the FFS program. Furthermore, the contention that plans' costs have grown more rapidly than per capita FFS spending, or that plans have a limited ability to control their own cost increases, does not alter our finding that Medicare+Choice payments exceed the estimated amount that the traditional program would spend on the individuals enrolled in plans. AAHP and HIAA stated that our methodology for estimating the FFS costs of plan enrollees, based on enrollees' prior use of services in the FFS program, underestimates the health care costs of plan enrollees and therefore overestimates excess payments to plans. In developing our methodology, however, we employed assumptions that would tend to underestimate excess payments. Therefore, we believe our findings likely represent a lower bound on the estimated excess payments plans receive and the potential savings from improved risk adjustment. HIAA stated that services are overutilized in the FFS program and that by using FFS spending as a comparison we overestimated the degree of favorable selection and the extent of excess payments to plans. In our analysis, we did not attempt to quantify an appropriate level of care. If services are overutilized in the FFS program, a comparison of plan payments with a more efficient delivery system might indicate less favorable selection, but it would not alter our finding that current Medicare+Choice payment rates—largely based on FFS spending patterns—exceed the estimated cost of providing Medicare-covered benefits in the FFS program.
AAHP, BCBSA, and HIAA said that we did not address the issue of regulatory burden in our report. They believe that recent regulations have increased plans' administrative costs and discouraged plan participation. Because many of the recent regulations resulted from provisions in BBA designed to increase plan accountability, facilitate informed choice and plan comparisons, protect beneficiary rights, or foster quality improvement efforts, a comprehensive analysis of this issue would require an assessment of the regulations' benefits as well as their costs. Such an analysis was beyond the scope of our report. Finally, AAHP and BCBSA stressed that plans typically provide benefits not covered in the traditional FFS program and that plan withdrawals are not only disruptive for beneficiaries but can also result in beneficiaries having to pay more in out-of-pocket costs. Although we agree, and did discuss this issue in the report, it was not the focus of our study. We are sending copies of this report to the Honorable Nancy-Ann Min DeParle, Administrator of the Health Care Financing Administration, and other interested parties who request them. If you or your staffs have any questions about this report, please call me at (202) 512-7114 or Laura A. Dummit, Associate Director, at (202) 512-7119. Other major contributors included George Duncan, Beverly Ross, and Susanne Seagrave under the direction of James C. Cosgrove. We reviewed pertinent laws, regulations, HCFA policies, and research by others to obtain information on the Medicare+Choice program, including revisions to the payment methodologies. To obtain different perspectives on why plans withdrew or reduced their service areas, we interviewed officials at HCFA's regional offices and representatives from the American Association of Health Plans and Blue Cross/Blue Shield of Maryland, one of the plans that withdrew.
To do our analysis, we obtained data files from HCFA, which the agency uses to compute Medicare+Choice plan payments and which are widely used by researchers. To identify counties with a plan in 1999, we used HCFA's 1999 Medicare Compare Database combined with HCFA's July 1999 Medicare Managed Care Market Penetration for All Medicare Plan Contractors Quarterly State/County/Plan Data Files. We excluded cost, demonstration, and health care prepayment plans from our analysis and used only those plans identified as Medicare+Choice. We concluded that a plan was offered in a particular county only if both databases agreed. The count of enrollees by plan by county in a plan's service area as of July 1999 was obtained from the State/County/Plan Penetration Files, except in four cases where plans reduced their service areas and withdrew from only part of a county. In these cases, we obtained the actual number of enrollees affected from HCFA's Center for Health Plans and Providers. Similarly, we identified counties with a plan in 2000 using HCFA's 2000 Medicare Compare Database combined with HCFA's March 2000 Medicare Managed Care Market Penetration for All Medicare Plan Contractors Quarterly State/County/Plan Data Files. Again, we excluded cost, demonstration, and health care prepayment plans from our analysis and used only those plans identified as Medicare+Choice. We concluded that a plan was offered in a particular county only if both databases agreed. HCFA's Center for Health Plans and Providers gave us a list of contract consolidations that occurred in 2000, and we adjusted our information accordingly. The count of enrollees by plan by county in a plan's service area as of March 2000 was obtained from the State/County/Plan Penetration Files. To analyze the changes in plan participation in the Medicare+Choice program in 2000 and 2001, we used HCFA data on Medicare+Choice plan contracts. 
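The county-identification rule described above (counting a county as served only when both HCFA files agree) amounts to a set intersection. This minimal sketch uses made-up county records; the real analysis was performed against the Medicare Compare and State/County/Plan Penetration files.

```python
# Minimal sketch of the county-identification rule: a county is counted as
# having a Medicare+Choice plan only if BOTH data sources agree.
# The (state, county) records below are hypothetical.
compare_db = {("TX", "Bexar"), ("TX", "Nueces"), ("MD", "Baltimore")}
penetration_db = {("TX", "Bexar"), ("MD", "Baltimore"), ("MD", "Caroline")}

counties_with_plan = compare_db & penetration_db  # set intersection
print(sorted(counties_with_plan))  # [('MD', 'Baltimore'), ('TX', 'Bexar')]
# Counties appearing in only one file are excluded from the analysis.
```

Requiring agreement between the two files is a conservative choice: it avoids counting a county as served on the basis of a single, possibly stale, data source.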
In July 1999, HCFA provided us with a list of plans that had announced they were withdrawing from the program or reducing their service areas as of January 1, 2000, and the counties and number of enrollees affected. In July 2000, HCFA provided us with the same information for plans that had announced changes for 2001. We excluded Guam, Puerto Rico, and the Virgin Islands from all county-level analyses. In some of the analyses, a single county is treated as separate entities if plans can contract with its parts separately. For example, Los Angeles County, California, is divided into Los Angeles-1 and Los Angeles-2; they are counted separately because plans may contract with them separately. The independent cities of Virginia are also counted as separate counties because their payment rates differ from those of their counties, and plans contract to serve these areas as if they were independent counties. We classified counties as urban, rural, or small urban/fringe using the rural/urban continuum codes in the February 1999 Area Resource File, which we obtained from the Bureau of Health Professions, Health Resources and Services Administration of the Department of Health and Human Services. We defined urban counties as the central counties of metropolitan areas of 1 million population or more and rural counties as all nonmetropolitan counties. Finally, small urban/fringe counties include counties in metropolitan areas of less than 1 million population and fringe counties of metropolitan areas of 1 million population or more. The February 1999 Area Resource File combines the Virginia independent cities into their original counties and does not report separate rural/urban continuum codes for them. We kept these cities separate in keeping with the HCFA data, and we assigned these cities the same rural/urban continuum codes as their original counties.
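The three-way county classification described above can be sketched as a simple mapping from continuum codes. The specific code values used below (0 = central counties of 1 million+ metro areas, 1 = fringe counties of those areas, 2-3 = smaller metro areas, 4-9 = nonmetropolitan) are our reading of the 1990s-era rural/urban continuum coding and are assumptions to verify against the Area Resource File documentation:

```python
# Illustrative classification of counties into the report's three groups,
# keyed on rural/urban continuum codes. The code-to-category mapping is an
# assumption based on the 1990s coding scheme, not taken from the report.

def classify_county(continuum_code):
    if continuum_code == 0:          # central counties of metro areas of 1M+
        return "urban"
    if continuum_code in (1, 2, 3):  # fringe of 1M+ metros, or metros under 1M
        return "small urban/fringe"
    if 4 <= continuum_code <= 9:     # all nonmetropolitan counties
        return "rural"
    raise ValueError(f"unknown continuum code: {continuum_code}")

assert classify_county(0) == "urban"
assert classify_county(1) == "small urban/fringe"
assert classify_county(7) == "rural"
```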
To analyze geographic differences in beneficiaries' access to a plan from 1993 to 1998, we used the December 1993-1998 State/County/Plan Penetration Files and deleted all plan/county combinations where a plan enrolled fewer than 10 enrollees. Because we were not able to obtain actual contract information on plan service areas before 1997, this provided an approximation of plans' service areas. We then used the same urban, rural, and small urban/fringe county designations as before from the February 1999 Area Resource File to determine the percentage of beneficiaries with access to a Medicare+Choice plan in these different areas. We obtained county-level payment rate information for 1997 through 2001 for Medicare risk plans and Medicare+Choice plans, including payment reductions resulting from the removal of graduate medical education (GME) spending, from HCFA's Web site. In addition, we used a February 1999 file from HCFA's Office of Information Systems containing historical county-level information on the year that plans first entered individual counties.

[Appendix table: Medicare+Choice plans that terminated their contracts or reduced their service areas for the 2000 contract year, listed by state with the number of enrollees affected. Table note: "Plan remained in Medicare but reduced the number of counties served."]

[Appendix table: Medicare+Choice plans that terminated their contracts or reduced their service areas for the 2001 contract year, listed by state with the number of enrollees affected. Table note: "Plan remained in Medicare but reduced the number of counties served."]
Pursuant to a congressional request, GAO reviewed health care plans' withdrawal from the Medicare+Choice program, focusing on the: (1) geographic distribution and the distribution among plans of enrollees affected by the recent plan withdrawals; (2) factors associated with plans that terminated or reduced their participation in the program; and (3) likely role of payment rates in affecting plans' decisions. GAO noted that: (1) of 309 plans serving Medicare beneficiaries at the end of 1999, 99 plans terminated their contracts or reduced the number of counties they served for the 2000 contract year, and 118 have announced they will terminate their contracts or reduce service areas for the 2001 contract year; (2) these withdrawals affected about 328,000 enrollees in 2000 and will affect almost 1 million enrollees in 2001; (3) the number of enrollees affected accounts for about 5 percent of Medicare+Choice enrollees in 2000 and about 15 percent in 2001; (4) a disproportionate number of affected enrollees live outside of major urban areas; (5) a portion of these enrollees, approximately 79,000 in 2000 and 159,000 in 2001, will have no other Medicare managed care option available in their area and must either switch to a non-managed care option, if one is available in their area, or return to traditional fee-for-service (FFS) Medicare; (6) while a new private FFS plan has begun to offer services in many of the affected areas as an alternative to the traditional public FFS program administered by the Health Care Financing Administration, it does not offer a prescription drug benefit; (7) in January 2000, Medicare+Choice plans tended to withdraw from more difficult-to-serve rural counties or large urban areas that they had entered more recently or where they failed to attract sufficient enrollment; (8) in 2001, the trend is essentially the same for the service area reductions but somewhat different for the contract terminations, which involve some older, more established plans; (9) the pattern of
Medicare+Choice withdrawals shares common elements with plan participation in the similarly choice-based health insurance program for federal employees; (10) industry representatives contend that the Balanced Budget Act's (BBA) payment rate changes were too severe and that low Medicare payment rates are largely responsible for the plan withdrawals; (11) however, since the BBA was enacted, Medicare+Choice payment rates have risen faster than per capita FFS spending; (12) in addition, many plans have attracted beneficiaries who have lower-than-average expected health care costs, while Medicare+Choice payments are largely based on the expected cost of beneficiaries with average health care needs; and (13) it is unclear whether Medicare+Choice payment rate increases would affect plans' participation decisions.
SPOT-ES comprises five systems: unclassified and classified versions of the SPOT database, unclassified and classified versions of the Total Operational Picture Support System (TOPSS) analytic and reporting tool, and the JAMMS personnel-location tracking tool. Figure 1 illustrates the five systems. DOD guidance provides that, in applicable contingency operations, contractor visibility and accountability shall be maintained through a common joint database—SPOT, or its successor. The SPOT database contains information about contracts—such as company name, contract number, and task order if any—and information about contractor personnel, such as contact information and next of kin, blood type, and the government-furnished support to which they are entitled. Government contracting officers use the information in SPOT to generate letters of authorization, which contractor personnel must obtain and carry in order to process through a deployment center or to travel to, from, or within the designated operational area. The letter of authorization also identifies any additional authorizations, privileges, or government support that contractor personnel are entitled to under the contract. Examples of such services could include access to dining facilities, transportation, or medical care beyond emergency treatment. The letter of authorization also identifies contractor personnel whose contracts permit them to carry weapons, although arming approval rests with the combatant command to which personnel deploy. DOD guidance indicates that the department intends to use SPOT to facilitate integration of contingency contractors and other personnel as directed by the Under Secretary of Defense for Acquisition, Technology and Logistics or the combatant commander, and to ensure that accountability, visibility, force protection, medical support, personnel recovery, and other related support can be accurately forecasted and provided.
According to the guidance, SPOT data elements are intended to provide planners and combatant commanders an awareness of the nature, extent, and potential risks and capabilities associated with contracted support. TOPSS is the reporting and analysis component of SPOT-ES. This tool generates a variety of standard reports that provide information on users, accounting and compliance, specific contracts or task orders, contractor deployments, and specific individuals. In addition, this tool can provide summary-level data in geospatial format. The JAMMS movement tracker is an information technology application developed to capture movement and location information about contractor personnel in specified operational theaters. Also, this tracker can capture information about operating forces and government civilians. JAMMS data-collection points are established at locations such as dining facilities, aerial ports of debarkation, and medical locations, as well as at U.S. embassies and other State locations. JAMMS terminals can scan a wide range of identification credentials, such as common access cards, SPOT-generated letters of authorization, and some driver's licenses and passports. These credentials provide identity information about the cardholder, which is retained and made available to integration partners. GLAAS is USAID's worldwide web-based procurement system that manages awards throughout USAID's acquisition and assistance life cycle. GLAAS adapted commercial off-the-shelf software to accommodate USAID's procurement-management needs. GLAAS integrates with the USAID financial-management system and other external government systems to provide reports to the Office of Management and Budget, Congress, and other stakeholders. GLAAS also provides data, such as award value and whether awards were competed, for reports to Congress on contract support for contingency operations outside the United States.
GLAAS contained information on approximately 19,000 awards in fiscal year 2013, of which 382 contracts were related to contingency operations in Iraq and Afghanistan. GLAAS is an independent system that does not interoperate with any DOD systems, including SPOT-ES. USAID has developed business rules and processes to explain to users how to operate GLAAS. The GLAAS user guides illustrate USAID's acquisition and award process and instruct users on how to complete various tasks such as procurement planning, and creating and modifying purchase orders, solicitations, and assistance awards. USAID also supplements the GLAAS user guides with training sessions and other reference guides. The Federal Procurement Data System (FPDS) provides a comprehensive web-based tool for agencies to report information related to contracts. Executive agencies are to use FPDS to maintain publicly available information about all unclassified contract actions exceeding certain monetary thresholds. Generally, contracting officers must submit complete and accurate contract information to FPDS within 3 business days after contract award. Agencies can report data to FPDS either through an Internet web portal or through their contract-writing systems. DOD, State, and USAID report to FPDS through their respective contract-writing systems—Standard Procurement System, Global Financial Management System, and GLAAS. According to DOD, the Data Services Environment (DSE) is DOD's primary resource for registering, sharing, and publishing different types of metadata about systems, services, and data resources, such as authoritative data sources, to support DOD and the needs of all authorized users. DOD components, according to DOD guidance, must register all authoritative data sources, information technology services, and required metadata in the DSE. DOD guidance also provides policy that data will be made visible and trusted, among other things, for all authorized users.
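The FPDS reporting window noted above, generally within 3 business days after contract award, can be approximated with a small date calculation. This sketch skips weekends only; in practice federal holidays would also have to be excluded, and the function name and handling are our own illustration:

```python
# Rough sketch of a "within 3 business days after award" reporting window.
# Skips weekends only; federal holidays are omitted for brevity.
from datetime import date, timedelta

def fpds_deadline(award_date, business_days=3):
    """Return the last day of the business-day reporting window."""
    d = award_date
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4 are business days
            remaining -= 1
    return d

# A contract awarded on Friday, Sept. 5, 2014, would be due Wednesday,
# Sept. 10, 2014, since the intervening weekend does not count.
assert fpds_deadline(date(2014, 9, 5)) == date(2014, 9, 10)
```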
According to the guidance, data is made visible by creating and associating metadata. Data is considered trusted when there is sufficient pedigree (source and lineage) and descriptive metadata for users to rely on it as an authoritative data source. According to the DOD DSE Concept of Operations, once a system is registered in the DSE, the appropriate authoritative body is to review the data source and either approve or not approve the source as an authoritative data source. The DSE is linked to DOD's broader efforts for managing data in a net-centric environment, the key attributes of which include ensuring that data are visible, among other things; associating all data with metadata; and posting all data to shared space to provide access to users except when limited by security, policy, or regulation. Section 2222 of Title 10, U.S. Code, contains provisions regarding DOD development of a defense Business Enterprise Architecture (BEA), associated enterprise transition plan, and investment management structures and review processes. Among other requirements, section 2222 prohibits DOD from obligating funds for a defense business system program with a total cost in excess of $1 million over the period of the future years defense program unless the precertification authority certifies that the defense business system meets specified conditions. Specifically, the appropriate precertification authority must determine that the defense business system program is in compliance with the defense BEA and that appropriate business process reengineering efforts have been undertaken. For fiscal years 2013 through 2015, the precertification authority for SPOT-ES determined that the system was in compliance with the BEA and that appropriate business process reengineering efforts were undertaken. Subsequent Office of the Deputy Chief Management Officer investment decision memorandums from the Defense Business Systems Management Committee certified SPOT-ES funds.
We have previously reported on the need to improve the quality of BEA compliance assertions. In May 2013, we found that compliance assertions continued to lack adequate validation; we recommended that DOD implement and use the BEA compliance assessments more effectively to support organizational transformation efforts by, among other things, establishing milestones by which selected validations of BEA compliance assertions are to be completed. DOD partially agreed with this recommendation. In our follow-up report in May 2014, we found that DOD needed to continue working to ensure the quality of the BEA assessments, as we had previously recommended. We have issued several recent reports about systems that DOD, State, and USAID use to manage information regarding contracts and contractor personnel and to prepare statutorily required reports on contracts, assistance instruments, and related personnel in Iraq and Afghanistan. In September 2012, we found that although SPOT was designated as the common database for the statutorily required information, officials from DOD, State, and USAID generally relied on other data sources they regarded as more reliable to prepare the 2011 joint report to congressional committees. We also found that the agencies generally did not use SPOT to help manage, oversee, and coordinate contracting in Iraq and Afghanistan. Instead, we found that the agencies primarily used the system to generate authorizations for contractor personnel to use U.S. government services. We recommended that the Secretaries of Defense and State and the Administrator of USAID work together to standardize the methodologies used to obtain and present information contained in the annual joint report on contracting in Iraq and Afghanistan to the greatest extent possible. The agencies agreed with our recommendation and indicated they would work together to implement it. 
Additionally, in February 2014, we reported that State and USAID had taken, or were planning to take, a number of actions to better track the number of contracts and contractor personnel in contingency environments. For example, we reported that, according to State officials, State developed additional guidance in fall 2012 that outlined the process of inputting contracts, how contractors should enter contractor personnel, how to request letters of authorization and approvals, and how to enter data through the contract close-out. We reported that, according to the department, State also established a new office that will be responsible for overseeing contractors' input of Iraq contractors' data into SPOT. We also reported that USAID was in the early stages of developing a proposal to use SPOT solely as a tool to track contractor personnel in contingency environments rather than the number and value of contracts. USAID officials stated that other data systems, such as FPDS-NG and GLAAS, provided more reliable information on the number and value of contracts. We recommended that the Secretary of State develop plans to assess whether planned initiatives are achieving their intended objectives and that the Administrator of USAID ensure that its nonpermissive working group consider procedures and practices developed by missions and offices with contingency-related responsibilities. The agencies concurred with our recommendations and provided information on actions taken or planned to address them. USAID has assessed resources that it needs to sustain GLAAS, but DOD has not fully assessed all future resources that it needs to sustain SPOT-ES. DOD and USAID use the budget process to identify resources they project they will need in the next budget year to modernize and operate their systems.
USAID assessed its plans and updated GLAAS's cost estimates with the actual costs of system upgrades and the latest cost estimates for future upgrades; however, DOD has not updated its life-cycle cost estimates or fully defined and assessed its plans to determine all resources needed to sustain SPOT-ES. USAID has assessed the resources it needs to sustain GLAAS and regularly updates GLAAS's life-cycle cost estimate to reflect its plans. USAID uses historical cost data and cost estimates to develop the annual funding request to operate, maintain, and modernize GLAAS. Table 1 shows GLAAS's cost estimates and funds that USAID requested for the operation and development of GLAAS from fiscal years 2013 through 2015 and amounts USAID identified as funded in fiscal years 2013 and 2014. We found that USAID assessed its plans and associated costs for modernizing GLAAS and has regularly updated its life-cycle cost estimate. Our review of USAID's business cases for GLAAS's modernization projects found that, in 2012 and 2013, USAID estimated the costs to implement system upgrades. For the upgrade completed in 2013, USAID assessed additional costs and analyzed the effect on project scope of a 3-month delay in releasing a system upgrade, known as version 7.1. In addition, USAID regularly updated GLAAS's life-cycle cost estimate, which was originally calculated in 2009. Officials said they update cost estimates for the following budget year based on prior actual costs and upcoming projects. For example, in July and September 2014 USAID updated the life-cycle cost estimate with actual costs, resulting in a decrease in the projected costs to operate and maintain GLAAS from fiscal year 2015 through 2020. Further, USAID updated the modernization estimates of the life-cycle estimate for fiscal year 2015 through 2016 in September 2014.
DOD has identified resources for fiscal years 2013 through 2015 that it needs to operate SPOT-ES; however, since 2010, DOD has not updated its life-cycle cost estimate or fully defined and assessed its plans to identify all the resources it needs to achieve the system's objectives. GAO's Cost Estimating and Assessment Guide states that approved cost estimates are often used to create budget spending plans and they should be updated with actual costs so that they are always relevant and current. In addition, Standards for Internal Control in the Federal Government state that management must continually assess and evaluate its plans to ensure that the control activities being used are effective and updated when necessary. Further, DOD guidance related to the defense acquisition system states that effective life-cycle sustainment for information systems requires continuous monitoring to ensure investments are maintained at the right size, cost, and condition, to support business missions and objectives. According to Under Secretary of Defense (Comptroller) figures, the SPOT-ES program received about $27.1 million to operate and maintain its system and $2.8 million to develop and modernize in fiscal year 2014. Table 2 shows the SPOT-ES cost estimates, and the funds that DOD officials identified as requested for fiscal years 2013 through 2015 and the amount they identified as funded for fiscal years 2013 and 2014. DOD prepared a SPOT-ES life-cycle cost estimate in 2010; however, the department has not updated the SPOT-ES life-cycle cost estimate since then to reflect any changes in costs due to schedule delays or program changes. The SPOT-ES life-cycle cost estimate projected that SPOT and TOPSS would undergo upgrades that were not completed or not completed as scheduled.
For example, program officials projected that TOPSS versions 1.3 to 1.7 upgrades would be completed in fiscal year 2011, but as of October 2014, a subversion of 1.1 was the latest upgrade; however, SPOT-ES program officials have not updated the life-cycle cost estimate to reflect the costs of implementing the upgrades in the future. Figure 2 illustrates SPOT-ES's schedule delays. In addition, DOD has not fully defined and assessed its plans for SPOT-ES to ensure a comprehensive life-cycle cost estimate, representative of all resources it needs to sustain the system. GAO's Cost Estimating and Assessment Guide states that a comprehensive life-cycle cost estimate should include both government and contractor costs of the program over its full life cycle, from inception of the program through retirement of the program. The estimate should also completely define the program, reflect the current schedule, be technically reasonable, and be structured in sufficient detail to ensure that cost elements are neither omitted nor double counted. However, DOD has not defined some of its plans that involve cost elements that need to be incorporated into the SPOT-ES life-cycle cost estimate. For example, SPOT-ES program officials identified the need to network and to further develop the JAMMS movement tracking application that feeds movement data to other SPOT-ES systems. SPOT-ES program officials had scheduled to network JAMMS to the SPOT database in the JAMMS version 4.0 release in fiscal year 2011, but as of November 2014, had not done so, as figure 2 illustrates. SPOT-ES program officials prepared a concept paper in September 2014 that proposed ideas for JAMMS' further development; however, the paper does not define the scope of capabilities for a networked JAMMS. Further, even though the SPOT-ES life-cycle estimate reflected the costs of networking JAMMS in fiscal year 2011, it has not been updated to include the plans for networking or developing JAMMS in future years.
Officials said they have not updated the life-cycle cost estimate since 2010 because the system has proven to be stable, and they will update the estimate to reflect additional development and modernization funds they may require to improve the system. However, while the system may be stable, the cost elements and assumptions that DOD used to develop the life-cycle cost estimate have changed. Specifically, the host site supporters and the contractor that provided software maintenance for SPOT-ES have changed. SPOT-ES program officials had projected operation and maintenance life-cycle costs based on the actual costs of the previous host site supporters and the actual costs of software maintenance under the previous contract. For example, the life-cycle cost estimate projected an average yearly software maintenance cost of $16 million, but the software maintenance costs for fiscal years 2013 and 2014 were $2.7 million and $2.5 million, respectively. Further, SPOT-ES program officials had requested additional development and modernization funds for fiscal years 2013 and 2014 when compared to the 2010 cost estimate, but they did not update the life-cycle cost estimate. SPOT-ES program officials did not update the life-cycle cost estimate or fully assess and define plans to ensure a comprehensive and accurate cost estimate because they accepted the system's previous program-management estimates as reported. Further, the Under Secretary of Defense for Acquisition, Technology and Logistics and the Under Secretary of Defense for Personnel and Readiness, whose offices both have oversight responsibilities for the SPOT-ES program, have not ensured that the current program management assess and define all plans for the system's further development or update the life-cycle cost estimate.
According to GAO's Cost Estimating and Assessment Guide, a life-cycle cost estimate should encompass all past (or sunk), present, and future costs for every aspect of the program, regardless of funding source, including all government and contractor costs. Without defining and assessing plans to provide a full accounting for the system, thereby fully accounting for life-cycle costs, management will have difficulty planning program resource requirements and making decisions. Also, to ensure accuracy, GAO's Cost Estimating and Assessment Guide states that cost estimates should be updated regularly to reflect significant changes in the program—such as when schedules or other assumptions change—and actual costs, so that the estimate always reflects the program's current status. If the estimate is not regularly updated, it will be difficult to analyze changes in program costs, and collecting cost and technical data to support future estimates will be hindered. Moreover, the cost estimate cannot provide decision makers with accurate information for assessing alternative decisions. DOD has developed business rules governing data entry about contracts and contractor personnel into SPOT. However, DOD cannot assure that SPOT or JAMMS—which provides data to SPOT—provide either contract or contractor personnel data that are consistently timely and reliable. DOD has developed SPOT business rules for entering data about contracts and contractor personnel; however, DOD's process does not provide reasonable assurance that the business rules are followed and SPOT data are timely and reliable. In the context of operational contract support, various provisions in DOD guidance and the Defense Federal Acquisition Regulation Supplement (DFARS) relate to the accuracy and timeliness of information in SPOT.
For example: DOD guidance provides that contracting officers, through the terms of contracts, shall require contractors to enter data before employee deployment and to maintain and update the information for relevant employees. An applicable DFARS clause specifies that the contractor shall use SPOT to enter and maintain data for all contractors authorized to accompany the force and, as designated, other personnel. The contractor is to enter the required information about its personnel prior to deployment and to continue to use SPOT to maintain accurate, up-to-date information throughout the deployment. Under the clause, changes to personnel status relating to arrival date and duty location, including closing out deployment with proper status (such as mission complete, killed, wounded), must be annotated in SPOT in accordance with timelines established in SPOT business rules (Defense Manpower Data Center, DOD Business Rules for the Synchronized Predeployment and Operational Tracker (SPOT) (Jan. 21, 2014); and Defense Manpower Data Center, DOD Business Rules for the Synchronized Predeployment and Operational Tracker (SPOT) (Nov. 21, 2014)). In-theater arrival dates—The business rules direct company administrators to enter, within a day, the date that contractor personnel arrived at the primary duty station. SPOT program officials said that they did not know how often company administrators entered such information within a day. However, they reported that nearly 4,000 contractors with active deployments on active contracts lacked in-theater arrival dates in SPOT at least 7 days after the scheduled date, as of September 2014. Armed contractor personnel—The SPOT business rules indicate that company administrators are to enter information into SPOT on equipment used by their personnel who perform security functions, such as the serial numbers of each weapon issued specifically to an individual. However, program officials told us that they sometimes could not link weapon serial numbers to individual contractor personnel because companies did not always assign specific weapons to specific individuals. 
As a result of our inquiries, program officials also discovered and corrected a software deficiency in SPOT wherein no serial numbers were displayed if personnel had more than one assigned weapon. In addition, we found that the business rules may be insufficient to allow contractors to provide accurate information on foreign contractor personnel. According to the business rules, for contractor personnel who lack a foreign identification number, the contractor must create a number according to instructions in the SPOT User Guide. For U.S. Central Command, in the case of Afghan or non-Iraqi citizens, the guide directs personnel to use the first five letters of the last name, plus the date of birth in the format “mmddyyyy.” However, SPOT program officials found that some individuals lack birth certificates and do not know their birth date. Of the approximately 769,000 foreign nationals in SPOT, 213,348 have a recorded birth date of January 1, and about 100,000 of these have identical surname and birth date information. This has created challenges in situations where people have identical names, according to DOD officials, including officials from Pacific Command. DOD has developed a process to begin assigning unique identification numbers, but has not made corresponding changes to the user guide to show how the process will apply to U.S. Central Command. DOD has established a goal for SPOT of 85 percent accuracy. In 2011, DOD directed contracting activities in the Central Command area of responsibility to continue conducting a quarterly manual census, or physical count of contractor personnel, until SPOT contains at least 85 percent of the data revealed by the manual count. DOD officials have conducted this census at Central Command, where most contractors who are now required to register in SPOT are located, every quarter since 2008. 
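The substitute-identifier rule described above (first five letters of the last name plus the date of birth in “mmddyyyy” form) makes collisions likely once unknown birth dates default to January 1. A minimal sketch of the rule as the report describes it; the function name and the sample surname are ours, not SPOT’s:

```python
def substitute_id(last_name: str, birth_date_mmddyyyy: str) -> str:
    # First five letters of the last name plus the date of birth in
    # mmddyyyy form, per the SPOT User Guide rule described in the report.
    return last_name[:5].upper() + birth_date_mmddyyyy

# Two distinct people who share a common surname and whose unknown birth
# dates were both recorded as January 1 receive identical identifiers:
id_a = substitute_id("Ahmadzai", "01011980")
id_b = substitute_id("Ahmadzai", "01011980")
assert id_a == id_b  # identical IDs for different individuals
```

With roughly 100,000 records sharing both surname prefix and a January 1 birth date, an identifier built this way cannot distinguish individuals, which is the problem DOD’s unique-identification-number process is meant to address.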
While some contracting activities have recently exceeded that threshold—for example, our review of census data for the third quarter of fiscal year 2014 found that 69 out of 102 contracting activities had exceeded 85 percent—officials said that they would continue to conduct the census until all activities did so. DOD’s process also cannot provide reasonable assurance that contractors and contracting officers enter data into SPOT according to the business rules, for two primary reasons. First, the department does not use its available mechanisms for tracking contractor performance to promote contractor accountability for entering data correctly and within prescribed time frames into SPOT. These mechanisms include the Contractor Performance Assessment Reporting System (CPARS), which contracting officers can use to report on contractors’ performance. Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics officials said that they thought using such mechanisms would be helpful. Second, Office of the Under Secretary of Defense for Acquisition, Technology and Logistics officials said that contracting officers sometimes did not oversee contractors’ data input or enter their own required data. DOD has taken steps that may address contracting officers’ compliance in its Operational Contract Support Action Plan for Fiscal Years 2014-2017, which calls for the military departments to issue policy mandating that their contracting activities populate SPOT with required data. By not having timely and reliable contract and contractor personnel data, DOD does not have complete visibility into the number of contractors present in the contingency environment, and for whom the department may have to develop plans for such services as force protection, sustenance, and repatriation of injured or deceased contractor personnel. 
For example, one official at a DOD combatant command stated that DOD can more quickly repatriate injured or deceased contractors when their information in SPOT is up-to-date and accurate. Not using the mechanisms it already has in place to track contractor performance limits DOD’s assurance that contractors have abided by business rules to provide timely and reliable data, and requires DOD officials to devote additional resources to conducting the quarterly manual census. JAMMS is a major data source for SPOT-ES for tracking contractor personnel at key deployment sites such as dining halls and military airfields; however, SPOT-ES does not receive timely and reliable data from JAMMS. DOD policy states that information solutions shall provide, among other things, reliable, timely, and accurate information. Moreover, DOD has noted that the purpose of SPOT is to provide the combatant commander with accurate, real-time information on all personnel within specified geographic combatant command operations areas and to enable the department to keep track of all persons deployed in contingency zones. We found examples in which deployed contractors provided three types of unreliable data to JAMMS: not scanning at all, scanning incorrect documents, or scanning documents into a terminal that reported back to SPOT-ES as if it were in a different location. First, a review of data at one deployed operational area from August 2013 through March 2014 found that contractor scans in the week preceding the date the data were pulled represented less than half of the contractors believed to be in the area. Second, JAMMS terminals can scan incorrect documents because they accept bar-coded items that are not identification documents, such as supermarket loyalty cards, as well as identification documents, such as letters of authorization, that do not contain photos. 
For example, in the last two weeks of September 2014, there were almost 15,000 JAMMS scans in Afghanistan that could not be linked to either a deployment in SPOT or a Defense Enrollment Eligibility Reporting System record. Third, officials told us that they sometimes receive erroneous data from JAMMS terminals about the locations of contractor personnel. For example, officials at Central Command said that they have received reports that contractor personnel were at sites that they never visited. We also reviewed documentation about one site in Afghanistan that reported that more than 20 percent of records represented individuals who either never scanned their identification documents or whose most recent scan was in a country other than Afghanistan. We also found that JAMMS data may not always be transmitted to SPOT in a timely manner. Although DOD officials have indicated an intention to develop a networking capability that would allow JAMMS data to be transmitted to SPOT in real time, data are currently transmitted by uploading compact discs from stand-alone JAMMS terminals to networked computers. The JAMMS User Manual and the System of Record Notice for SPOT-ES indicate that JAMMS uploads personnel movement records to SPOT-ES daily. However, personnel at operating locations vary in the frequency with which they upload JAMMS location data into SPOT. For example, SPOT program office officials told us that data usually are uploaded every several days. In contrast, officials at Central Command said their data are uploaded about every week, but that at one site, data for 20 percent of contractor personnel were not uploaded for 30 days. Personnel at operating locations are not consistently ensuring that JAMMS data are transmitted accurately or in a timely manner into SPOT-ES because they lack comprehensive guidance from DOD that describes the purpose of JAMMS and its role in supporting plans for different types of missions. 
The lack of guidance stems from a lack of consensus within DOD about requirements for visibility and accountability—such as what constitutes “near real-time” movement tracking. Officials at the Joint Staff and in combatant commands told us that in the absence of such guidance, they do not know what resources or emphasis to allocate to correcting problems that they encounter. For example, they could minimize inaccurate scans by posting a monitor at each JAMMS terminal to verify that everyone scans a valid identification document, but such an action has resource implications that could conflict with other command priorities. Without clear guidance about the purpose of JAMMS, such as direction about what types of missions it is to support, the department cannot help Joint Staff and combatant command planners determine where to locate JAMMS terminals and what resources to allocate to minimize inaccurate scans. For example, combatant commanders would need more precise data if they expected to use JAMMS to plan a short-notice operation such as a noncombatant evacuation than they would to calculate how much to bill contractors for their employees’ use of on-base dining or medical facilities. Such guidance could also assist program-management officials as they develop cost estimates and associated plans, as more frequent data uploads or the potential networking capability described earlier would be more applicable to some missions than others. DOD has completed interoperability certification testing between SPOT-ES and its data sources, but has not fully registered or approved the system’s data to ensure data are visible and trusted. DOD’s SPOT-ES program office completed interoperability certification testing of SPOT-ES with the systems that provide it with data. 
In March 2014, the Defense Information Systems Agency Joint Interoperability Test Command certified that the latest version of SPOT-ES meets all joint critical interoperability requirements, such as supporting military operations and effectively exchanging information with other systems. Our review of documentation for SPOT-ES found that the system began as a rapid capability (i.e., a capability to be delivered quickly), was conditionally approved for full deployment in November 2010, and did not receive joint interoperability certification until March 2014. In addition, we found that three different DOD offices provided support to SPOT-ES before the Defense Manpower Data Center assumed management and operation of the system in November 2013. Nevertheless, DOD’s SPOT-ES program-management office completed SPOT-ES interoperability certification testing and associated steps, such as developing and submitting an Information Support Plan for DOD approval. The SPOT-ES program office has not ensured that the system’s data are visible and trusted because it has not fully registered the system’s data in the Data Services Environment (DSE). In May 2003, DOD’s Chief Information Officer issued a memorandum that provided guidance for managing data in a net-centric environment and highlighted key attributes of the department’s net-centric strategy, including ensuring that data are visible; that all data are associated with metadata to enable discovery by users; and that all data are posted to shared spaces to provide access to users except when limited by security, policy, or regulation. According to DOD, the DSE is DOD’s primary resource for registering, sharing, and publishing different types of metadata about systems, services, and data resources to support DOD operational capabilities, data standards, and needs for all authorized users. 
DOD Instruction 8320.02 requires heads of DOD components to register all authoritative data sources, information technology services, and required metadata in the DSE, and further states as policy that data will be made visible and trusted, among other things, for all authorized users. According to the guidance, data is made visible by creating and associating metadata. Data is considered trusted when there is sufficient pedigree and descriptive metadata for users to rely on it as an authoritative data source. The guidance identifies three key processes related to authoritative data sources: (1) data collection, which supports the registration of new data needs, data producers, systems, and databases; (2) data association, which defines all the systems, data producers, and data needs that make up the proposed structure of the authoritative data source; and (3) authoritative data source approval, which involves the authoritative body reviewing and making a determination to approve or not approve the proposed authoritative data source. Figure 3 provides a high-level overview of the key processes related to authoritative data sources. (Department of Defense, Deputy Under Secretary of Defense for Logistics and Materiel Readiness and Deputy Under Secretary of Defense for Program Integration, “Designation of Synchronized Predeployment and Operational Tracker (SPOT) as Central Repository for Information on Contractors Deploying with the Force (CDF),” memorandum (Washington, D.C.: Jan. 25, 2007).) State issued guidance in February 2014 to address the statutory requirements for data collection on contract support for future contingency operations. Section 844 of the National Defense Authorization Act for Fiscal Year 2013 requires DOD, State, and USAID to each issue guidance regarding data collection on contract support for future contingency operations outside the United States that involve combat operations. 
The guidance is to ensure that each agency takes the steps necessary to possess the capability to collect and report on at least eight data elements that the statute enumerates. These elements are: (1) total number of contracts entered into as of the date of any report; (2) total number of such contracts that are active as of such date; (3) total value of contracts entered into as of such date; (4) total value of such contracts that are active as of such date; (5) identification of the extent to which the contracts entered into as of such date were entered into using competitive procedures; (6) total number of contractor personnel working under contracts entered into as of the end of each calendar quarter during the 1-year period ending on such date; (7) total number of contractor personnel performing security functions under contracts entered into as of the end of each calendar quarter during the 1-year period ending on such date; and (8) total number of contractor personnel killed or wounded under any contracts entered into. We found that a provision in State’s Foreign Affairs Manual lists each element and stipulates which data source the department will use to gather information on each of the eight data elements. For example, State obtains data on contract value and the use of competitive procedures from FPDS-NG, according to the provision. DOD published guidance about operational contract support in 2011, which officials in the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics told us meets the statutory requirement. However, unlike the State guidance, the DOD guidance does not clearly cover each data element listed in section 844 and may not ensure that the department has the capability to collect and report on the required data. This guidance identifies as DOD policy that contractor visibility and accountability shall be maintained in applicable contingency operations through a common joint database—SPOT or its successor. 
Also, the guidance provides some information on contract visibility and contractor accountability responsibilities and procedures, including responsibilities related to several of the data elements specified in section 844. For example, the guidance indicates that SPOT or its successor shall contain, or link to, minimum contract information, such as contract number, contract category, period of performance, contracting agency, and contracting office, necessary to establish and maintain accountability and visibility of certain contractor personnel, maintain information on specific equipment related to private security contracts, and the contract capabilities in relevant operations (Operational Contract Support, 32 C.F.R. pt. 158; Department of Defense Instruction 3020.41, Operational Contract Support (OCS) (Dec. 20, 2011)). The guidance also provides for the collection of information on certain contractor personnel, but it is not clear that it would ensure the collection of all personnel information required by section 844. For example, it is not clear whether the collection of employee data based on the guidance would ensure the identification of contractor personnel performing security functions or the number killed or wounded. DOD officials said that they have not updated their guidance to specifically address each data element and source because the 2011 instruction meets their needs, but commented that the instruction is undergoing a revision and it would be useful to update SPOT provisions during that process. Without current and comprehensive guidance that identifies the data elements to collect, the systems with which to collect and report them, and relevant responsibilities and procedures, DOD lacks assurance that it can take the steps necessary to collect and report on the eight required data elements. Moreover, DOD may find it more difficult to reconcile information when there are multiple sources for the same data element. 
For example, SPOT receives some information about contract numbers and about whether a contract was awarded competitively both from the government-wide FPDS-NG and from contractors’ manual entry. An update would also clarify current procedures to help ensure the collection of the total number and total value of contracts active as of a reporting date, as well as the number and value of contracts entered into as of the reporting date. USAID has published guidance about the use of GLAAS and SPOT to record data about contracts and contractor personnel, but this guidance may also not ensure that the agency has the capability to collect and report on the required data. USAID officials told us in May 2014 that they were not aware of guidance to implement the statutory requirement, and our review of agency guidance revealed no provisions specifically tied to section 844. However, in November 2014, officials added that USAID believed that its current policy and procedures related to data collection on contract support for contingency operations were sufficient. According to a USAID management official, the agency reports on data related to the number, value, and competition of contracts through the reporting functionality of GLAAS, but it is not clear that the corresponding guidance identified by the official would ensure the collection of each data element. The official also cited two Acquisition and Assistance Policy Directives (AAPD), dated 2009 and 2010. These directives require USAID contracting officers to include a provision in certain contracts with performance in Iraq and Afghanistan that identifies SPOT as the required system to use for personnel data. However, these contract provisions do not specifically address data collection on contractors who are killed or wounded, and they do not address any future contingency operations beyond Iraq or Afghanistan. 
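Guidance like State’s, which stipulates a data source for each statutory element, can be checked mechanically for coverage of all eight elements. A hypothetical sketch; the element labels and the FPDS-NG/SPOT assignments are our own shorthand, loosely modeled on the State provision described earlier, not any agency’s actual guidance:

```python
# The eight section 844 data elements, in shorthand form (our labels).
REQUIRED_ELEMENTS = (
    "contracts_entered", "contracts_active",
    "value_entered", "value_active",
    "competitive_procedures",
    "personnel_total", "personnel_security", "personnel_casualties",
)

# Hypothetical element-to-source mapping in the spirit of State's
# approach (FPDS-NG for contract data, SPOT for personnel data).
element_sources = {
    "contracts_entered": "FPDS-NG",
    "contracts_active": "FPDS-NG",
    "value_entered": "FPDS-NG",
    "value_active": "FPDS-NG",
    "competitive_procedures": "FPDS-NG",
    "personnel_total": "SPOT",
    "personnel_security": "SPOT",
    # "personnel_casualties" left unmapped to mirror the kind of gap
    # the report notes in DOD and USAID guidance.
}

uncovered = [e for e in REQUIRED_ELEMENTS if e not in element_sources]
print("elements without a designated source:", uncovered)
```

A guidance document that enumerates every element alongside its designated system makes this kind of coverage check trivial; guidance that does not leaves the gap invisible until reporting is attempted.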
As with DOD, until USAID develops current and comprehensive guidance that identifies the data elements to collect, the systems with which to collect and report them, and relevant responsibilities and procedures, it cannot ensure that it has the ability to take the steps necessary to collect and report on the eight required data elements. DOD, State, and USAID expect to continue to rely on contractor personnel to augment military and civilian personnel, as they did in recent operations in Iraq and Afghanistan. While SPOT-ES is a central repository in which the three agencies have stored information about nearly a million contractor personnel, the data system is not comprehensive and reliable. Additionally, DOD has not updated its life-cycle cost estimate with actual costs or to reflect changes in costs due to SPOT-ES schedule delays and program changes. Also, DOD has not fully defined and assessed some of the plans whose costs should be incorporated as cost elements in its life-cycle cost estimate. Consistent with cost-estimating guidance and standards for internal control, without updating its cost estimates to include defined plans for SPOT-ES further development, DOD may not be able to fully determine the future resources needed to sustain SPOT-ES. Ensuring that the SPOT-ES program updates its cost estimate to include defined program plans could help the program to better plan resource requirements and make decisions. DOD has developed business rules to enter data about contracts and contractor personnel in SPOT, but neither SPOT nor JAMMS can provide contractor personnel data that are consistently timely and reliable. 
Without a reasonable assurance of timely and reliable contract and contractor personnel data, DOD has incomplete visibility into the number of contractors present in the contingency environment, and for whom the department may have to develop plans for such services as force protection, sustenance, and repatriation of wounded or deceased contractor personnel. DOD has not used existing mechanisms for tracking contractor performance that could help provide reasonable assurance that contractors have abided by business rules to provide timely and reliable data, and relieve DOD officials of the need to continue to devote resources to conducting the quarterly manual census. Finally, DOD has not developed comprehensive guidance for JAMMS, which could provide the combatant commands with better information on how to allocate their resources to maximize JAMMS’ utility as a tracking and planning tool. The SPOT-ES program office has completed interoperability certification testing for the system, but the program office did not fully register the system’s data or complete the steps required of authoritative data sources in the DSE. Full registration and approval of SPOT-ES data in the DSE would ensure data are visible and trusted and provide authorized users seeking authoritative data on contracts and contractor personnel confidence in the information, which may prevent the need to develop their own data and potential duplication of efforts. In addition, full registration and approval may preclude data producers from continuing to rely on individual data solutions for their systems rather than leveraging shared common data across the DOD enterprise. State has issued guidance to address section 844 of the National Defense Authorization Act for Fiscal Year 2013, which requires DOD, State, and USAID to each issue guidance regarding data collection on contract support for future contingency operations. 
The guidance is to ensure that the agencies have the capability to collect and report at least eight data elements. DOD, however, issued guidance in 2011 that may not guarantee that the department takes the steps necessary to ensure the capability to collect and report on the eight required data elements, and USAID guidance may also not ensure complete data collection. For both DOD and USAID, including information on each of the eight elements in departmental guidance could better ensure the ability to collect and report on those elements, possibly including better timeliness and accuracy of data entry. To help improve DOD, State, and USAID’s ability to track contracts and contractor personnel in contingency operations, we are making the following five recommendations to DOD: To ensure SPOT-ES cost estimates are accurate and comprehensive, we recommend that the Under Secretary of Defense for Personnel and Readiness, in coordination with the Under Secretary of Defense for Acquisition, Technology and Logistics, direct the system’s program office to regularly update its life-cycle cost estimate, to include defining and assessing its plans for SPOT-ES. To help improve the timeliness and reliability of data in SPOT-ES, the Secretary of Defense should direct Defense Procurement and Acquisition Policy officials, through the Under Secretary of Defense for Acquisition, Technology and Logistics, to ensure that contracting officers use available mechanisms to track contractor performance of SPOT data entry, such as the Contractor Performance Assessment Reporting System or other appropriate performance systems or databases. 
To provide clarity about expectations for JAMMS that can help improve the timeliness and reliability of data for SPOT-ES from JAMMS uploads, the Secretary of Defense should direct the Chairman of the Joint Chiefs of Staff, in coordination with the combatant commanders, to develop comprehensive guidance regarding the purpose of JAMMS and its role in supporting plans for different types of missions. Such guidance could include direction on the number and location of JAMMS terminals and how frequently JAMMS’s data should be uploaded into SPOT-ES to meet DOD’s information needs. To enhance the value of SPOT-ES data, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to fully register SPOT-ES data in the DSE to make data visible and trusted, including taking the necessary steps related to authoritative data sources. To help ensure that DOD possesses the capability to collect and report statutorily required information and to clarify responsibilities and procedures, the Secretary of Defense should direct the Under Secretary of Defense for Acquisition, Technology and Logistics to update SPOT provisions during the process of updating operational contract support guidance. In addition, we are making the following recommendation to USAID: To help ensure that USAID possesses the capability to collect and report statutorily required information, the Administrator of USAID should issue current and comprehensive guidance regarding data collection on contract support for future contingency operations outside the United States that involve combat operations. We provided a draft of this report to DOD, USAID, and State for their review and comment. DOD and USAID provided written comments, which are summarized below and reprinted in appendixes II and III, respectively. State did not provide comments on the report. DOD concurred with four of the five recommendations directed to it and partially concurred with the fifth. 
DOD also described actions underway or plans to address the recommendations. USAID agreed with the recommendation directed to it and described plans to address it. DOD concurred with the first recommendation, that the SPOT-ES program office regularly update the system’s life-cycle cost estimate to include defining and assessing its plans for SPOT-ES. DOD stated that the Under Secretary of Defense for Personnel and Readiness would direct the system program office to regularly update its life-cycle cost estimate to include defining and assessing its plans for SPOT-ES. We believe that these actions, if fully implemented, would address the recommendation and better ensure that SPOT-ES cost estimates are accurate and comprehensive. DOD concurred with the second recommendation, that the Secretary of Defense direct Defense Procurement and Acquisition Policy officials, through the Under Secretary of Defense for Acquisition, Technology and Logistics, to ensure that contracting officers use available mechanisms to track contractor performance of SPOT data entry, such as DOD’s Contractor Performance Assessment Reporting System or other appropriate performance systems or databases. DOD stated that the Under Secretary of Defense for Acquisition, Technology, and Logistics would direct Defense Procurement and Acquisition Policy officials to ensure that DOD contracting officers use available mechanisms to improve compliance with contract requirements for SPOT data entry. If DOD uses available mechanisms to improve compliance with contract requirements, that would address the intent of the recommendation and improve timeliness and reliability of data in SPOT-ES. DOD partially concurred with the third recommendation, that the Secretary of Defense direct the Chairman of the Joint Chiefs of Staff, in coordination with the combatant commanders, to develop comprehensive guidance regarding the purpose of JAMMS and its role in supporting plans for different types of missions. 
Such guidance could include direction on the number and location of JAMMS terminals and how frequently JAMMS’s data should be uploaded into SPOT-ES to meet DOD’s information needs. DOD stated that it agreed to provide clarity regarding the purpose and use of JAMMS to improve the timeliness and reliability of JAMMS data, but it did not agree that such guidance could include direction on the number and location of JAMMS terminals and how frequently JAMMS’s data should be uploaded into SPOT-ES. DOD stated that it would revise language in DOD Instruction 3020.41, Operational Contract Support, to reflect in policy the requirement to use the entire SPOT Enterprise Suite (SPOT-ES), which includes JAMMS. DOD also stated that the combatant commander should establish the requirements for terminal quantities and locations and for data upload schedules based on operational needs in the relevant theater. We agree with DOD that the combatant commands need flexibility based on operational requirements. The recommendation was intended to allow for such flexibility by suggesting the types of direction that could be included in the guidance, which would facilitate consistency in the use of terminals and data across the commands. Further, as we recommended, this guidance should be developed in coordination with the combatant commanders. DOD concurred with the fourth recommendation, that the Under Secretary of Defense for Personnel and Readiness fully register SPOT-ES data in the DSE to make data visible and trusted, including taking the necessary steps related to authoritative data sources. DOD stated that it agreed to complete the process of registering the system’s data, including validation of authoritative data sources. We believe that, if fully implemented, these steps would address the recommendation and enhance the value of the system’s data for authorized users seeking authoritative data on contracts and contractor personnel. 
DOD concurred with the fifth recommendation, that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics to update SPOT provisions during the process of updating operational contract support guidance. DOD stated that DOD Instruction 3020.41, Operational Contract Support, is scheduled to be republished in 2016 and will be updated to specifically identify the statutorily required data elements to ensure collection and reporting can be accomplished. We believe this action, if fully implemented, would address the recommendation and help to ensure that DOD possesses the capability to collect and report statutorily required information and to clarify responsibilities and procedures.

USAID agreed with the recommendation that the Administrator of USAID issue current and comprehensive guidance regarding data collection on contract support for future contingency operations outside the United States that involve combat operations. USAID stated that such guidance would explicitly describe the Section 844 statutory requirement, list each of the eight data elements required by Section 844, and stipulate the data source that USAID will use to collect and report data on each specific element. USAID stated that the guidance would refer back to its policies and procedures as appropriate and supplement them as needed to ensure that responsibilities and procedures are adequately defined. We believe that issuing such guidance would address the recommendation and help to ensure that USAID possesses the capability to collect and report statutorily required information.

We are sending copies of this report to appropriate congressional committees, the Secretary of Defense, the Chairman of the Joint Chiefs of Staff, the Secretary of State, and the Administrator of the U.S. Agency for International Development. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-5431 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

To determine the extent to which the Department of Defense (DOD) and United States Agency for International Development (USAID) have assessed resources needed to sustain the systems used to track contracts and contractor personnel, we reviewed and compared agencies’ funding information, cost estimates, systems plans, and schedules for DOD’s Synchronized Predeployment and Operational Tracker–Enterprise Suite (SPOT-ES) and USAID’s Global Acquisition and Assistance System (GLAAS) to accepted cost-estimating guidance and internal control standards. We did not review resources State needs because State uses SPOT-ES and does not contribute any funds to its operation or development. We limited our review of funding information for SPOT-ES and GLAAS to fiscal years 2013 through 2015 because SPOT-ES’s current program office assumed full operational and management control of SPOT-ES in fiscal year 2013 and GLAAS became operational at all USAID offices worldwide in fiscal year 2013. Also, we defined resources to include the costs of maintaining and updating each system. For GLAAS, we reviewed funding information provided by the system’s program office, funding estimates found in the President’s Budget Requests, and Office of Management and Budget information on information technology spending. We acquired and reviewed funding information for SPOT-ES from the DOD Comptroller, the program office, and the President’s Budget Requests. We accepted values as reported by the Office of the Under Secretary of Defense (Comptroller).
For USAID, we compared GLAAS’s business cases for fiscal years 2013 and 2014, GLAAS’s Earned Value Management Metrics, and GLAAS’s cost estimates for its modernization projects and operational activities to determine how costs were assessed or adjusted. For DOD, we reviewed and analyzed SPOT-ES schedules for upgrades, the program acquisition baseline, and the SPOT-ES fiscal year 2013 business case to determine whether costs were assessed and estimates were updated. We also interviewed officials from the agencies’ system program offices about how they developed, and decided whether to update, cost estimates.

To determine the extent to which DOD has developed business rules and processes to ensure the timeliness and reliability of data, we obtained and reviewed documents on system usage and business rules and user guides for SPOT-ES. These included the January and November 2014 versions of DOD’s Business Rules for SPOT-ES; DOD’s User Guides for SPOT, the Joint Asset Movement Management System (JAMMS), and the Total Operational Picture Support System (TOPSS); and State’s and USAID’s Business Rules for SPOT-ES. We sent a structured set of questions on data reliability and use and evaluated the responses. These responses included details about data quality checks that the program office performed. We also sent a series of requests for specific data from both SPOT and JAMMS records to the SPOT-ES program management office during August and September 2014, and received data pulls based on those requests in installments from September through November 2014. We interviewed officials at the three agencies about their efforts to improve SPOT-ES timeliness and reliability. We determined that the data were not reliable for determining exact numbers of contractor personnel or their exact locations at a point in time.
However, we used them primarily to illustrate that data accuracy depended on contractors and contracting officers entering data according to the business rules, and secondarily to provide approximate contractor personnel totals, and they were sufficiently reliable for those purposes. At DOD, we interviewed officials in the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics (Deputy Assistant Secretary of Defense for Program Support and Directorate for Defense Procurement and Acquisition Policy); at the Joint Staff, Directorates for Manpower and Personnel (J-1) and Logistics (J-4); and at U.S. Central Command headquarters and U.S. Forces-Afghanistan. We also circulated questions to officials at the other five geographic combatant commands about their experiences with SPOT-ES, particularly with SPOT and JAMMS, and analyzed their responses. U.S. Africa Command did not provide written responses but addressed some of the questions during other meetings related to operational contract support. At State, we met with officials from the Offices of Logistics Management and Acquisitions Management in Arlington, Virginia. In the interest of producing an unclassified report, we excluded the classified versions of both SPOT and TOPSS from our scope.

To determine the extent to which DOD has completed interoperability testing and registered and approved SPOT-ES data, we reviewed DOD guidance on sharing data and information in the department, as well as guidance on interoperability. We reviewed relevant documents, including DOD’s Defense Information Systems Agency Joint Interoperability Certification of SPOT-ES; Interim Certificates to Operate granted to SPOT-ES by the Interoperability Steering Group; the DOD Data Services Environment (DSE) Concept of Operations; and SPOT-ES’s Tailored Information Support Plan and DSE profile.
We obtained access to DOD’s DSE and reviewed SPOT-ES’s profile to identify whether the program office had completed all appropriate steps to register the system’s data in the DSE: this included comparing SPOT-ES’s profile against another profile for an approved authoritative data source. Also, we interviewed DOD officials from the SPOT-ES program office about the system’s capabilities; and conducted a telephone interview with officials from the DOD Defense Information Systems Agency’s DSE to discuss SPOT-ES’s profile and confirm what requirements were and were not completed for the system in the DSE. We also obtained and reviewed responses provided by officials with the Defense Information Systems Agency’s DSE regarding SPOT-ES’s DSE profile and completion of requirements in the registry; and responses provided by the SPOT-ES program office regarding technical information on the system. We also circulated a standard set of questions to DOD and USAID on the systems used to track contract and contractor personnel data and analyzed the results, and determined that the information was sufficiently reliable for the purposes for which we used it. That is, we determined that DOD collects and reports on contract and contractor personnel data and USAID collects and reports on contract data through its own system. We did not verify figures about the total value of contracts, total number of contractor personnel, or other attributes of the data. The standard set of questions we circulated to DOD and USAID asked detailed and technical questions about the systems. For example, for system architecture we asked about system interfaces and for DOD to identify the systems that provide data to SPOT-ES. Similarly, we asked USAID to identify what systems interface with GLAAS. 
We also asked about data quality controls and limitations: for example, we asked DOD and USAID to provide their perception regarding the data quality of their respective systems for collecting and reporting contract and contractor personnel data. In the case of USAID, we also asked for its perception regarding the data reliability of DOD’s SPOT. We collected responses from DOD and USAID regarding their contract and contractor personnel data collection systems and conducted follow-up on their responses when needed.

To determine the extent to which agencies have developed guidance to meet statutory data collection and reporting requirements related to contract support for future contingency operations, we obtained and reviewed documents, including relevant provisions in State’s Foreign Affairs Manual; USAID’s Automated Directives System and Acquisition and Assistance Policy Directives; and DOD Instruction 3020.41, Operational Contract Support. We analyzed these provisions to determine whether they addressed each of the eight specific data elements related to contracts and contractor personnel that are in section 844 of the National Defense Authorization Act for Fiscal Year 2013. We also interviewed officials to learn about how they related departmental or agency guidance to statutory requirements. At DOD, these included officials at the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics (Program Support); the deputy director of the Joint Staff for Logistics; and the SPOT-ES program office. At State, we met with officials from the Offices of Logistics Management and Acquisitions Management in Arlington, Virginia; at USAID, we met with officials from the Bureau of Management (Office of Management Policy, Budget and Performance).

We conducted this performance audit from April 2014 to February 2015 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, the following individuals made contributions to this report: Carole F. Coffey, Assistant Director (retired); Tim DiNapoli; Jamarla Edwards; Rebecca Guerrero; Michael Holland; Amie Steele Lesser; Sally Newman; Richard Powelson; Michael Shaughnessy; and Michael Silver.
SPOT-ES contains data on almost 1 million contractor personnel who have supported DOD, State, and USAID in contingency operations. Also, USAID's GLAAS provides data, such as award value, for reports to Congress on contract support. The National Defense Authorization Act for Fiscal Year 2013 mandated that GAO review the data systems of DOD, State, and USAID related to certain contract support. This report evaluates the extent to which, among other things, (1) DOD and USAID have assessed resources needed to sustain the systems used to track contracts and contractor personnel; (2) DOD has developed business rules and processes to help ensure the timeliness and reliability of SPOT-ES data; and (3) DOD has completed interoperability testing and registered and approved data for SPOT-ES. GAO reviewed DOD and USAID documents, such as cost schedules, business rules, and user manuals, and interviewed cognizant officials.

The U.S. Agency for International Development (USAID) has assessed resources that it needs to sustain its contract data system, the Global Acquisition and Assistance System (GLAAS), but the Department of Defense (DOD) has not assessed all resources that it will need to sustain the Synchronized Predeployment and Operational Tracker–Enterprise Suite (SPOT-ES). DOD, the Department of State (State), and USAID use SPOT-ES as a repository of information on contracts and contractor personnel in contingency operations; USAID also uses GLAAS to record information about contracts. DOD uses the budget process to identify resources it projects it will need in the next budget year to modernize and operate its systems, but DOD has not updated its life-cycle cost estimate or fully defined and assessed its plans to determine all resources needed to sustain SPOT-ES. For example, DOD has not updated its life-cycle cost estimate since 2010, despite changes to costs due to schedule delays, because officials said the system has proven stable.
Also, DOD has not defined some of its plans that involve cost elements that need to be included in the estimate because it accepted the system's previous program management estimates as reported. GAO's Cost Estimating and Assessment Guide states that cost estimates should be current and comprehensive. Without regularly updating life-cycle costs and defining and assessing plans to provide a full accounting for the system's costs, management will have difficulty planning program resource requirements and making decisions.

DOD has business rules for the entry of contract and contractor personnel data in SPOT—the database component of SPOT-ES—but lacks reasonable assurance that SPOT provides personnel data that are consistently timely and reliable because the department does not use its available mechanisms for assessing contractor performance to track whether contractors enter data in accordance with the business rules. The business rules, DOD guidance, and an applicable Defense Federal Acquisition Regulation Supplement clause describe how contractors and contracting officers are to enter data in SPOT. Using existing mechanisms for tracking contractor performance could provide DOD reasonable assurance that contractors have abided by business rules to enter and provide timely and reliable data.

DOD has completed SPOT-ES interoperability testing, but has not fully registered or approved the system's data. DOD Instruction 8320.02 directs heads of DOD components to register authoritative data sources and metadata in DOD's Data Services Environment (DSE), its primary online repository for technical descriptions related to information technology and systems for all authorized users, and provides policy that data will be visible and trusted. GAO found that registration for SPOT-ES data was not completed, although program officials thought they had completed all the steps needed to register the system.
Full registration and approval in the DSE would help ensure that data are visible and trusted.

GAO recommends, among other things, that DOD regularly update its life-cycle cost estimate for SPOT-ES to include defining and assessing its plans for SPOT-ES; use mechanisms to track contractor performance of SPOT-ES data entry; and complete SPOT-ES registration in the DSE. DOD concurred with these recommendations and described planned steps to address them.
In 1995, VA began transforming its delivery and management of health care to expand access to care and increase efficiency. As part of this transformation, VA decentralized decision-making and budgeting authority to 22 Veterans Integrated Service Networks, which became responsible for managing all VA health care. The networks and their health care locations became responsible for responding to changing inpatient food service needs and for maintaining or improving quality. Since 1995, the networks have focused on providing care in the most appropriate setting by following headquarters’ guidance and responding to performance measurement incentives. This has resulted in an increase in outpatient care and a decrease in inpatient care. The inpatient average daily census has declined by 35 percent during this period (see fig. 1). Because the decreased number of inpatients meant less need for food services, VA downsized its inpatient food service staff by about 22 percent as a result of actions taken by networks and inpatient locations (see fig. 2).

Unlike most health care systems, VA divides its food service operations into inpatient and retail operations, usually with separate kitchens and staff at each inpatient location. The NFS program, funded by appropriations, is responsible for ensuring that VA’s inpatients receive quality nutrition as an integrated part of their health care. VCS is generally responsible for providing food and other retail services to outpatients, visitors, and employees at VA’s health care delivery locations. Although the law authorizes VCS to receive appropriations, VCS has operated for many years solely on funds earned from sales. As with direct health care services, VA’s networks have also explored ways to improve services that support health care, such as food service operations.
While VA networks have the option to focus exclusively on improving the efficiency of in-house provision of food service, they also have the option of competing their in-house operations against contractors to improve efficiency. VA could do this through the Office of Management and Budget (OMB) Circular A-76 process. In the A-76 process, the government identifies the work to be performed—described in the performance work statement—and prepares an in-house cost estimate, based on its most efficient organization, to compare with the best offer from the private sector.

To enhance the efficiency of food service, VA has consolidated food production (the cooking and preparation of food) for 28 inpatient locations into kitchens at 10 VA inpatient locations. One of these consolidations took place in the Central Texas Healthcare System and resulted in elimination of food production at two facilities. This example illustrates key elements of the consolidation process. Before consolidation, the Temple, Waco, and Marlin locations each produced their own food for average daily inpatient populations of 664, 679, and 74, respectively. After consolidation, food for Waco and Marlin was produced at Temple because adequate space was available and driving distances (the time needed to transport food) to the receiving locations were less than 90 minutes. The consolidation was phased in over about 3 years and completed in 1998. The consolidation required one-time equipment purchases of about $1 million and resulted in recurring annual labor savings of about $1.3 million. Labor savings were achieved by a reduction of 32 employees, primarily through attrition and buyouts. The Central Texas Healthcare System produces food in one location and transports it to other locations using an advance food preparation and delivery system. Food is prepared in advance and chilled for serving up to 5 days later.
The chilled food can be transported in refrigerated trucks from one location to another without losing freshness or becoming unsafe. The food is reheated at the location where it is served. VA reports that patient satisfaction at the Central Texas Health Care System is higher, as measured by patient surveys, since consolidation. VA’s NFS dietitians continue to have responsibility for ensuring food quality and that the nutrition needs of patients are met.

Additional VA health care regions provide opportunities for consolidation. For example, four VA locations in the Chicago area are within a 1-hour drive of one another (see fig. 3); in fact, three are within 20 minutes of each other. Yet all four continue to prepare their own food for inpatients. The Chicago network is developing plans for food consolidation for some of these locations. Overall, VA currently has 63 unconsolidated production locations within 90 minutes’ drive of another production location. Our analysis suggests that VA could increase its efficiency by consolidating food production for these 63 locations into 29 production locations (see fig. 4). These consolidations could save an estimated $12 million annually from a reduction of 348 employees, with as many as 38 positions eliminated in a single location. To achieve these savings, we estimate that VA may have to make a one-time investment of an estimated $11 million to purchase advance food preparation and delivery equipment. (One-time expenditures are held to this amount because 24 of the potential consolidation locations already own the advance food delivery equipment, which makes up the bulk of equipment costs.) Making the changes required to consolidate food production requires management commitment to a process that may take several years and much effort to achieve but one that could yield significant savings.
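The consolidation analysis above rests on two simple rules: a production location is a consolidation candidate if it sits within a 90-minute drive of another production location, and a consolidation's simple payback is its one-time equipment cost divided by its recurring annual labor savings. The Python sketch below illustrates both rules; it is a hypothetical illustration, not a reproduction of GAO's analysis. The location names and drive times are invented (only Temple, Waco, and Marlin appear in the text, and "FarSite" is entirely fictitious), while the payback example uses the Central Texas figures reported above.

```python
# Illustrative sketch of the consolidation screening and payback arithmetic
# described in the text. Drive times are hypothetical; the payback example
# uses the Central Texas figures (~$1 million one-time equipment cost,
# ~$1.3 million recurring annual labor savings).

MAX_DRIVE_MINUTES = 90  # screening threshold used in the text

# One-way driving minutes between pairs of food production locations.
drive_minutes = {
    ("Temple", "Waco"): 35,
    ("Temple", "Marlin"): 30,
    ("Waco", "Marlin"): 25,
    ("Temple", "FarSite"): 120,  # hypothetical location beyond the threshold
}

def consolidation_candidates(times, limit=MAX_DRIVE_MINUTES):
    """Locations within `limit` minutes of at least one other production site."""
    candidates = set()
    for (a, b), minutes in times.items():
        if minutes <= limit:
            candidates.update((a, b))
    return candidates

def simple_payback_years(one_time_cost, annual_savings):
    """Years of recurring savings needed to recover the one-time investment."""
    return one_time_cost / annual_savings

print(sorted(consolidation_candidates(drive_minutes)))
# FarSite is excluded: its only link (120 minutes) exceeds the 90-minute limit.

payback = simple_payback_years(1_000_000, 1_300_000)
print(f"Central Texas simple payback: {payback:.2f} years")  # about 0.77 years
```

By the same arithmetic, the estimated $11 million system-wide investment set against $12 million in estimated annual savings would be recovered in under a year, which is consistent with the report's emphasis on recurring labor savings.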
Network officials indicated in our survey of VA’s health care networks that 29 production locations are considering or planning to consolidate food production. In commenting on a draft version of this report, VA stated that networks 1 (Boston), 3 (Bronx), 8 (Bay Pines), 12 (Chicago), and 22 (Long Beach) have conducted feasibility studies to consider consolidated production. VA has already consolidated some food production locations in these networks. However, these networks could potentially consolidate 14 additional locations into 7 locations. VA’s actual savings from consolidations could exceed our estimates for two reasons. First, VA’s Central Texas Health Care System consolidation, from which we obtained a benchmark for estimating potential savings, does not appear to have yet achieved its full savings potential, which suggests that our savings estimate may be understated. VA officials have indicated that several food service positions will not be filled when they become vacant. Some positions were retained to minimize involuntary separation of employees. Second, we used a 90-minute driving distance to determine potential consolidations, and it seems possible that VA could elect to use greater distances. For example, the VA facility in Dayton is preparing and delivering food as far as Butler, Pennsylvania—a 6-hour drive. The Dayton facility has technologies that can keep food safely chilled for more than 30 days. In addition, two facilities in Texas that are about a 2-hour drive from one another are currently in the process of consolidating their food systems. Using greater travel distances could allow more facilities to be consolidated, thereby increasing cost savings.

VA can save millions of dollars in labor costs by employing VCS workers, rather than NFS workers, to provide inpatient food service. These savings can be achieved because these workers are paid, on average, about 30 percent less than NFS wage grade employees.
The wage differences between the two result from differences in how wage rates for their respective pay schedules are determined. VCS job descriptions are similar to those of NFS and both receive similar training when providing inpatient food services. VCS workers are federal government employees paid under the Non Appropriated Funds Regular Wage Rate Schedule. NFS workers are also federal government employees but are paid under the Federal Wage System Regular and Special Production Facilitating Wage Rate Schedule. Both VCS and NFS employees have the same standard government benefit coverage. VA is able to employ VCS workers to provide inpatient services through NFS agreements with VCS under the Economy Act. Recently, nine VA locations began to employ VCS workers rather than NFS workers to provide inpatient food services (see app. II for a list of these locations). In some of these locations, VCS employees provide all inpatient food services; in others VCS workers are only beginning to be included in inpatient food services. In all cases, NFS dietitians continue to ensure food service quality. Before these changes to VCS inpatient food service, VCS had only provided retail food service at these locations.

Three of the locations converting to VCS labor were at Marion, Illinois, and the Jefferson Barracks and John Cochran locations in St. Louis, Missouri. These examples illustrate different stages of VCS conversion and different sizes of health care facilities. VA began its VCS conversion in Marion, Illinois, in 1997. Today, Marion employs mostly VCS workers to serve an average daily census of 95 patients. VA reports that patient satisfaction is higher, as measured by patient surveys, than it was before and that NFS dietitians continue to be responsible for quality. When the conversion to VCS employees is complete, VA estimates that $375,000 a year could be saved through reductions in wage costs.
NFS workers have left Marion inpatient food service through normal attrition, including retirement, moving to other VA jobs, or leaving VA voluntarily. Personnel changes were monitored by the facility’s Labor Management Partnership Council, which included union representation. Those employees who remain retain their NFS salaries. St. Louis’s two locations began VCS integration in 1999. Today, the consolidated St. Louis locations serve an average daily census of 301 inpatients by employing NFS employees and a VCS manager. Other VCS employees are being recruited. When fully implemented, VA estimates that St. Louis could save $803,000 in wage costs annually. St. Louis expects to follow Marion’s experience in protecting current NFS employees’ job security and salary and phasing in VCS conversion.

Our analysis suggests that VA could lower labor costs by an estimated $67 million annually (in addition to the estimated $12 million consolidation savings discussed earlier) if less-expensive VCS workers are employed in place of NFS workers at 166 additional locations. The Marion and St. Louis experiences suggest that the full extent of these savings would be realized over a number of years as VCS conversion is phased in. However, some savings can be achieved in the first year of implementation. Currently, NFS wage grade workers provide inpatient food services at these 166 locations. VCS employees could cook and prepare food, distribute food to patients, and retrieve and wash dishes, trays, and utensils for inpatients at these locations while NFS dietitians continue to assure quality. Three locations—Kansas City, Leavenworth, and Topeka—are scheduled to begin conversion to VCS inpatient food service provision. In our survey of VA health care networks, VA officials indicated that another location is considering conversion.
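The wage-differential arithmetic behind these estimates can be sketched simply: the annual savings from converting a location is roughly the NFS payroll being replaced times the average wage differential. The 30 percent differential comes from the text; the payroll figure in the example below is a hypothetical input chosen so the result matches the Marion estimate, not a VA number. This is a minimal illustration, not VA's or GAO's actual costing method.

```python
# Minimal sketch of the wage-differential savings arithmetic described in
# the text. The 30% differential is from the report; the payroll input is
# hypothetical.

VCS_WAGE_DIFFERENTIAL = 0.30  # VCS workers paid ~30% less than NFS workers

def conversion_savings(nfs_payroll_replaced, differential=VCS_WAGE_DIFFERENTIAL):
    """Estimated annual savings from replacing NFS payroll with VCS labor."""
    return nfs_payroll_replaced * differential

# A location replacing a hypothetical $1.25 million of NFS payroll would
# save about $375,000 a year, consistent with the Marion estimate above.
print(f"${conversion_savings(1_250_000):,.0f}")  # $375,000
```

Actual savings at any location would depend on local staffing, attrition rates, and the salary protections described above, which is why the report calls for location-by-location studies.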
Making the changes required to convert to VCS inpatient food service provision requires management commitment to a process that may take several years and much effort to achieve but has the potential for significant cost savings. Actual savings may vary from our estimates because of many local factors at each inpatient location. To determine actual savings through the use of VCS employees, VA would need to conduct studies of each inpatient food location and weigh alternatives for providing the lowest-cost food service while maintaining quality. VA would also need to incorporate in this process consideration of the effect such changes could have on other VA priorities, such as maintaining job opportunities for veterans and compensated work therapy patients. A key element of such a study is recognition that VA’s inpatient food service operations are developing along the lines of other hospital food service operations, which are changing the nature of the hospital food service industry. This includes the use of more pre-prepared food products, less need for specialized cooking skills, and more reliance on computer ordering for preparation and placement of food on patient trays. All of these processes reduce both the need for a higher-skilled work force and the degree of training needed to successfully produce and distribute hospital food, whether VA inpatient food service is provided by NFS or VCS. NFS and VCS managers agree that employees can be trained more quickly today than in the past to provide inpatient food services. VCS managers also believe that higher turnover rates for lower-paid employees would not adversely affect services.

VA uses private contractors for inpatient food services at two inpatient locations—Sodexho Marriott at its Anchorage domiciliary and SkyChef at the Honolulu nursing home. These locations have no VCS retail food services and have only a long-term-care inpatient mission.
In addition, both locations began inpatient food services with a contractor rather than with NFS employees. While VA has used competitive sourcing only to a limited extent, our analysis suggests that VA may be able to lower costs by determining if in-house or private-sector provision of food services is more cost effective. VA could realize additional savings by competing, through the use of OMB’s Circular A-76, the costs of government provision of these services versus the costs of private-sector provision. Our work at the Department of Defense shows that, by competitive sourcing under OMB Circular A-76, costs decline through increased efficiencies whether the government or the private sector wins the competition to provide services. This work indicates that savings are probable for VA, but we cannot estimate potential savings from competitive sourcing because of uncertainty regarding the availability of interested contractors at each VA location, the price of contractor services, and the extent to which VA food services units are able to decrease their operating costs in a competitive process. Savings from competitive sourcing might be higher if VA expanded competitive sourcing to include locations that combine NFS inpatient and VCS retail operations. When food contractors provide services to non-VA hospitals, they usually operate both inpatient and retail as one operation and most of their profits come from retail sales, according to food service contractors with whom we spoke. However, VA may not offer the most attractive business opportunity for food contractors for two reasons. First, VCS opposes consideration of contracting for retail food services because it uses profits from a minority of profitable locations to subsidize operations at the remainder.
Moreover, VCS believes that some of its other retail activities, such as vending of toiletries and personal articles that are not generally provided by food service contractors, are not viable without retail food. This is important to VCS because it receives no appropriations and funds its operations based on revenues earned. Second, the small size of VA inpatient workloads at many locations may be less attractive to contractors because there is less opportunity to spread fixed costs over higher volume. For example, 27 percent of VA locations have an average daily census of less than 100 inpatients, and 56 percent have an average daily census of less than 200. However, it may be possible for potential contractors to combine food services at smaller locations with services at other nearby VA and non-VA locations to generate higher volume.

To achieve savings through competitive sourcing, VA would need to conduct studies of each inpatient food location to weigh alternatives for providing the lowest-cost food service while maintaining quality. In these studies, VA would need to consider the effect such changes could have on other VA priorities, such as maintaining job opportunities for veterans and compensated work therapy patients. To date, however, VA has done little to explore either its own experience with using contractors or contractor interest. Although fostering competition among government and private contractors to provide food services can be a time-consuming process, it offers opportunities to create more efficient and less costly operations when in-house organizations win the competition, or savings when private competitors win. This process can be demanding, however, and requires strong management commitment to achieve. VA could foster competition among government and private providers in the provision of inpatient food service by using the competitive process of OMB’s Circular A-76.
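The A-76 cost comparison described above weighs the government's most-efficient-organization (MEO) estimate against the best private-sector offer. The sketch below illustrates that decision rule under stated assumptions: the 10 percent conversion differential applied before converting work to contract is an assumption based on the A-76 process of that era (the actual rule applied the differential to in-house personnel costs, not total costs), and all dollar figures are hypothetical.

```python
# Hedged sketch of an A-76-style cost comparison. The 10% conversion
# differential and the dollar figures are assumptions for illustration;
# this is not the official A-76 calculation.

CONVERSION_DIFFERENTIAL = 0.10  # assumed minimum-savings threshold to convert

def a76_decision(in_house_meo_cost, best_contract_offer,
                 differential=CONVERSION_DIFFERENTIAL):
    """Convert to contract only if the offer beats the MEO estimate by more
    than the conversion differential; otherwise keep the work in-house."""
    threshold = in_house_meo_cost * (1 - differential)
    return "contract" if best_contract_offer < threshold else "in-house"

# A $4.2M offer beats a $5.0M MEO estimate by more than 10%, so the work
# would convert; a $4.6M offer would not clear the threshold.
print(a76_decision(in_house_meo_cost=5_000_000, best_contract_offer=4_200_000))
print(a76_decision(in_house_meo_cost=5_000_000, best_contract_offer=4_600_000))
```

Either outcome can lower costs: as the text notes for DOD, the competition itself tends to drive down the in-house estimate even when the government organization wins.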
VA could compete all its food service operations or any part of these services at each location. VA could consider competitive sourcing alone or in combination with consolidation or use of VCS employees, as we discussed earlier. VA has opportunities to save millions of dollars by systematically considering consolidating food production, employing VCS workers to provide inpatient food services, and competitive sourcing. VA already has experience in implementing these options at a number of locations, although VA’s experience with food service contractors is limited. VA has not, however, systematically compared these options at all 177 inpatient locations. Using a systematic approach to assess available options at each location would allow VA to provide food service at the lowest cost consistent with maintaining quality. We recommend that the Acting Secretary of Veterans Affairs direct the Under Secretary for Health to direct the 22 networks to (1) systematically assess each inpatient food service location to determine if consolidation, employment of VCS workers, competitive sourcing, or a combination of these options would reduce costs while maintaining quality; and (2) implement the least-costly options in a timely manner. We received written comments on a draft of this report from VA’s Acting Secretary and the National President of AFGE. Their comments and our responses are discussed in the following sections. The comments in their entirety from VA and AFGE are in appendixes III and IV, respectively. VA agreed in principle with our recommendations, noting that it is already consolidating food production locations, converting to VCS inpatient food service provision, and using competitive sourcing. VA should be commended for its progress to date. However, VA has not systematically assessed each of these options at each location as we recommend. 
VA stated that the three options we identified are part of its Nutrition and Food Service strategic plan for improving quality and cost effectiveness. In our review of the plan, we found the VCS option to be clearly identified. However, the consolidation option discussed in the plan appears to deal with NFS consolidation with other services rather than consolidating food production locations, and we found no reference to competitive sourcing. In addition, we found no reference to the systematic assessments we recommend. We believe the strategic plan could help VA implement our recommendations if the plan clearly specified that all three options we identified to reduce costs are to be systematically assessed for each location. Although VA agreed with our recommendation for timely implementation, it provided no operational plan or timeline for conducting the assessments we recommended. VA stated that it is assessing the feasibility and subsequent implementation of these options at a deliberate pace to carefully consider relevant factors. We agree that VA should carefully consider these factors but believe the recommended assessments should be completed as expeditiously as possible. Delay means that millions of dollars per year may be spent unnecessarily on food services. VA expressed specific concerns on a number of issues. Consolidation of food production. VA raised issues regarding (1) the need to do a study at each location, (2) transportation of perishable food, (3) costs, (4) VA's Subsistence Prime Vendor (SPV) program, and (5) integration of NFS employees with environmental management services. First, VA stated that studies of food consolidation have already been done in Veterans Integrated Service Networks 1 (Boston), 3 (Bronx), 8 (Bay Pines), 12 (Chicago), and 22 (Long Beach), suggesting that additional studies are not needed at each location in these networks. We commend VA's efforts to study ways to reduce costs in these networks. 
However, based on our discussions with NFS officials at several of these networks and reviews of several of these studies, we disagree that VA has systematically assessed all three options in each network. VA focused more on the potential for consolidations, but this option may be even more cost-effective if implemented in conjunction with the use of VCS employees or competitive sourcing in these networks. Because VA has not assessed all three options, it may not have identified the least-costly options in each network. Second, VA stated that the safety of transporting perishable food products and related logistics are key factors in determining the viability of consolidating VA facility food production. VA's statement suggests that, as a result, fewer locations may be able to consolidate than we estimated and that the pace of consolidation could be slow. We agree that VA needs to carefully consider these factors, but we accounted for the transportation and logistical issues in our analysis based on VA's experience. As discussed in the report, VA has successfully addressed these factors in 28 other locations that are comparable to the potential locations we identified. Therefore, we do not view such factors as reasons for not moving ahead expeditiously but rather as factors that require strong management commitment in order to realize potential savings. Third, VA stated that large capital investment costs for equipment and space are key factors affecting the viability of potential consolidations. Again, we agree. However, investment costs must be assessed within the context of potential savings. For example, once fully implemented, the savings realized in 1 year under the consolidation of food services in the Central Texas Healthcare System exceeded the investment costs, making that consolidation viable. 
We included in our assessments of the viability of consolidation at other VA locations the costs of a blast-chill system of food production, such as that operated by the Central Texas Healthcare System, and the costs of the related advanced food delivery equipment. Therefore, the potential consolidation locations we identified could achieve annual savings greater than the required investment costs within a reasonable time period. Fourth, VA also stated that its SPV program needs to be considered in consolidation decisions. The SPV program reduces the costs of food items through high-volume purchases by all of VA and certain other government agencies. We agree that the SPV program should be considered in consolidation decisions at each location, but we doubt that this would affect a decision on whether to consolidate. Our review of consolidations showed that savings result from reduced labor costs, not reduced food costs. Moreover, we doubt that the SPV program will affect food costs in a consolidation because the same number of patients will be fed whether consolidation occurs or not, and all VA locations already participate in the SPV program. Fifth, VA stated that integration of NFS employees with environmental management services should be considered in consolidations. NFS integration with environmental management services includes having some employees work in both services so that an employee with downtime in food services can work in environmental services and vice versa. Again, we agree that this factor should be considered in consolidations at each location, but it is unclear how this would affect a consolidation decision. While integrating NFS workers with other services can reduce food production costs without consolidation by shifting unneeded staff time and charges to other services, it is unlikely to reduce costs to the degree they would be reduced in consolidation. 
Consolidation reduces costs primarily through economies of scale: fewer workers at one location can produce food for patients at two or more locations than would be needed to produce food separately at each location. Therefore, consolidation would provide greater cost savings. In addition, NFS integration with environmental management services could be included in a consolidation. Employing VCS workers. VA raised issues regarding (1) the time needed to phase in conversions, (2) variability in savings by location, (3) separation costs, and (4) training costs. First, VA stated in its comments, and we agree, that the savings from converting to VCS workers would take years to fully achieve. However, VA officials told us that some savings are possible in the first year of implementation. The magnitude of the savings possible makes the effort worthwhile even if several years are required to fully achieve them. Our report reflects this point. Our savings estimate of $67 million represents the total potential annual cost reductions for employing VCS workers to provide inpatient food services, not the savings that could be realized in fiscal year 2001. VA would not realize the full savings at each location for a number of years because VCS workers would be phased in only as NFS workers left through normal attrition, such as retirement, voluntarily leaving for other VA jobs, or leaving for jobs outside VA. Second, VA stated that potential savings from employing VCS workers to provide inpatient food services would vary from location to location, making it difficult to project a total cost benefit at this time. We agree that actual savings achieved would likely vary from location to location. However, we estimated total potential savings assuming that VA's locations could save an average of about 30 percent of combined wage and benefit costs. This rate approximates the rate VA is realizing in its conversion to VCS employees at Marion, Illinois. 
VCS headquarters managers and network and facility officials in the VCS conversions studied agreed that using a 30 percent savings rate is reasonable for estimating nationwide savings. Third, VA also suggested that our estimated savings for employing VCS workers are overstated because of additional separation costs for NFS employees that would be required to implement this option. We do not agree. In the VCS conversions we reviewed, NFS workers typically continue working until they leave through normal attrition, including retirement, moving to other jobs in VA, or leaving VA voluntarily. Thus, no special separation costs are incurred. Fourth, VA stated that training costs could reduce our estimated savings. VA said these training costs would be for (1) NFS workers who leave food service to take other VA jobs, (2) VCS employees who replace NFS employees, and (3) part-time workers providing food service. We do not agree that these costs would reduce our estimated savings. As previously discussed, in VCS conversions NFS workers are expected to leave through normal attrition, such as retirement, voluntarily leaving for other VA jobs, or voluntarily leaving for jobs outside VA. The training for NFS employees taking other jobs would be required whether NFS or non-NFS employees were hired for those jobs. Similarly, training for VCS employees replacing NFS employees would be required whether the replacements were VCS or other employees. Finally, both VCS and NFS already use many part-time workers, and VA indicates it will continue this strategy. As a result, these training costs would be required in any event and are not additional costs. Competitive sourcing. Although VA concurred with our recommendation to consider competitive sourcing as an option in providing food services, VA raised concerns about the opportunities to use contractors in VA's inpatient settings. 
We agree, as stated in the report, that VA may not offer the most attractive business opportunity for food contractors because of VA's unique structure of providing inpatient and retail food services separately at its locations and because of the small inpatient workload at most locations. Because of these and other uncertainties, we could not estimate the number of locations that could benefit from competitive sourcing or the potential savings. Nonetheless, we believe that competitive sourcing should be considered because of its potential to increase efficiency. As previously discussed, our work in other areas has shown that the competitive sourcing process reduces costs through increased efficiency whether the government or a contractor wins the competition to provide services. AFGE opposed all three options we included for study in our recommendations, expressing a number of concerns regarding these options. AFGE's overarching concern is whether VA should focus its cost containment strategies on efforts that, in its view, could further impoverish current workers or compromise food quality. While we understand and appreciate AFGE's legitimate concerns about current workers' wages and employment and the quality of food provided to veterans, we believe VA can adequately address these concerns when implementing our recommendations. In the past, VA has demonstrated the ability to implement comparable options without adversely affecting food service workers. Further, our discussions with VA officials indicate that they remain sensitive to the importance of taking appropriate steps to prevent adverse effects on current food service workers. We discuss AFGE's specific concerns below. Employing VCS workers. AFGE expressed six concerns about employing VCS workers in place of NFS workers to provide inpatient food service. First, AFGE stated that our estimate of $67 million in annual savings from employing VCS workers is misleading. 
AFGE said that the savings we estimated would be a one-time occurrence and establish a new baseline once achieved. We do not agree. Because there is no specific appropriation for inpatient food services, VA will not return savings from its food service operations to the U.S. Treasury and thereby establish a new lower baseline budget for VA. Rather, VA retains the savings achieved through management efficiencies in its budget, thereby making the savings available for other purposes in each subsequent year. Second, AFGE suggested that part of the savings we estimated are based on the government paying less for its match of employee health care premiums because lower-paid VCS employees will less frequently participate in government-sponsored health care plans than NFS employees. We did not assume that government costs would be less because fewer VCS workers would participate in government-sponsored health care plans than NFS workers. Information provided by VA shows that the proportion of NFS and VCS workers currently purchasing health insurance through government plans is 32 and 25 percent, respectively. Third, AFGE said that our estimated savings for VA in employing VCS workers are overstated because they do not include increased federal costs for programs such as Medicaid, the Earned Income Tax Credit, the CHIP (Children’s Health Insurance Program), Head Start, Housing and Urban Development rent subsidies, and other expenses related to increasing the ranks of the working poor. We disagree that our savings are overstated because our assessment of VA’s recent experience suggests there would be little or no additional costs to other federal programs as a result of VCS conversion. Based on VA’s experience to date, no NFS worker has had his or her wages reduced or lost employment under the VCS conversions we reviewed and no VCS worker was required to accept lower wages and benefits than they already had or could obtain elsewhere. 
In VCS conversions, NFS workers are being replaced as a result of normal attrition, including retirement, voluntarily moving to other jobs in VA, or voluntarily leaving for non-VA jobs. As such, the departing NFS workers would have the same impact on other federal programs as if there were no VCS conversion. Current VCS workers who replace NFS workers maintain their wages and benefits and therefore have no impact on other federal programs. Newly hired VCS workers who replace NFS workers choose VCS over other employment opportunities. Presumably, wages for these new workers are competitive with wages in the jobs these workers otherwise would have taken. Fourth, AFGE raised questions regarding the legality of VCS providing inpatient food services in place of NFS employees under the Economy Act. AFGE questioned whether VCS could enter into an agreement under the Economy Act and supervise civil service employees, such as NFS employees, and whether VCS and NFS employees with similar job descriptions could be paid different wages. We found no legal deficiency in these areas under VA's use of the Economy Act. An "instrumentality of the United States," VCS is authorized to receive and has received appropriated funds credited to a revolving fund. VCS's revolving fund is a permanent, indefinite appropriation available to cover its operating expenses. Therefore, we agree with VA that VCS can be a party to an agreement under the Economy Act. In addition, VCS employees hold "excepted" positions within the federal civil service and are not barred from supervising NFS employees. Finally, VCS employee positions are exempt under 38 U.S.C. 7802(5) from the requirements of title 5 of the United States Code regarding equal pay, and VCS employees are subject to a different pay scale than NFS employees. Fifth, AFGE said that it will take years to realize the estimated savings. We agree that it will take years to fully realize these savings, as our discussion of Marion and St. 
Louis indicates, but some savings can begin to accrue in the first year of implementation. Moreover, the amount of savings possible makes the effort worthwhile even if several years are required to fully achieve them. Sixth, AFGE said that higher VCS turnover rates will create problems for converting to VCS provision of inpatient food services. We do not agree. Based on experience to date, VCS managers at headquarters and at Marion have stated that turnover has not affected their ability to provide inpatient food services or affected quality. Consolidation of food production. AFGE expressed two concerns related to consolidation of food production and incorrectly stated that we said that VCS opposes consolidation. First, AFGE said that our estimates of kitchen consolidation savings are overstated because we underestimate the financial and practical costs of losing in-house food production. We do not agree. Our savings estimates account for the additional costs required by consolidation that were identified by VA officials and representatives of the food service industry who have consolidated food production locations. As we discuss in our evaluation of the Central Texas Healthcare System, our savings model is conservative and probably understates savings. Second, AFGE stated that consolidations lower the quality of food provided to veterans; for example, AFGE expressed concerns regarding frozen food and other issues. We disagree. As we discussed in the report, VA reports that patient satisfaction increased at the Central Texas Healthcare System after consolidation, as measured by improvements in the taste and temperature of food. The Central Texas Healthcare System received an award from VA headquarters for reducing costs and maintaining quality in its consolidation activities. The award included citations for (1) provision of consistently high-quality meals, (2) improvements in timeliness, (3) increased patient satisfaction, and (4) maintenance of quality controls. 
Moreover, in all VA locations that consolidate, NFS dietitians continue to have quality control responsibility to ensure that veterans' nutrition needs are met. AFGE also stated that we noted that VCS opposes privatization and centralization. We said that VCS opposes privatizing the services it provides, but we did not say that VCS opposes consolidation. In fact, VCS officials told us that VCS does not oppose consolidation. Competitive sourcing. AFGE expressed five concerns about competitive sourcing. First, AFGE stated that there is no evidence that contracting saves money. We believe it is important to distinguish between an objective to contract and an objective to compete government versus private service provision. Our recommendation is that VA consider competitively sourcing food service operations rather than outright contracting as an end in itself. Competitive sourcing can result in the government either retaining its position as service provider or contracting with a private provider. As we have discussed, our work shows that competitive sourcing reduces costs through increased efficiency. The costs are reduced whether the government or the private contractor wins the competition. We believe it would be a mistake to eliminate the competitive sourcing option for reducing VA's costs. Second, AFGE expressed concern as to whether VA would use the OMB Circular A-76 process for competitive sourcing or contract without the benefit of a public-private competition. We agree that VA could, under limited circumstances specified in OMB's Circular A-76, convert to contract performance without cost comparison. However, our recommendation to VA was that it consider competitive sourcing rather than contracting, and VA agreed in principle with our recommendation. Third, AFGE also expressed concern about the quality of food service under contracting. 
We do not share AFGE's concern because the same quality controls VA currently uses for in-house provision of food service could be included and enforced in the contract if a private firm competes and wins under competitive sourcing. We note that some of VA's medical affiliates, including major university hospitals, provide inpatient food service through contractors. Fourth, AFGE expressed concern that veterans currently employed in VA's in-house food production could lose their jobs if a contractor wins the competition. We agree this is possible. As stated in the report, we believe that VA should include this as a consideration in its assessments of food service at each location. We note, however, that government employees adversely affected by decisions under the OMB Circular A-76 competition process often are offered positions with winning contractors. VA could specify, as other agencies have, that a contractor hire such employees if it wins the competition. Fifth, AFGE stated that there is little opportunity for a contractor to provide services less expensively than VA if VA uses lower-paid VCS employees. AFGE believes that the only way to lower costs in contracting is to lower wages and does not believe this is possible if a contractor is competing with VCS's wage rates. We disagree. Competitive sourcing is an incentive for both the government and the contractor to increase efficiency as much as possible to achieve cost reductions. These increased efficiencies can be achieved through improvements in process operations that reduce the amount of capital or human resources needed to process the same workload. As arranged with your staff, we are sending copies of this report to the Honorable Hershel W. Gober, Acting Secretary of Veterans Affairs; interested congressional committees; and other interested parties. We will make copies available to others upon request. 
If you have any questions about this report, please call me at (202) 512-7101. Other staff who contributed to this report are listed in appendix V. We reviewed the Department of Veterans Affairs (VA) inpatient food services for fiscal year 1999 to assess potential savings nationwide if VA were to implement system-wide the three types of initiatives it has used at some of its inpatient health care locations: (1) consolidating food production, (2) employing Veterans Canteen Service (VCS) rather than Nutrition and Food Service (NFS) workers to provide inpatient food services, and (3) competitive sourcing. We interviewed VA headquarters officials in NFS, VCS, the Office of General Counsel, and other offices. We obtained documents from headquarters on the consolidation of food service, the use of VCS labor, and contracting with private food service contractors. We obtained data on food services at each inpatient location by surveying each Veterans Integrated Service Network. We obtained information on food service needs, how VA provides services, costs, and the number of meals at each VA inpatient location. Networks and locations also provided us with information on advanced food technologies and excess capacity, and with additional information on consolidating food services, the use of VCS, and private contractors. We also obtained additional data through interviews, documents, and physical inspections of kitchen facilities and food delivery at VA locations. We visited Veterans Integrated Service Network 17 (Dallas) locations in Temple, Marlin, Waco, and Dallas. We also visited locations in Marion, Illinois, and Jefferson Barracks and John Cochran in St. Louis, Missouri, in Veterans Integrated Service Network 15 (Kansas City). To estimate savings from consolidation, we first identified areas with multiple food production locations, using the criterion that two or more locations were located within 90 minutes' driving distance of each other. 
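The market-identification step described above can be sketched as a simple grouping pass over pairwise driving times. The locations, drive times, and helper function below are hypothetical illustrations, not VA data; only the 90-minute criterion comes from the methodology.

```python
# Illustrative sketch: group locations into candidate consolidation markets by
# treating two locations as connected if they are within 90 minutes' driving
# distance, then taking connected groups of two or more locations (union-find).
def candidate_markets(locations, drive_minutes, threshold=90):
    """drive_minutes maps (a, b) location pairs to one-way driving minutes."""
    parent = {loc: loc for loc in locations}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for (a, b), minutes in drive_minutes.items():
        if minutes <= threshold:
            parent[find(a)] = find(b)

    groups = {}
    for loc in locations:
        groups.setdefault(find(loc), set()).add(loc)
    # Only groups with two or more locations are candidate markets.
    return [g for g in groups.values() if len(g) >= 2]

# Hypothetical drive times: Temple, Marlin, and Waco close together; Dallas too far.
locs = ["Temple", "Marlin", "Waco", "Dallas"]
times = {("Temple", "Marlin"): 35, ("Temple", "Waco"): 40,
         ("Marlin", "Waco"): 30, ("Temple", "Dallas"): 130,
         ("Marlin", "Dallas"): 140, ("Waco", "Dallas"): 95}
markets = candidate_markets(locs, times)
```

Under these hypothetical drive times, the sketch yields one candidate market containing Temple, Marlin, and Waco, with Dallas left on its own.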
We then examined the combined workloads and costs of unconsolidated locations in these markets to determine whether savings could be achieved through consolidation. Locations were considered to be already consolidated if they received 80 percent or more of their food from another location or produced 80 percent or more of the food for another location. Our analysis of VA cost data and discussions with VA officials suggested that the ratio of employees to the average number of daily patients (average daily inpatient census) is an appropriate measure for benchmarking savings in food services. We confirmed this relationship using 1999 data by regressing total employees on average daily patients. The resulting model showed that average daily patients accounted for 86 percent of the variation in staffing. We computed savings estimates for the consolidations using the staffing ratio of one employee per 6.7 average daily patients. This staffing ratio was achieved by the Central Texas Healthcare System after completing consolidation of inpatient food services at Temple, Marlin, and Waco. To validate this measure, we spoke to VA officials representing both NFS and VCS, who agreed that using the Central Texas Healthcare System staffing ratio after consolidation was a reasonable, perhaps conservative, estimate of achievable staffing levels. Some VA production locations, in fact, are more efficient (that is, have a lower ratio of employees to average daily patients) than operations at the Central Texas Healthcare System. 
To calculate total savings from food consolidation, we first multiplied the total average number of daily patients of the proposed market by the Central Texas Healthcare System staffing ratio (one employee per 6.7 average daily patients) to arrive at a projected employee total for the consolidated market. We then subtracted this projected total from the fiscal year 1999 employee total of the individual locations in an area to determine the number of employees not needed, if any. Cost savings for the area were computed by multiplying the number of positions saved by the average salary costs of NFS wage-grade employees, including benefits, within each market. We aggregated savings from each market to determine the total savings from food consolidation. The one-time investment for equipment was estimated by assuming that one location in each consolidated area required an advanced food preparation system and every location required an advanced food delivery system. To project the total cost of advanced food preparation equipment (a fixed cost that includes items such as the blast chiller), we multiplied the cost of the Central Texas Healthcare System's advanced food preparation system (purchase amount adjusted to 1999 dollars) by the number of locations within areas that required this system. We calculated the total cost for the advanced food delivery systems (a variable cost that includes items such as reheating carts, trays, and plates) by multiplying the total average daily patients of locations without this system by the Central Texas Healthcare System's cost per average daily patient (adjusted to 1999 dollars). We calculated the costs of transporting food from a central location using data obtained from the Central Texas Healthcare System. To project the total costs of transportation for the consolidated areas, we multiplied the annual cost of one leased refrigerated truck by the total number of consolidated areas. 
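The savings computation described above can be sketched for a single market as follows. All dollar figures and staffing counts are hypothetical placeholders; only the one-employee-per-6.7-patients staffing ratio comes from the methodology, and the recurring truck cost is netted against annual savings as described.

```python
# Illustrative sketch of the consolidation savings model for one market.
# The salary, equipment, and transportation figures are placeholders, not
# VA's actual costs; only the staffing ratio is taken from the methodology.
STAFFING_RATIO = 6.7  # one employee per 6.7 average daily patients (Central Texas benchmark)

def consolidation_savings(locations, avg_salary, prep_system_cost,
                          delivery_cost_per_patient, truck_cost_per_year):
    """Estimate annual recurring savings and one-time equipment costs.

    locations: list of (average_daily_census, current_employees) tuples.
    """
    total_census = sum(c for c, _ in locations)
    current_staff = sum(e for _, e in locations)
    projected_staff = total_census / STAFFING_RATIO
    positions_saved = max(current_staff - projected_staff, 0)
    gross_savings = positions_saved * avg_salary
    # The leased refrigerated truck recurs each year, so it is netted out.
    annual_savings = gross_savings - truck_cost_per_year
    # One food preparation system per area; delivery systems scale with census.
    one_time_cost = prep_system_cost + delivery_cost_per_patient * total_census
    return annual_savings, one_time_cost

# Hypothetical two-location market.
annual, one_time = consolidation_savings(
    locations=[(180, 40), (120, 30)],
    avg_salary=35_000,            # wage-grade salary plus benefits (placeholder)
    prep_system_cost=500_000,     # blast-chill production system (placeholder)
    delivery_cost_per_patient=1_500,
    truck_cost_per_year=40_000,
)
```

In this hypothetical market of 300 average daily patients and 70 current employees, the benchmark implies roughly 45 employees are needed, so about 25 positions are saved and annual savings exceed the one-time investment within about a year.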
Because this cost recurs each year, we subtracted it from the annual recurring savings from consolidation. We determined the potential savings from converting from NFS to VCS labor by applying a 30 percent savings reduction to NFS employee costs. VCS salaries are based on the Department of Defense's survey of food service worker wages in a local area and are competitive with the private sector. Nationally, NFS salaries average about 70 percent of total NFS food production costs. VCS salaries are normally about 30 percent below NFS salaries. VCS headquarters established this percentage, and network and facility officials agreed that using a 30 percent savings rate is reasonable. We also conducted a literature review of the food services industry, interviewed selected non-VA food service officials and officials from the private vendor sector and food service industry organizations, and visited contractor food production facilities. We validated survey data used to construct cost estimates by comparing questionable data supplied on the 1999 survey with VA data sources. When necessary, we also contacted survey respondents and/or VA officials to clarify or correct data. We performed our review between October 1999 and November 2000 in accordance with generally accepted government auditing standards. Deborah L. Edwards, James C. Musselwhite, William R. Stanco, John R. Kirstein, Thomas A. Walke, Elsie M. Picyk, Susan Lawes, John G. Brosnan, and Roger J. Thomas contributed to this report. VA Laundry Service: Consolidations and Competitive Sourcing Could Save Millions (GAO/01-61, Nov. 30, 2000). VA Health Care: VA Is Struggling to Respond to Asset Realignment Challenges (GAO/T-HEHS-00-91, Apr. 6, 2000). VA Health Care: VA Is Struggling to Address Asset Realignment Challenges (GAO/T-HEHS-00-88, Apr. 5, 2000). VA Health Care: Laundry Service Operations and Costs (GAO/HEHS-00-16, Dec. 21, 1999). VA Health Care: Food Service Operations and Costs at Inpatient Facilities (GAO/HEHS-00-17, Nov. 19, 1999). 
Veterans’HealthCare:FiscalYear2000Budget(GAO/HEHS-99-189R, Sept. 14, 1999). VAHealthCare:ImprovementsNeededinCapitalAssetPlanningand Budgeting(GAO/HEHS-99-145, Aug. 13, 1999). VAHealthCare:ChallengesFacingVAinDevelopinganAssetRealignment Process(GAO/T-HEHS-99-173, July 22, 1999). VAHealthCare:ProgressandChallengesinProvidingCaretoVeterans (GAO/T-HEHS-99-158, July 15, 1999). Veterans’Affairs:ProgressandChallengesinTransformingHealthCare (GAO/T-HEHS-99-109, Apr. 15, 1999). VAHealthCare:CapitalAssetPlanningandBudgetingNeedImprovement (GAO/T-HEHS-99-83, Mar. 10, 1999). The first copy of each GAO report is free. Additional copies of reports are $2 each. A check or money order should be made out to the Superintendent of Documents. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Ordersbymail: U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Ordersbyvisiting: Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Ordersbyphone: (202) 512-6000 fax: (202) 512-6061 TDD (202) 512-2537 Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists. Web site: http://www.gao.gov/fraudnet/fraudnet.htm e-mail: [email protected] 1-800-424-5454 (automated answering system)
The Department of Veterans Affairs (VA) could save millions of dollars by systematically consolidating food production, employing Veterans Canteen Service workers to provide inpatient food services, and using competitive sourcing. VA already has experience in implementing these options at several locations, although VA's experience with food service contractors is limited. Using a systematic approach to assess available options at each location would allow VA to provide food service at the lowest cost while maintaining quality.
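The labor-savings arithmetic in the methodology above can be sketched numerically. This is a simplified illustration using the report’s stated assumptions (VCS salaries run about 30 percent below NFS salaries, and NFS salaries average about 70 percent of total NFS food production costs); the facility cost figure is hypothetical, not drawn from the report.

```python
def labor_conversion_savings(total_production_cost, labor_share=0.70, vcs_discount=0.30):
    """Estimate annual savings from converting NFS labor to VCS labor.

    labor_share: fraction of total food production cost that is NFS salaries
                 (the report cites about 70 percent nationally).
    vcs_discount: VCS salaries are normally about 30 percent below NFS salaries.
    """
    nfs_labor_cost = total_production_cost * labor_share
    return nfs_labor_cost * vcs_discount

# Hypothetical facility with $2.0 million in annual NFS food production costs:
savings = labor_conversion_savings(2_000_000)
# 70% of $2.0M is $1.4M in NFS salaries; a 30% reduction saves $420,000 per year.
```

Note that because the 30 percent discount applies only to the labor share, the net effect is roughly a 21 percent reduction (0.70 × 0.30) in total food production costs, not a 30 percent reduction.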
In our April 2013 report, we found that costs increased and schedules were delayed for all four of VA’s largest medical-facility construction projects, when comparing November 2012 construction project data with the cost and schedule estimates first submitted to Congress. Since our 2013 report, these projects have experienced further increases and delays. When we compared the most recent construction project data, as of December 2014, with the cost and schedule estimates first submitted to Congress, cost increases ranged from 66 percent to 144 percent, representing a total cost increase of over $1.5 billion and an average increase of approximately $376 million per project. For example, the cost for the New Orleans project increased by nearly $40 million. Schedule delays have also increased since our April 2013 report. Specifically, in April 2013 we reported that the schedule delays ranged from 14 to 74 months, with an average delay of 35 months per project. The delays now range from 14 to 86 months. For instance, the delays in Orlando have extended from 39 months to 57 months. Table 1 presents updated information on cost increases and schedule delays for these four projects compared with original estimates. We found in April 2013 that of the four largest medical-facility construction projects VA had underway, Denver had the highest cost increase. We reported that the estimated cost increased from $328 million in June 2004 to $800 million, as of November 2012. Further, VA’s initial estimated completion date was February 2014; subsequently, VA estimated the project would be completed in May 2015. However, in April 2014, VA’s primary contractor on the project expressed concerns that the project would ultimately cost more to complete. In a January 2015 update, VA stated that the final project cost and schedule will be determined pursuant to execution of an interim cost-plus-fixed-fee contract and issuance of a long-term contract by the U.S. Army Corps of Engineers.
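The per-project average cited above follows directly from the reported totals. A simple check, noting that the $1.504 billion figure is an assumption consistent with the statement’s rounded "over $1.5 billion" total and its approximately $376 million average:

```python
# Rounded totals from the statement; the exact $1.504 billion is assumed,
# back-solved from the reported ~$376 million per-project average.
total_cost_increase = 1_504_000_000
projects = 4
average_increase = total_cost_increase / projects   # about $376 million per project
```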
In commenting on a draft of our April 2013 report, VA stated that using the initial completion date from the construction contract would be more accurate than using the initial completion date provided to Congress; however, using the initial completion date from the construction contract would not account for how VA managed these projects before it awarded the construction contract. Cost estimates at this earlier stage should be as accurate and credible as possible because Congress uses these initial estimates to consider authorizations and make appropriations decisions. We used a similar methodology to estimate changes to the cost and schedule of construction projects in a previous report on VA construction projects issued in 2009. We believe that the methodology we used in our April 2013 and December 2009 reports on VA construction provides an accurate depiction of how costs and schedules for construction projects can change from the time they are first submitted to Congress. It is at this time that expectations are set among stakeholders, including the veterans’ community, for when projects will be completed and at what cost. In our April 2013 report, we made recommendations to VA, discussed later in this statement, to help address these cost increases and schedule delays. In our April 2013 report, we found that different factors contributed to cost increases and schedule delays at each of the four locations we reviewed: Changing health care needs of the local veteran population changed the scope of the Las Vegas project. VA officials told us that the Las Vegas Medical Center was initially planned as an expanded clinic co-located with Nellis Air Force Base. However, VA later determined that a much larger medical center was needed in Las Vegas after it became clear that an inpatient medical center shared with the Air Force would be inadequate to serve the medical needs of local veterans.
Decisions to change plans from a shared university/VA medical center to a stand-alone VA medical center affected plans in Denver and New Orleans. For Denver and New Orleans, VA revised its original plans for shared facilities with local universities to stand-alone facilities after proposals for a shared facility could not be finalized. For example, in Denver, plans went through numerous changes after the prospectus was first submitted to Congress in 2004. In 1999, VA officials and the University of Colorado Hospital began discussing the possibility of a shared facility on the former Fitzsimons Army base in Aurora, Colorado. Negotiations continued until late 2004, at which time VA decided against a shared facility with the University of Colorado Hospital because of concerns over the governance of a shared facility. In 2005, VA selected an architectural and engineering firm for a stand-alone project, but VA officials told us that the firm’s efforts were suspended in 2006 until VA acquired another site at the former Army base adjacent to the new university medical center. Design restarted in 2007 but was suspended again in January 2009, when VA reduced the project’s scope because of lack of funding. By this time, the project’s costs had increased by approximately $470 million, and the project’s completion was delayed by 14 months. The cost increases and delays occurred because the costs to construct operating rooms and other specialized sections of the facility were now borne solely by VA, and the change to a stand-alone facility also required extensive redesign. Changes to the site location by VA delayed efforts in Orlando. In Orlando, VA’s site location changed three times from 2004 to 2010. It first changed because VA, in renovating the existing VA hospital in Orlando, realized the facility site was too small to include needed services.
However, before VA could finalize the purchase of a new larger site, the land owner sold half of the land to another buyer, and the remaining site was again too small. Unanticipated events in Las Vegas, New Orleans, and Denver also led to delays. For example, VA officials at the Denver project site discovered they needed to eradicate asbestos and replace faulty electrical systems from pre-existing buildings. They also discovered and removed a buried swimming pool and found a mineral-laden underground spring that forced them to continually treat and pump the water from the site, which impacted plans to build an underground parking structure. In our April 2013 report, we found that VA had taken steps to improve its management of major medical-facility construction projects, including creating a construction-management review council. In April 2012, the Secretary of Veterans Affairs established the Construction Review Council to serve as the single point of oversight and performance accountability for the planning, budgeting, executing, and delivering of VA’s real property capital-asset program. The council issued an internal report in November 2012 that contained findings and recommendations that resulted from meetings it held from April to July 2012. The report stated that the challenges identified on a project-by-project basis were not isolated incidents but were indicative of systemic problems facing VA. In our 2013 report we also found that VA had taken steps to implement a new project delivery method—called the Integrated Design and Construction (IDC) method. In response to the construction industry’s concerns that VA and other federal agencies did not involve the construction contractor early in the design process, VA and the Army Corps of Engineers began working to establish a project delivery model that would allow for earlier contractor involvement in a construction project, as is often done in the private sector. 
We found in 2013 that VA did not implement IDC early enough in Denver to garner the full benefits. VA officials explained that Denver was initiated as a design-bid-build project and later switched to IDC after the project had already begun. According to VA officials, the IDC method was very popular with industry, and VA wanted to see if this approach would effectively deliver a timely medical facility project. Thus, while the intent of the IDC method is to involve both the project contractor and the architectural and engineering firm early in the process to ensure a well-coordinated effort in designing and planning a project, VA did not hire the contractor for Denver until after the initial designs were completed. According to VA, because the contractor was not involved in the design of the projects and formulated its bids based on a design that had not been finalized, these projects required changes that increased costs and led to schedule delays. VA staff responsible for managing the project said it would have been better to maintain the design-bid-build model throughout the entire process rather than changing mid-project, because VA did not receive the value of having contractor input at the design phase, as the IDC method is supposed to provide. For example, according to Denver VA officials, the architectural design called for curved walls rather than less expensive straight walls along the hospital’s main corridor. The officials said that had the contractor been involved in the design process, the contractor could have helped VA weigh the aesthetic advantages of curved walls against the lower cost of straight walls.
In our April 2013 report, we identified systemic reasons that contributed to overall schedule delays and cost increases, and we recommended that VA take actions to improve its construction management of major medical facilities, including (1) developing guidance on the use of medical equipment planners; (2) sharing information on the roles and responsibilities of VA construction project management staff; and (3) streamlining the change order process. Our recommendations were aimed at addressing issues we identified at one or more of the four sites we visited during our review. VA has implemented our recommendations; however, the impact of these actions may take time to show improvements, especially for ongoing construction projects, depending on several issues, including the relationship between VA and the contractor. Since completing our April 2013 report, we have not reviewed the extent to which these actions have affected the four projects, or the extent to which they may have helped to avoid the cost overruns and delays that occurred on those projects. On August 30, 2013, VA issued a policy memorandum providing guidance on the assignment of medical equipment planners to major medical construction projects. The memorandum states that all VA major construction projects involving the procurement of medical equipment to be installed in the construction will retain the services of a Medical Equipment Specialist, to be procured through the project’s architectural and engineering firm. Prior to issuance of this memorandum, VA officials had emphasized that they needed the flexibility to change their health care processes in response to new technologies, equipment, and advances in medicine. Given the complexity and sometimes rapidly evolving nature of medical technology, many health care organizations employ medical equipment planners to help match the medical equipment needed in the facility to the construction of the facility.
Federal and private sector stakeholders reported that medical equipment planners have helped avoid schedule delays. VA officials told us that they sometimes hire a medical equipment planner as part of the architectural and engineering firm services to address medical equipment planning. However, in our April 2013 report we found that for costly and complex facilities, VA did not have guidance for how to involve medical equipment planners during each construction stage of a major hospital and had sometimes relied on local Veterans Health Administration (VHA) staff with limited experience in procuring medical equipment to make medical equipment planning decisions. Thus, we recommended that the Secretary of VA develop and implement agency guidance to assign medical equipment planners to major medical construction projects. As mentioned earlier, in August 2013, VA issued such guidance. In September 2013, in response to our recommendation, VA put procedures in place to communicate to contractors the roles and responsibilities of VA officials who manage major medical facility construction projects, including the change order process. Among these procedures is a Project Management Plan that requires the creation of a communications plan and matrix to assure clear and consistent communications with all parties. Construction of large medical facilities involves numerous staff from multiple VA organizations. Officials from the Office of Construction and Facilities Management (CFM) stated that during the construction process, effective communication is essential and must be continuous and involve an open exchange of information among VA staff and other key stakeholders. However, in our April 2013 report, we found that the roles and responsibilities of CFM and VHA staff were not always well communicated and that it was not always clear to general contracting firms which VA officials hold the authority for making construction decisions.
This can cause confusion for contractors and architectural and engineering firms, ultimately affecting the relationship between VA and the general contractor. Participants from VA’s 2011 industry forum also reported that VA roles and responsibilities for contracting officials were not always clear and made several recommendations to VA to address this issue. Therefore, in our 2013 report, we recommended that VA develop and disseminate procedures for communicating—to contractors—clearly defined roles and responsibilities of the VA officials who manage major medical-facility projects, particularly those in the change-order process. As discussed earlier in this statement, VA disseminated such procedures in September 2013. On August 29, 2013, VA issued a handbook for construction contract modification (change-order) processing which includes milestones for completing processing of modifications based on their dollar value. In addition, as of September 2013, VA had also hired four additional attorneys and assigned on-site contracting officers to the New Orleans, Denver, Orlando, Manhattan and Palo Alto major construction projects to expedite the processing and review of construction contract modifications. By taking steps to streamline the change order process, VA can better ensure that change orders are approved in a prompt manner to avoid project delays. Most construction projects require, to varying degrees, changes to the facility design as the project progresses, and organizations typically have a process to initiate and implement these changes through change orders. Federal regulations and agency guidance state that change orders must be made promptly, and agency guidance states in addition that there be sufficient time allotted for the government and contractor to agree on an equitable contract adjustment. 
VA officials at the sites we visited as part of our April 2013 review, including Denver, stated that change orders that take more than a month from when they are initiated to when they are approved can result in schedule delays, and officials at two federal agencies that also construct large medical projects told us that it should not take more than a few weeks to a month to issue most change orders. Processing delays may be caused by the difficulty involved in VA and contractors’ coming to agreement on the costs of changes and the multiple levels of review required for many of VA’s change orders. As discussed earlier, VA has taken steps to streamline the change order process to ensure that change orders are approved in a prompt manner to avoid project delays. Chairman Miller, Ranking Member Brown, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you have any questions about this testimony, please contact David Wise at (202) 512-2834 or [email protected]. Other key contributors to this testimony include Ed Laughlin (Assistant Director), Nelsie Alcoser, George Depaoli, Raymond Griffith, Hannah Laufe, Amy Rosewarne, Nancy Santucci, and Crystal Wesco. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The VA operates one of the nation’s largest health care delivery systems. In April 2013, GAO reported that VA was managing the construction of 50 major medical-facility projects at a cost of more than $12 billion. This statement discusses VA construction management issues, specifically, (1) the extent to which the cost, schedule, and scope for four selected major medical-facility projects have changed and the reasons for these changes, (2) actions GAO reported that VA had taken since 2012 to improve its construction management practices, and (3) VA’s response to GAO’s recommendations for further improvements in its management of these construction projects. This statement is based on GAO’s April 2013 report (GAO-13-302) and May 2013 (GAO-13-556T) and April 2014 (GAO-14-548T) testimonies. GAO included selected updates on VA projects, located in Denver, Colorado; Las Vegas, Nevada; New Orleans, Louisiana; and Orlando, Florida, and documentation obtained from VA in April 2014 and January 2015. In April 2013, GAO found that costs substantially increased and schedules were delayed for the Department of Veterans Affairs’ (VA) largest medical-facility construction projects, located in Denver, Colorado; Las Vegas, Nevada; New Orleans, Louisiana; and Orlando, Florida. As of January 2015, in comparison with initial estimates, the cost increases for these projects ranged from 66 percent to 144 percent, and delays ranged from 14 to 86 months. Since the 2013 report, some of the projects have experienced further cost increases and delays. For example, the cost for the New Orleans project increased by nearly $40 million, and the delay at the Orlando project has extended from 39 months to 57 months. Several factors, including changes to veterans’ health care needs, site-acquisition issues, and a decision in Denver to change plans from a medical center shared with a local medical university to a stand-alone VA medical center, contributed to increased costs and schedule delays.
In its April 2013 report, GAO found that VA had taken some actions since 2012 to address problems managing major construction projects. Specifically, VA established a Construction Review Council in April 2012 to oversee the department’s development and execution of its real property programs. VA also took steps to implement a new project delivery method, called Integrated Design and Construction, which involves the construction contractor early in the design process to identify any potential problems early and speed the construction process. However, in Denver, VA did not implement this method early enough to garner the full benefits of having a contractor early in the design phase. VA stated it has taken actions to implement the recommendations in GAO’s April 2013 report. In that report, GAO identified systemic reasons that contributed to overall schedule delays and cost increases at one or more of the four reviewed projects and recommended ways VA could improve its management of the construction of major medical facilities. In response, VA has issued guidance on assigning to major medical-facility projects medical equipment planners, who are responsible for matching the equipment needed to the facility in order to avoid late design changes that lead to cost increases and delays; developed and disseminated procedures for communicating to contractors clearly defined roles and responsibilities of the VA officials who manage major medical-facility projects, to avoid confusion that can affect the relationship between VA and the contractor; and issued a handbook for construction contract modification (change-order) processing, which includes milestones for completing processing of modifications based on their dollar value, and took other actions to streamline the change order process to avoid project delays.
VA has implemented GAO’s recommendations; however, the impact of these actions may take time to show improvements, especially for ongoing construction projects, depending on several issues, including the relationship between VA and the contractor. In its April 2013 report, GAO recommended that VA (1) develop and implement agency guidance for assignment of medical equipment planners; (2) develop and disseminate procedures for communicating to contractors clearly defined roles and responsibilities of VA officials; and (3) issue and take steps to implement guidance on streamlining the change-order process. VA implemented GAO’s recommendations.
In May 2001, the FBI initiated a major IT upgrade project known as Trilogy. Trilogy consisted of three parts: (1) the Information Presentation Component (IPC) to upgrade the FBI’s computer hardware and software, (2) the Transportation Network Component (TNC) to upgrade the FBI’s communication network, and (3) the User Application Component (UAC) to upgrade and consolidate the FBI’s five most important investigative applications. The IPC component provided for new desktop computers, servers, and commercial-off-the-shelf automation software, including Web-browser and e-mail software to enhance usability by the agents. The TNC component called for upgrading the complete communication infrastructure. These upgrades were expected to provide the physical infrastructure that would run the applications to be developed under the UAC component of the Trilogy project. The UAC was intended to replace the FBI’s paper case files with electronic files, improve efficiency, and replace the obsolete Automated Case Support system, the FBI’s primary investigative application for uploading and storing case files electronically. Our 2006 audit of the project’s costs identified significant internal control deficiencies over administration of contracts and interagency agreements, the processing (review, approval, and payment) of invoices, and the accountability over assets purchased under the project. More specifically, we reported that the FBI’s review and approval process for contractor invoices did not provide an adequate basis for verifying that goods and services billed were actually received by the FBI or that payments were for allowable costs. This occurred in part because responsibility for the review and approval of invoices was not clearly defined in the interagency agreements related to the Trilogy project and because contractors’ invoices frequently lacked the detailed supporting documentation necessary for an adequate review of invoice charges.
During our audit, we identified more than $10 million in questionable contractor costs paid by the FBI for the Trilogy project. With respect to property, we reported that the FBI: (1) did not adequately maintain accountability for purchased computer equipment; (2) relied extensively on contractors to account for Trilogy assets while they were being purchased, warehoused, and installed; (3) did not establish controls to verify the accuracy and completeness of contractors’ records on which the FBI was relying; (4) did not ensure that only the items approved for purchase were acquired by the contractors, and that the bureau received all those items; and (5) did not establish adequate physical control over the assets. As a result of these deficiencies, we identified more than 1,200 pieces of missing equipment that we estimated to be worth more than $7.5 million. We made 22 recommendations to the FBI in our 2006 report on Trilogy. Of the 22 recommendations, 17 were focused on developing agencywide policies and procedures to address internal control weaknesses in the FBI’s procurement and contract administration processes. The remaining five recommendations were specific to the Trilogy project and were related to contractor overpayments and accountable property. The FBI discontinued the virtual case file component of its Trilogy project in March 2005, after it was determined to be infeasible and cost prohibitive to implement as originally envisioned. FBI’s Sentinel project was approved in July 2005 and was to succeed and expand on elements of the Trilogy project, namely to provide the FBI with a modern, automated investigative case-management system. 
The Sentinel project management office (PMO) had designed and implemented policies and procedures that assigned specific invoice-review responsibilities and required Sentinel contractors to provide detailed support for all invoiced amounts and to obtain advance approval from the Sentinel PMO for travel, overtime, and other direct costs. With respect to Sentinel equipment, we reported that the Sentinel PMO had established policies and procedures specific to the Sentinel project to ensure Sentinel’s equipment purchases were properly authorized and that received property was timely inspected and entered into the FBI’s Property Management Application (PMA). However, we did identify some additional opportunities for the Sentinel PMO to improve controls over purchased equipment for the Sentinel project. We made five recommendations to the FBI related to Sentinel. The corrective actions developed by the FBI were sufficient to address 21 of the 22 Trilogy recommendations and all 5 of the Sentinel recommendations we made in our prior reports. The FBI substantially addressed 17 Trilogy recommendations related to contract administration, invoice processing, and property accountability by establishing or revising policies and procedures, 4 by contracting for follow-up audits of the Trilogy costs, and the 5 Sentinel recommendations by revising Sentinel policies and procedures. Of the 27 prior recommendations, 17 focused on establishing, revising, or reinforcing policies and procedures with FBI- wide applicability. We found that the FBI had sufficiently developed, revised, or updated these policies and procedures as we recommended. For example, in response to our recommendation that the FBI revise its policies and procedures to require that accountable assets be entered into PMA immediately upon receipt rather than within the prior 30-day time frame, the FBI issued a new policy that required that accountable property be recorded in PMA within 48 hours of being received. 
Appendix II provides information on each of the 27 Trilogy and Sentinel recommendations and the specific corrective actions developed by the FBI. We also made four recommendations in our Trilogy report related to the recovery of overpayments and reimbursement of questionable costs from Trilogy contractors. In response to these recommendations, the Defense Contract Audit Agency (DCAA), an independent third party, was engaged to perform post audit reviews of contractor billings for the Trilogy project. DCAA conducted separate audits of the billings submitted by the two prime contractors, Computer Sciences Corporation (CSC) and Science Applications International Corporation (SAIC), as well as the billings submitted by the numerous subcontractors, and identified over $18 million in questioned costs. DCAA defines questioned costs as those costs that are not acceptable for negotiating a fair and reasonable contract price. DCAA’s audits included reviewing the areas with potential overpayments we had identified as well as assessing if other identified questionable costs should be reimbursed. The most significant questioned costs were costs incurred outside the effective dates of temporary labor agreements, missing supporting documentation, application of incorrect billing rates, unapproved timesheets, unapproved overtime, and subcontractor overbillings. The one recommendation that the FBI had not fully addressed from our Trilogy report recommended that the FBI investigate the 1,205 assets that we identified as missing, lost, or stolen and determine whether any confidential or sensitive information may be exposed to unauthorized users, and identify any patterns related to the equipment that could necessitate a change in FBI policies and procedures. 
These assets consisted of a variety of information technology items, including desktop computers, servers, and laptops that could potentially contain confidential or sensitive information that could be exposed to unauthorized users. In February 2011, FBI officials provided documentation accounting for the status of all but 134 assets, including desktop computers, laptops, and servers that could contain sensitive information. With regard to the 134 assets, the FBI stated that all of these assets had a useful life of 7 years or less, that if they were not already returned or destroyed they are now obsolete, and that spending more time or resources to search for the obsolete equipment would be wasteful. Instead, the FBI is focused on implementing a new property management system and incorporating property management lessons learned from the Trilogy project. However, FBI officials also stated they would make the necessary entries to properly record any of the remaining 134 assets for which they subsequently determine the status. Although the FBI developed or revised policies and procedures in response to 17 of our prior recommendations, our testing to assess their implementation FBI-wide identified possible issues in certain areas. In our testing of the four recommendations dealing with interagency agreements and contracts, we found that they were effectively implemented, but we identified a new issue unrelated to our prior recommendations. In our implementation testing for the remaining 13 corrective actions, we identified indications of implementation issues for 3 of them. As shown in table 1, our tests related to policies and procedures over interagency agreements and contracts indicated that the FBI had effectively implemented these corrective actions. In the course of testing the interagency agreement sample transactions, we identified a new issue unrelated to our prior recommendations.
Specifically, the Federal Acquisition Regulation (FAR) requires that any interagency agreement entered into under the authority of the Economy Act, 31 U.S.C. § 1535, be supported by a Determination and Findings document. The Determination and Findings form identifies the responsible agencies to the agreement (requesting agency and servicing agency), is prepared by the requesting agency, and identifies the goods or services that are to be provided by the servicing agency. In addition, it documents the requesting agency’s determination that, among other things, the use of an interagency acquisition is in the best interest of the government, and the supplies or services cannot be obtained as conveniently or economically by contracting directly with a private source. The FAR also requires that the requesting agency complete the Determination and Findings form before placing an order for supplies or services with another government agency. In reviewing our statistical sample of 55 interagency agreements with regard to implementation of our prior recommendations, we identified 54 interagency agreements that were required to comply with FAR requirements related to Determination and Findings and found that 15 of them did not comply with these FAR requirements. For these 15 cases, the required Determination and Findings forms supporting the execution of interagency agreements between the FBI and other federal entities were prepared and signed after the interagency agreements were executed—in some cases more than a year later.

- Three Determination and Findings forms were signed less than 3 months after the dates the related purchase orders were issued.
- One Determination and Findings form was signed between 3 months and 6 months after the date the related purchase order was issued.
- Seven Determination and Findings forms were signed between 6 months and 1 year later.
- Four Determination and Findings forms were signed more than a year after the dates of the related purchase orders.
Based on the results of our review, we are 95 percent confident that the total percentage of interagency agreements executed by the FBI in fiscal year 2009 that lacked a required Determination and Findings form prior to the FBI placing the order could be as much as 39.5 percent. FBI officials acknowledged that the Determination and Findings forms were not completed prior to placing orders for goods and services and provided two explanations. First, the interagency agreements and related documentation for some of the agreements were executed by a new employee who was instructed to prepare and include the Determination and Findings forms after the files had been reviewed by the Unit Chief. Second, for the others, a contracting officer did not prepare and submit the interagency agreement documentation to the Unit Chief in a timely manner. The FBI's monitoring of the interagency agreement process did not identify that the Determination and Findings forms were not properly prepared as required. Internal controls should be designed to assure that ongoing monitoring occurs in the course of normal operations. By not completing a required Determination and Findings form prior to issuing a purchase order, which obligates the agency for the purchase order amount, the requesting agency risks obligating funds for supplies, services, or both that are not in the best interest of the government, and executing a contract that is not in compliance with federal laws or regulations. Of the remaining 13 corrective actions that involved the implementation of FBI-wide policies and procedures, our testing found indications that 3 of them may not have been fully or consistently implemented.
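The kind of one-sided upper confidence bound cited above can be approximated with standard attribute-sampling arithmetic. The sketch below is an illustration only: it uses a normal approximation with a finite-population correction, which is an assumption on our part. GAO's actual estimation method may differ, so the result will not exactly match the 39.5 percent reported.

```python
import math

def upper_bound_95(errors, sample_size, population_size):
    """One-sided 95% upper confidence bound for an error rate, using a
    normal approximation with a finite-population correction. This is an
    illustrative approximation, not GAO's exact sampling methodology."""
    p_hat = errors / sample_size
    fpc = math.sqrt((population_size - sample_size) / (population_size - 1))
    std_err = math.sqrt(p_hat * (1 - p_hat) / sample_size) * fpc
    z_95 = 1.645  # one-sided 95% critical value
    return p_hat + z_95 * std_err

# 15 of 54 sampled agreements lacked a timely Determination and Findings
# form, drawn from 494 agreements executed in fiscal year 2009.
bound = upper_bound_95(15, 54, 494)
print(f"Sample error rate: {15/54:.1%}; 95% upper bound: {bound:.1%}")
```

Under these simplifying assumptions the bound comes out near 37 percent; GAO's reported 39.5 percent reflects its own attribute-sampling methodology.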
As shown in table 2, our tests of non-statistically selected transactions identified implementation issues primarily in policies and procedures related to review of contractor invoices and accountability for purchased assets. Specifically, our detailed testing found instances in which the FBI had not fully implemented the policies and procedures established in response to our prior recommendations in this area. Internal control standards require agencies to establish controls that reasonably ensure, among other things, that funds, property, and other assets are safeguarded against waste, loss, or unauthorized use. The FBI requires contractors bidding on contracts to submit proposals that include direct labor categories and rates, subcontractor labor categories and rates, and other direct costs used to calculate the total cost of their proposal. Contractor invoices must include key information, such as employee name, labor classification, rate of pay, and hours worked for billed labor charges, as well as support for other charges. FBI guidance states that staff performing invoice reviews should compare the key data to the contractor proposal to verify the accuracy of amounts charged. During our review of a non-statistical sample of 37 contractor invoices, we found unsupported charges of $292,684 on five invoices submitted by three contractors for three separate contracts that totaled $6,293,046 for prime contractor and subcontractor direct labor, materials, and other direct costs. Specifically:
- We reviewed an invoice, dated October 5, 2009, submitted by one contractor that included direct labor charges of $16,963 for one labor group that was not included in the contractor's cost proposal.
The FBI acknowledged that the labor group was not listed in the original proposal by the contractor but stated that, during the course of the contractual effort, the contractor determined there was a need for labor to be performed on the contract that required the skill set of a labor group that had not been included in the contractor's cost proposal. In addition, the FBI stated that the rate charged resulted in a savings to the FBI under this contract without affecting the contract schedule or deliverables. However, the FBI did not provide us with documentation supporting the FBI's approval of the new labor rate for the contract prior to the period billed on the invoice. In addition, the invoice included $50,000 for the work of a subcontractor. In our review of the contractor's proposal related to subcontractor labor, we noted that it included, for this specific subcontractor, a proposed labor rate of $184.84 for 610 hours, for a total of $112,752. However, the invoice documentation did not include any information, such as the name of the subcontractor employee(s), the labor category, the hours worked, or the rate of pay under other direct costs, that would allow the FBI to verify the accuracy and validity of the charges.
- In our review of two invoices submitted by another contractor, we found that the contractor had billed the FBI $97,851 for direct labor and subcontractor labor at nine rates (six for the contractor and three for a subcontractor) that were not included in the contractor's proposal.
- Similarly, in our review of two invoices submitted by a contractor for a third contract, we found that the invoices included labor charges of $127,870 at hourly labor rates (four for the contractor and two for a subcontractor) that were not supported by the contractor's proposal.
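The verification step that FBI guidance calls for, comparing labor categories and rates billed on an invoice against the contractor's cost proposal, amounts to a simple lookup. The sketch below is a hypothetical illustration; the data structures, labor categories, and dollar amounts are our own and are not drawn from the FBI's actual systems or the contracts reviewed.

```python
# Hypothetical proposal rate table: labor category -> proposed hourly rate.
proposal_rates = {
    "Senior Engineer": 150.00,
    "Systems Analyst": 110.00,
    "Subcontractor Technician": 184.84,
}

# Hypothetical invoice lines: (labor category, hourly rate, hours billed).
invoice_lines = [
    ("Senior Engineer", 150.00, 80),
    ("Database Administrator", 135.00, 40),    # category not in proposal
    ("Subcontractor Technician", 195.00, 20),  # rate above proposed rate
]

def review_invoice(lines, rates):
    """Flag billed labor lines that lack support in the cost proposal."""
    exceptions = []
    for category, rate, hours in lines:
        if category not in rates:
            exceptions.append((category, "labor category not in proposal", rate * hours))
        elif rate > rates[category]:
            exceptions.append((category, "billed rate exceeds proposed rate", rate * hours))
    return exceptions

for category, reason, amount in review_invoice(invoice_lines, proposal_rates):
    print(f"{category}: {reason} (${amount:,.2f} unsupported)")
```

A check of this kind flags both failure modes described in this report: labor groups absent from the proposal and rates that exceed the proposed rates.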
We also discussed our findings related to the second and third contractors' invoices with FBI officials, and they explained that in reviewing the invoices they focus on the status of the project and its various components or tasks. They also stated that both contractors submitted monthly reports to the FBI that included the actual costs of the project for each current month as well as the costs of the project to date and compared those costs to the project's budget. However, the FBI also stated that it did not require the contractors to provide analyses for cost variances except when variances exceeded thresholds set for the two contracts. Without verifying labor groups and labor rates billed on contractor invoices against the contractor's proposal as required by FBI policy, the FBI is at increased risk that it will not identify erroneous or improper billings and will disburse government funds for unallowable contractor charges. As shown in table 2, we also found instances in which the FBI did not record accountable property items in its system in a timely manner and did not accurately record key accountability information, such as location and serial numbers, as required by the FBI's policies and procedures. The FBI's revised policy, issued in response to our prior recommendation, requires that accountable property be recorded in the Property Management Application (PMA) within 48 hours of receipt instead of within 30 days of receipt, which was the FBI's policy at the time of our 2006 report. Internal control standards require agencies to establish controls that reasonably ensure, among other things, that funds, property, and other assets are safeguarded against waste, loss, or unauthorized use. In our review, we found that 406 of the 674 pieces of accountable property we tested had not been recorded in PMA within 48 hours of being received, as now required, and that some had not been recorded until more than a month after being received.
However, we also noted that the FBI, while not adhering to its more stringent current policy, had recorded 90.7 percent of the accountable property we tested within 30 days of receipt. This represents an improvement from the situation that existed at the time of our Trilogy work: during its agencywide upgrade of hardware and software under the Trilogy project, the FBI recorded only 28.4 percent of the accountable property we reviewed within 30 days of receipt, as reported in our 2006 report. FBI management acknowledged that property was not being recorded in compliance with its policy. FBI management officials explained that this condition was due to property being ordered and received by numerous FBI divisions and field offices, some of which did not have dedicated staff for recording purchased assets in PMA immediately upon receipt of the property, which delays the recording of the assets in PMA. In addition, they explained that some accountable property ordered by the various FBI offices is delivered to FBI storage facilities and held for security reasons before being delivered to the end user, and that this property is not recorded in PMA until received by the end user. Recording property in PMA is critical to establishing accountability: the longer it takes to record property in PMA, the greater the risk that property may be stolen or lost without detection by the FBI. In 2006, the FBI issued a policy to all FBI divisions that made it mandatory to record the location field when accountable property is added to PMA or when corrections to records are made.
In addition, the policy stated that the information recorded in the location data field is the location of the property within the division or "legat." In reviewing the data entry screens for recording assets in the FBI's property management application, we noted that there are fields the property custodian can use to provide a location within the division or legat. In our review of the PMA screens for the selected property items, we found that the information recorded in PMA for 80 of the 674 records did not provide sufficient information on the location of the property within the division as required. In addition, we found that the serial number field was either blank, incomplete, or had the entry "719TOBEADDED" in the PMA records for 14 of the 674 tested assets, and the five records with "719TOBEADDED" recorded had not been updated for more than a year. We also found 45 PMA records in which the model description was entered as "TO BE ADDED." The model description had been missing for all 45 assets for more than a year, with 6 of these assets lacking this information for almost 2 years since they were first entered in PMA. We brought these findings to the attention of FBI officials. With regard to the location information, the FBI stated that while the location field is mandatory, there is no requirement on the amount of detail to be listed. However, as mentioned previously, the 2006 policy issued by the Asset Management Unit clearly states that the information recorded in the location data field is the location of the property within the division or legat. The lack of key information in PMA, such as model, manufacturer, description, serial number, and specific location, as required by FBI policy limits the FBI's ability to investigate assets reported as missing during physical inventories.
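Checks like those described above, timely recording within 48 hours of receipt and complete serial-number, model, and location fields, lend themselves to automated screening of property records. The sketch below is a simplified, hypothetical illustration of such a screen; the field layout and example values mirror the placeholders discussed in this report but are not the actual PMA schema.

```python
from datetime import date

# Hypothetical PMA records: (received date, recorded date, serial, model, location).
records = [
    (date(2009, 3, 2), date(2009, 3, 3), "SN-44121", "Latitude E6400", "Room 210, Lab B"),
    (date(2009, 3, 2), date(2009, 4, 10), "719TOBEADDED", "TO BE ADDED", ""),  # late, placeholders
]

def screen_record(received, recorded, serial, model, location):
    """Return the accountability exceptions for one property record."""
    issues = []
    if (recorded - received).days > 2:  # 48-hour recording requirement
        issues.append("not recorded within 48 hours of receipt")
    if not serial or "TOBEADDED" in serial.replace(" ", "").upper():
        issues.append("serial number missing or placeholder")
    if not model or model.strip().upper() == "TO BE ADDED":
        issues.append("model description missing or placeholder")
    if not location:
        issues.append("no location within the division recorded")
    return issues

for rec in records:
    print(rec[2], screen_record(*rec))
```

Running such a screen periodically would surface the kinds of stale placeholder entries and late recordings this report identified, rather than waiting for a physical inventory to reveal them.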
In addition, inadequate location information leaves the FBI without a systematic means of identifying where an asset is located or when it is moved, transferred, or disposed of. The FBI has taken action to address 26 of the 27 recommendations we made in our prior Trilogy and Sentinel reports. Many of these actions involved developing policies and procedures. Developing and communicating policies and procedures, while critical, is only the first step that the FBI must take to address the identified internal control weaknesses. Management must also ensure that the policies and procedures are effectively implemented throughout the agency. Although we found that the FBI had effectively implemented policies and procedures related to interagency agreements and contracts, our tests of the statistically selected transactions showed that additional action is needed to ensure that Determination and Findings forms are properly completed before the FBI enters into interagency agreements. With as much as an estimated 39.5 percent of its interagency agreements lacking a properly completed Determination and Findings form, the FBI increases the risk that it is obligating funds for supplies or services that are not in the best interest of the government or executing a contract that is inconsistent with federal laws or regulations. Further, we identified several other areas, primarily related to review of contractor invoices and accountability for purchased assets, where the implementation of policies and procedures may need to be strengthened. Our testing of selected invoice transactions identified unsupported labor categories and rates billed by contractors. This situation points to a lack of thorough review of contractor invoices, a weakness that puts the FBI at risk of making payments to contractors for questionable or improper charges. Additionally, our testing of selected accountable property items identified property items that were not recorded in a timely or accurate manner.
This problem decreases the FBI's ability to adequately safeguard its accountable property. Identifying and correcting any systemic weaknesses in these areas will be critical to achieving sustainable improvements in the FBI's agencywide controls over its procurement activities. We recommend that the Director of the FBI direct the Chief Financial Officer to take the following three actions.
- Enhance the monitoring of the interagency agreements process to ensure that Determination and Findings forms are prepared, when applicable, in accordance with federal and agency requirements.
- In the area of contractor invoice review and approval, review agencywide implementation of the new or revised policies and procedures related to our prior recommendations to verify that invoice costs are in accordance with contract terms, in order to determine whether the indications of issues we identified in this report represent systemic, agencywide implementation deficiencies, and take appropriate, cost-effective actions to better ensure agencywide compliance with the applicable policies and procedures.
- In the area of property accountability, review agencywide implementation of the new or revised policies and procedures related to our prior recommendations to record specific data for acquired assets within specified time frames, in order to determine whether the indications of issues we identified in this report represent systemic, agencywide implementation deficiencies, and take appropriate, cost-effective actions to better ensure agencywide compliance with the applicable policies and procedures.
In its written comments, FBI concurred with our recommendations and stated that it has already initiated changes to its processes and procedures to address our recommendations.
FBI stated that it provided interagency agreement training to its contract specialists and is now testing an application to monitor, collect, and document information for all FBI interagency agreements. The FBI further stated that it is taking steps to ensure that invoices are properly reviewed, including strengthening its procurement training curriculum and modifying the current contract specialist file review checklist to include comparing invoiced labor categories and costs to labor categories and costs in supporting contracts. Additionally, the FBI stated that it has developed an accountable property officer training course intended to help ensure that its divisions have an effective and efficient property management program, and that actions are under way to configure a new property management application to include additional controls to better track physical location of purchased assets. If properly implemented, the activities outlined in FBI’s letter should help further improve FBI’s accountability for future interagency acquisitions and accountable property. FBI’s comments are reprinted in their entirety in appendix III. FBI also provided technical comments, which we have incorporated as appropriate. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the report date. At that time, we will send copies to interested congressional committees. We will also send copies to the Attorney General, the Director of the Federal Bureau of Investigation, and other interested parties. The report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9471 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix IV. To address our first objective, to determine whether the FBI's new or revised policies and procedures and other specific corrective actions were sufficient to address the 27 recommendations we made in our Trilogy and Sentinel reports, we performed an assessment of the FBI's corrective action plans and reviewed additional supporting documentation received from the FBI. Specifically, in its required 60-day letter to Congress, the FBI explained the corrective actions it had taken or planned to take to address the issues we identified in our report. In addition, in mid-2006, the FBI submitted additional documentation to GAO, which included revised or updated corrective action plans for each recommendation. Also, in the third quarter of 2009, the FBI submitted additional documentation to GAO for selected recommendations to support additional corrective action steps taken since 2006. We also identified key operations and management officials at the FBI responsible for the development of the corrective actions and conducted interviews and walkthroughs to ensure that we fully understood the corrective actions. We reviewed additional information and documentation identified during our interviews, as well as new and revised policies and procedures and training materials received from the FBI, and used this information to determine whether the corrective actions were adequately designed to address our recommendations. To address our second objective, to determine whether there were any indications of implementation issues related to the policies and procedures that the FBI developed to address 17 of the 27 recommendations, we selected statistical samples of interagency agreements and contracts. We then non-statistically selected purchase orders, invoices, and accountable property from the contracts selected in the statistical samples and performed a variety of detailed tests.
In our review of the FBI’s corrective action plans, we determined that the agency had continued to take corrective actions to address our recommendations through fiscal year 2008. Therefore, in order to obtain a more representative population of transactions that had occurred after the last corrective actions had been put in place, we decided to select statistical samples from a population of transactions that occurred in fiscal year 2009. Because we selected statistical samples for testing implementation of certain new or revised policies and procedures, we assessed the reliability of the FBI’s contracting, interagency-agreement, and property- data files by first identifying and documenting the controls in place at the FBI for ensuring accurate and complete data is recorded into information systems during the FBI’s contracting, interagency-agreement, and property-acquisition processes and then assessing whether these controls appeared adequate. We inquired about the processes by which interagency agreements, contracts, and purchase orders are completed and recorded and developed an understanding of the controls designed to ensure data entered into FMS for interagency agreements and contracts is accurate and complete. In addition, we reviewed DOJ’s annual financial statement internal control reports for fiscal years 2007, 2008, and 2009 to identify any material weaknesses or reportable conditions related to the information systems identified in the step above. We also analyzed data listings to identify any anomalies in the data fields such as blank cells or inconsistent naming conventions for contracts and interagency agreements and obtained explanations for any anomalies noted. Based on these steps, we determined the FBI’s contract, interagency-agreement, and property-data files were sufficiently reliable to address the objectives of this report. 
We selected a statistical sample of 55 interagency agreements from a total population of 494 interagency agreements executed by the FBI during fiscal year 2009. In our testing of interagency agreements, we verified that all agreements clearly defined the roles and responsibilities relative to contract administration, including invoice submission, for both parties. During our testing, we also considered new guidance on interagency agreements issued by the Office of Management and Budget's (OMB) Office of Federal Procurement Policy that the FBI disseminated to all of its procurement chiefs. We selected a statistical sample of 32 contracts from a population of 51 contracts executed in fiscal year 2009. In addition, for each of the 32 contracts in the sample, we selected all related purchase orders for testing. The total number of purchase orders selected for testing was 34. Our contract and purchase-order testing consisted of determining whether the contracts and/or purchase orders (1) clearly specified key cost determination provisions; (2) clearly reflected the appropriate Federal Acquisition Regulation travel cost requirements; and (3) contained provisions regarding the contractor's review of subcontractor charges. In addition, for purchase orders only, we determined whether the purchase orders were sufficiently detailed to verify the receipt of property and other goods and services. We obtained a listing of all invoices that had been submitted to the FBI for the purchase orders we selected, noting that invoices had not yet been submitted to the FBI for 7 of the 34 purchase orders. A total of 110 invoices had been submitted for the other 27 purchase orders we selected: one invoice each for 16 of the purchase orders, two invoices each for 3 of the purchase orders, and three or more invoices for the remaining 8 purchase orders.
In selecting invoices for testing, we selected all invoices for those purchase orders that only had either one or two invoices. For each of the 8 purchase orders with three or more invoices, we selected the invoice with the highest dollar value for testing and one other invoice on a non-judgmental basis. The total number of invoices selected for testing was 37. Our invoice testing consisted of determining whether the contractor’s invoice and supporting documentation (1) provided evidence of the FBI review and approval of the charges by the parties designated in the contract; (2) included evidence that goods and services billed on the invoice were received; (3) provided sufficient information to support the charges; (4) included amounts that were appropriate and in accordance with contract terms; and (5) provided evidence that the FBI properly documented the resolution of invoice discrepancies. We also asked the FBI to provide a listing of all accountable property included in its Property Management Application (PMA) for each of the 34 purchase orders we reviewed. According to the FBI’s listing, 20 of the 34 purchase orders included accountable property that had been recorded in the FBI’s PMA. There were a total of 674 individual items of accountable property for the 20 purchase orders. We included all 674 items of accountable property for our property testing. 
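The invoice-selection rule described above (all invoices for purchase orders with one or two invoices; for purchase orders with three or more, the highest-dollar invoice plus one other) can be expressed as a short procedure. This is an illustrative reconstruction; the sample data are invented, and the "one other" choice is shown here as the second-highest invoice, standing in for GAO's non-judgmental selection.

```python
def select_invoices(invoices_by_po):
    """invoices_by_po: purchase order id -> list of invoice dollar amounts.
    Returns the invoices selected for testing under the rule described above."""
    selected = {}
    for po, invoices in invoices_by_po.items():
        if len(invoices) <= 2:
            selected[po] = list(invoices)  # take all invoices
        else:
            ranked = sorted(invoices, reverse=True)
            # Highest-dollar invoice plus one other (second-highest here,
            # a stand-in for GAO's non-judgmental pick of the second invoice).
            selected[po] = ranked[:2]
    return selected

# Hypothetical purchase orders and invoice amounts.
sample = {"PO-1": [1200.0], "PO-2": [500.0, 750.0], "PO-3": [300.0, 9000.0, 4500.0, 100.0]}
picks = select_invoices(sample)
print(picks)
```

Applied to the report's figures (16 purchase orders with one invoice, 3 with two, and 8 with three or more), a rule of this shape selects on the order of the 37 invoices GAO tested.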
Our property testing consisted of determining whether the FBI (1) entered in PMA the appropriate purchase order number, asset description, and physical location of the accountable property purchased; (2) entered all accountable property in PMA within the time frame specified in the FBI's policy; (3) assigned bar codes to the accountable property when received and annotated the assigned bar codes in the receiving reports and in PMA; (4) properly documented any accountable property rejected immediately upon delivery; and (5) properly updated the PMA records of all accountable property returned after being accepted. We requested comments on a draft of this report from the FBI. We received written comments from the FBI on August 11, 2011, and have summarized those comments in the Agency Comments section of this report. FBI's comments are reprinted in appendix III. We conducted this performance audit from February 2010 through September 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform our audits to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Interagency Agreements and Contract Administration 1.
To improve FBI’s controls over its review and approval process for cost-reimbursement type contract invoices, the Director of FBI should instruct the Chief Financial Officer to establish policies and procedures so that future interagency agreements establish clear and well-defined roles and responsibilities for all parties included in the contract administration process, including those involved in the invoice review process, such as contracting officers, technical point of contacts, contracting officer’s technical representatives, and contractor personnel with oversight and administrative roles. In July 2008, FBI’s Senior Procurement Executive issued Procurement Guidance Document 08-10 to all Bureau Procurement Chiefs that incorporated a memorandum from the Office of Federal Procurement Policy (OFPP), Office of Management and Budget (OMB), which discussed new guidance on interagency agreements. The new OFPP guidance, issued in June 2008, requires the requesting agency and the servicing agency to assign specific roles for each agency and is to be fully implemented for all interagency agreements executed after November 3, 2008. The guidance discusses, among other things, the need for defining roles such as the COTR and establishing specific responsibilities for those roles. It further elaborates on responsibilities for identifying an appropriate invoice review official prior to submittal of the first invoice and inspecting and rejecting contract work as necessary. 2. To improve FBI’s controls over its review and approval process for cost-reimbursement type contract invoices, the Director of FBI should instruct the Chief Financial Officer to establish policies and procedures so that labor rates, ceiling prices, treatment of overtime hours, and other key terms for cost determination are clearly specified and documented for all contracts, task orders, and related agreements. 
In December 2008, FBI’s Policy Training Unit created an intranet site, the Contract Specialist Corner, to provide contract specialists/contract officers with procurement information and guidance. The site includes links to procurement guidance and directives issued by DOJ, the FBI’s Policy Training Unit, and Federal Acquisition Regulation (FAR) circulars issued by the FAR council, as well links to standard procurement forms used in the procurement process. The site also provides access to contract execution checklists for different contract types, as well as standard FAR clauses applicable to various types of acquisitions including clauses related to labor rates, ceiling prices, treatment of overtime hours, and other key contract terms. In addition, in 2009, the Policy Training Unit created a separate intranet site for contracting officer technical representatives (COTR) as well as one for field offices to provide on-line access to procurement guidance and documentation. In addition, since January 2009 the Policy Training Unit has held monthly training sessions for contract specialists/contracting officers to ensure that directives issued by DOJ and FBI are being implemented properly. Development of policies and procedures and other actions taken process for cost-reimbursement type contract invoices, the Director of FBI should instruct the Chief Financial Officer to establish policies and procedures so that an appropriate process is in place to assess the adequacy of contractor’s review and documentation of submitted subcontractor charges before such charges are paid by FBI. In December 2008, FBI’s Policy Training Unit created an intranet site, the Contract Specialist Corner, to provide contract specialist/contract officers with procurement information and guidance, including references (and hyperlinks) to all procurement guidance documents that have been issued by DOJ, the Policy Training Unit, and FAR circulars issued by the FAR council. 
In addition, the site includes a section, FAR Matrix of Clauses, which provides information on applicable FAR clauses, including guidance useful to contract specialists in determining whether subcontractor clauses are necessary. In 2009, the Policy Training Unit also created two other separate intranet sites, one for contracting officer technical representatives (COTR) and one for field offices. The COTR site includes links to procurement guidance that discusses contract administration responsibilities related to the COTR, prime contractor, and subcontractor, and provides access to documentation related to the activities of the COTR. In addition, since January 2009, the Policy Training Unit has held monthly training sessions for contract specialists/contracting officers to ensure that directives issued by DOJ and FBI are being implemented properly. In December 2008, FBI's Policy Training Unit created an intranet site, the Contract Specialist Corner, to provide contract specialists/contracting officers with procurement information and guidance, including references to all procurement guidance documents issued by DOJ, the Policy Training Unit, and FAR circulars issued by the FAR council, including those related to travel cost requirements (i.e., using the lowest standard coach or equivalent airfare). Specifically, the site includes a section, FAR Matrix of Clauses, which provides information on applicable FAR clauses, including those related to travel cost requirements. In addition, since January 2009, the Policy Training Unit has held monthly training sessions for contract specialists/contracting officers to ensure that directives issued by DOJ and FBI are being implemented properly.
4. To improve FBI's controls over its review and approval process for cost-reimbursement type contract invoices, the Director of FBI should instruct the Chief Financial Officer to establish policies and procedures so that future contracts clearly reflect the appropriate Federal Acquisition Regulation travel cost requirements, including the purchase of the lowest standard, coach, or equivalent airfare. FBI has established policies and procedures designed to provide guidance on its invoice review and approval process. Specifically, FBI issued two electronic communications that provided guidance on the invoice review and approval process. The first electronic communication, titled "Invoice Processing - Purchase Orders and Contracts," states that the Contracting Officer is responsible for ensuring that the requirements for a proper invoice are attached and incorporated as a condition of the purchase order and, for contracts, for ensuring that the applicable clause is included. The second electronic communication, titled "Vendor Invoice and Payment Matter," provides guidance on the information that constitutes a proper invoice and on the documentation required to support the payment of invoices. FBI incorporated these electronic communications in its Manual of Administrative Operations and Procedures (MAOP) Part 2 - Section 6-5.2, titled "Invoices under Purchase Orders/Contracts," issued in February 2007. The section states that prior to submitting an invoice to the FBI's Contracting Officer (CO) for approval and payment, the FBI requesting division is responsible for ensuring that goods and services are received in accordance with contract terms. Section 6-5.2 of the MAOP also states that the FBI CO is responsible for verifying that all required information is on the invoice before approving it.
To improve FBI’s controls over its review and approval process for cost-reimbursement type contract invoices, the Director of FBI should instruct the Chief Financial Officer to establish policies and procedures so that appropriate steps are taken during the invoice review and approval process for every invoice cost category (i.e., labor, travel, other direct costs, equipment, etc.) to verify that (1) invoices provide the information required in the contract to support the charges, (2) goods and services billed on invoices have been received, and (3) amounts are appropriate and in accordance with contract terms. 6. To improve FBI’s controls over its review and approval process for cost-reimbursement type contract invoices, the Director of FBI should instruct the Chief Financial Officer to establish policies and procedures so that the resolution of any questionable or unsupported charges on contractor invoices identified during the review process is properly documented. FBI has established policies and procedures related to properly documenting the resolution of any questionable or unsupported charges on invoices identified during the invoice review process. Specifically, Part 2, Section 6-9.3.3 of FBI’s Manual of Administrative Operations and Procedures (MAOP), version dated 2/26/07, titled “Review of Invoices,” and FBI’s electronic communication titled “Vendor Invoice and Payment Matter” specify requirements for FBI to properly document the reasons for determining that an invoice is improper, the date the invoice is returned to the vendor, and the date a corrected invoice is received from the vendor. Development of policies and procedures and other actions taken 7. To improve FBI’s controls over its review and approval process for cost-reimbursement type contract invoices, the Director of FBI should instruct the Chief Financial Officer to establish policies and procedures so that invoices are paid only after all verified purchase order and receipt documentation has been received by FBI payment officials and reconciled to the invoice package. 
In November 2006, FBI issued a policy titled “Implementation of Invoice Submission Form for Invoices and Intragovernmental Payment and Collection Process by CPCSU,” 319E-HQ-A1487524-FD. This policy requires that all invoices sent to the commercial payment unit beginning in November 2006 include a completed invoice submission form as the cover sheet. The required form includes the following fields to be completed by the submitting division: vendor number, invoice date, acceptance date, purchase order number, purchase order line number, purchase order quantity, total amount, and the dates the COTR and the contracting officer (1) received the invoice form, (2) approved it with their signatures, and (3) sent it on to the next responsible party. Collectively, these actions to establish policies requiring confirmation of receipt of goods and services before payment address our recommendation. 8. To address issues on the Trilogy project that could represent opportunities for recovery of costs, the Administrator of General Services, in coordination with the Director of FBI, should determine whether other contractor costs identified as questionable in this report should be reimbursed to FBI by contractors. 9. To address issues on the Trilogy project that could represent opportunities for recovery of costs, the Administrator of General Services, in coordination with the Director of FBI, should further investigate whether DynCorp Information Systems’ labor rates exceeded ceiling rates and pursue recovery of any amounts determined to have been overpaid. 10. To address issues on the Trilogy project that could represent opportunities for recovery of costs, the Administrator of General Services, in coordination with the Director of FBI, should confirm the Science Applications International Corporation’s (SAIC) informal Extended Work Week policy and work with SAIC to determine and resolve any overpaid amounts. 11. 
To address issues on the Trilogy project that could represent opportunities for recovery of costs, the Administrator of General Services, in coordination with the Director of FBI, should consider engaging an independent third party to conduct follow-up audit work on contractor billings, particularly in areas of vulnerability identified in this report. In January 2007, GSA requested, on behalf of FBI, that the Defense Contract Audit Agency (DCAA) perform post-award audits of direct costs incurred and billed by contractors under the FBI’s Trilogy project. In March 2008, DCAA reported the results of its audit of SAIC, questioning costs of $3.7 million. As a result of this audit, FBI recovered $3.2 million from SAIC. In December 2008, DCAA reported the results of its audit of the direct costs incurred and billed by Computer Sciences Corporation (CSC), questioning costs of $14.95 million, $9.7 million of which was related to labor charges. In addition to questioned costs related to labor rates exceeding ceiling rates, DCAA reported additional CSC questioned costs of $1,825,952 for airfare costs that were inadequately supported and exceeded the lowest customary standard coach or equivalent airfare, and $979,187 of labor costs for which personnel lacked the required qualifications or supporting documentation of the employees’ labor qualifications. DCAA’s report also incorporated evaluations of costs incurred by the largest subcontractors that performed under CSC’s task order. The report questioned costs for subcontractors CACI, DigitalNet, PlanetGov/Apptis, Inc., and others totaling $5.1 million. The types of subcontractor questioned costs reported by DCAA included (1) supporting timesheets that were either not certified by the consultant or not approved by an appropriate approving authority, (2) use of personnel who did not meet the minimum labor qualifications, and (3) unsupported or inadequately supported transactions. 
With regard to these questioned costs, in March 2009 the Department of Justice Office of Inspector General began an investigation to determine whether the billings were potentially fraudulent and involved criminal conduct by CSC and its subcontractors. In commenting on a draft of this report, FBI informed GAO that the Inspector General’s report on this matter was under review by OIG management. 12. To improve FBI’s accountability for purchased assets, the Director of FBI should instruct the Chief Financial Officer to reinforce existing policies and procedures so that when assets are delivered to FBI sites, they are verified against purchase orders and receiving reports. Copies of these documents should be forwarded to FBI officials responsible for reviewing invoices as support for payment. FBI has reinforced policies and procedures related to verifying assets received against purchase orders and receiving reports. Specifically, FBI issued an electronic communication to all divisions in November 2005 to reinforce FBI’s policy that all accountable property be entered in the Property Management Application (PMA) immediately upon receipt and that data recorded in FBI’s financial management system include the purchase order number and the destination division. The information entered into the financial management system is to be immediately uploaded to PMA for verification of the accuracy of property being recorded in PMA against specific purchase orders. In addition, FBI issued an electronic communication to all divisions in November 2006 to implement a new invoice submission form to be used with all commercial invoices to improve the information provided to the payment unit. 
The electronic communication required that the invoice submission form include various fields, including the purchase order line number to charge for the invoice, the purchase order quantity to be paid, and the total amount to be paid by purchase order line. 13. To improve FBI’s accountability for purchased assets, the Director of FBI should instruct the Chief Financial Officer to establish policies and procedures so that (1) purchase orders are sufficiently detailed that they can be used to verify receipt of equipment at FBI sites, and (2) contractor invoices are formatted to tie directly to purchase orders and facilitate easy identification of equipment received at each FBI site. In 2007, FBI formed the Policy Training Unit, which is responsible for all acquisition policy and acquisition training within FBI, including implementation, updates, and training on specialized acquisition matters. Since January 2009, the Policy Training Unit has held monthly contract specialist training sessions at which participants discuss new procurement guidance issued by DOJ, procurement directives issued by FBI, and their implementation. According to FBI, during these training sessions the Policy Training Unit staff have stressed to contracting officers the importance of generating purchase orders with a sufficient level of detail that the requesting division can use the purchase order to verify equipment receipt. In addition, FBI formed the Acquisition Strategy and Planning Unit in March 2006. The Acquisition Strategy and Planning Unit developed training materials for acquisition planning that included guidance on the FD-369, the requisition form also used in generating purchase orders, stating that equipment and services be listed on separate lines. 14. 
To improve FBI’s accountability for purchased assets, the Director of FBI should instruct the Chief Financial Officer to establish a policy to require that upon receipt of property at FBI sites, FBI personnel immediately identify all accountable assets and affix bar codes to them. In order to record an asset in PMA, it must have a bar code assigned. In November 2005, FBI issued an electronic communication to reinforce accountable property policies and procedures by requiring that all divisions record accountable property in PMA within 48 hours of receipt. If fully and effectively implemented, this policy should improve FBI’s accountability over purchased assets. 15. To improve FBI’s accountability for purchased assets, the Director of FBI should instruct the Chief Financial Officer to revise FBI’s policies and procedures to require that all bar codes are centrally issued and tracked through periodic reconciliation of bar codes issued against those used and remaining available. Assigned bar codes should also be noted on a copy of the receiving report and forwarded to FBI’s Property Management Unit. FBI has taken action to strengthen controls that help ensure all accountable property is bar coded and properly recorded by issuing a March 2009 electronic communication (approved by the Chief Financial Officer) that requires all FBI offices to perform a weekly review of the On Order Report to ensure that all property that has been receipted in FBI’s financial management system is also added to PMA. The On Order Report lists all property items associated with a particular purchase order that are not shown as having been recorded in PMA, thus providing FBI with the ability to identify and investigate potentially missing and/or unrecorded property items. 
Also, our September 2010 walk-through of the process to enter assets into PMA found that bar code numbers are annotated on copies of the receiving reports that are forwarded to the property custodian, as we had recommended. In addition, we noted that FBI’s Asset Management Unit is solely responsible for issuing bar codes to FBI offices and that the unit requires all offices to sign an FD-281 verifying receipt of bar codes. 16. To improve FBI’s accountability for purchased assets, the Director of FBI should instruct the Chief Financial Officer to revise FBI policies and procedures to require that accountable assets be entered into PMA immediately upon receipt rather than within the current 30-day time frame. In November 2005, FBI issued an electronic communication to reinforce accountable property policies and procedures by requiring that all divisions record accountable property in PMA within 48 hours of receipt. 17. To improve FBI’s accountability for purchased assets, the Director of FBI should instruct the Chief Financial Officer to require officials inputting data into PMA to enter (1) the actual purchase order number related to each accountable equipment item bought, (2) asset descriptions that are consistent with the purchase order description, and (3) the physical location of the property. FBI has taken action to enhance PMA to help ensure the accuracy and completeness of reporting on the status of property on order (the On Order Report), including installing a system edit limiting data entry to transactions with valid purchase order numbers. Specifically, FBI issued an electronic communication, “Property Management Application Policy Change,” on February 15, 2006, providing that assets added to PMA must include data in the PMA location field. 
We confirmed that FBI developed and put in place system edit checks that were effective in ensuring that only valid purchase order numbers were recorded in PMA and that related purchase order information was automatically and accurately uploaded from the financial management system into PMA. 18. To improve FBI’s accountability for purchased assets, the Director of FBI should instruct the Chief Financial Officer to establish policies and procedures related to the documentation of rejected or returned equipment so that (1) equipment that is rejected immediately upon delivery is notated on the receiving report that is forwarded to FBI officials responsible for invoice payment and (2) equipment that is returned after being accepted at an FBI site (e.g., items returned due to defect) is annotated in PMA, including the serial number and location of any replacement equipment, under the appropriate purchase order number. In September 2006, FBI issued an electronic communication to all divisions requiring that the Asset Management Unit be notified of any accountable property returned to vendors to ensure that the return of the property is recorded in PMA. If a replacement is provided by the vendor, the Asset Management Unit is to revise the PMA records to provide descriptive information for the new property item (serial number and bar code), show it as “Active,” and show the bar code of the returned piece of property as “Inactive.” In addition, when a piece of accountable property delivered to FBI is returned and a replacement is to be provided by the vendor, the item is to remain on FBI’s On Order Report until the replacement is received and is not to be recorded in PMA. We also noted during our discussions with officials from FBI’s commercial payment unit that FBI’s financial management system will not allow a disbursement of funds until the property has been recorded as received. 
19. To improve FBI’s accountability for purchased assets, the Director of FBI should instruct the Chief Financial Officer to expand the next planned physical inventory to include steps to verify the accuracy of asset identification information included in PMA. FBI issued an electronic communication, “2009 Wall-to-Wall Inventory of Property, Plant, Equipment, and Issued Personal Property,” that included additional directions to the inventory staff to ensure that property information recorded in PMA was accurate. Specifically, the directive included detailed steps on how to request changes to correct errors in the manufacturer or model description fields in PMA for property identified during the inventory. The directive included as an attachment a form titled “2009 INVENTORY CHANGES FOR MANUFACTURER & MODEL #s” to request a change or modification to these fields. 20. To improve FBI’s accountability for purchased assets, the Director of FBI should instruct the Chief Financial Officer to reassess overall physical inventory procedures so that all accountable assets are properly inventoried and captured in the PMA system and all unlocated assets are promptly investigated. FBI has reassessed its inventory procedures, as reflected in changes to instructions provided to all divisions through electronic communications in advance of wall-to-wall inventories. FBI issued the “2009 Wall-to-Wall Inventory of Property, Plant, Equipment and Issued Personal Property” electronic communication on March 31, 2009, for the wall-to-wall inventory beginning April 13, 2009. The electronic communication includes instructions related to findings we reported. For example, it states that during the inventory, only the PMA custodians will have access to change the serial number, because 95 percent of the manufacturer and/or model information has now been standardized in PMA. 
In addition, the 2009 electronic communication included language that stressed a complete inventory, stating that all additions must be added to PMA during the inventory period in order to comply with a “fully completed” wall-to-wall inventory; failure to comply with the procedure would result in an “incomplete” inventory. The 2009 electronic communication also included new language on property issued to individual employees and contractors, stating that each office is to ensure that the issued personal property for each employee and contractor is inventoried, that discrepancies for issued personal property are to be forwarded to the appropriate FBIHQ program managers, and that the Asset Management Unit should be notified of these changes. Lastly, the electronic communication included a section titled “Required Procedures for Concluding Inventory” with additional language related to lost and stolen property and additions. Beginning in 2008, the FBI Finance Division’s audit unit expanded its audit coverage to include conducting audits of compliance with FBI policies and procedures for review and approval of contractor invoices, government purchase card usage, and oversight of the semiannual audits of property and equipment at FBI field offices and divisions. Further, the Audit Unit Chief informed us that his unit expanded the scope of its audits to include audits of internal controls related to accountable property on a periodic basis, as needed, in response to significant events at FBI. For example, in fiscal year 2009 FBI initiated the Next-Generation Workstation Tech Refresh (NGW) program to purchase approximately 47,000 new computers. The Finance Division audit unit performed on-site reviews at the Seattle and Portland field offices to determine whether required controls over property accountability were in place as the computers from the NGW program were being distributed throughout the agency. 
21. To improve FBI’s accountability for purchased assets, the Director of FBI should instruct the Chief Financial Officer to establish an internal review mechanism to periodically spot check whether the steps listed above, including verification of purchase orders and receiving reports against received equipment, immediate identification and bar coding of accountable assets, maintenance of accurate asset listings, prompt entry of assets into PMA, documentation of rejected and returned equipment, and improved bar coding and inventory procedures, are being carried out. 22. To improve FBI’s accountability for purchased assets, the Director of FBI should instruct the Chief Financial Officer to investigate all missing, lost, and stolen assets identified in this report to (1) determine whether any confidential or sensitive information and data may be exposed to unauthorized users and (2) identify any patterns related to the equipment (e.g., by location, property custodian, etc.) that necessitate a change in FBI policies and procedures, such as assignment of new property custodians or additional training. In March 2011, FBI provided us with documentation on the results of its investigation into the status of the more than 1,200 assets, which acknowledged that 134 assets remained unaccounted for. The assets that remain unaccounted for include some that could contain sensitive information, such as desktop computers, laptops, and servers. We initially identified 1,404 assets as missing, lost, or stolen, but prior to the issuance of our 2006 report FBI provided additional documentation that enabled us to verify an additional 199 assets in PMA that we initially determined to be missing, lost, or stolen, reducing the number reported to 1,205. FBI has not provided an analysis to identify any patterns related to these assets that would be helpful for identifying any needed changes to FBI policies and procedures. 23. 
We recommend that the Director of the FBI direct the Sentinel Program Manager to modify existing Sentinel policies and procedures to require the Sentinel property manager to verify, for every property shipment, that data in the Lockheed Martin database are complete and accurate before using these data to create or update FBI’s official property records in PMA. FBI modified its existing policies and procedures to include a policy of verifying the accuracy and completeness of data in the Lockheed Martin database. Specifically, Section 2.2.3 of FBI’s Property Management Policy & Procedures, prepared by the Office of IT Program Management, Sentinel Program and Business Management Team, version dated 9/3/2008, titled “Receipt of Property,” requires the Sentinel Property Manager to verify the accuracy of equipment data on the property list uploaded by the contractor before using these data to enter equipment into PMA upon receipt and verification of equipment. The equipment data on the property list refer to the Lockheed Martin database for the Sentinel project. 24. We recommend that the Director of the FBI direct the Sentinel Program Manager to modify existing Sentinel policies and procedures to require that the Sentinel property manager perform monthly reconciliations of the key property records (i.e., the bill of materials (BOM), the vendor invoices, the Lockheed Martin database, and PMA) throughout each subsequent phase of Sentinel rather than a single close-out reconciliation at the completion of a phase. FBI modified its existing policies and procedures to include periodic reconciliation of the key property records. 
Specifically, Section 3.2 of FBI’s Property Management Policy & Procedures, prepared by the Office of IT Program Management, Sentinel Program and Business Management Team, version dated 9/3/2008, titled “Data Requirements,” specifies that the Sentinel Property Manager will perform at least monthly reconciliations of the key property records, including the bill of materials, invoices, the contractor property list (the Lockheed Martin database), and PMA, throughout the Sentinel program. 25. We recommend that the Director of the FBI direct the Sentinel Program Manager to modify existing Sentinel policies and procedures to require the Sentinel property manager to document the initial inspection of property as it is received, including verification that the property was properly barcoded. FBI modified its existing policies and procedures to include initial inspection of property upon receipt, including verification that the property was properly barcoded. Specifically, Section 2.2.3 of FBI’s Property Management Policy & Procedures, prepared by the Office of IT Program Management, Sentinel Program and Business Management Team, version dated 9/3/2008, titled “Receipt of Property,” specifies that the Sentinel Property Manager will match the quantity, manufacturer, make, model, serial number, and barcode number on the shipping document, packing lists, or invoice to the physical inventory of equipment received; after this verification, the document is to be annotated as received and signed by the Property Administrator and the Sentinel Property Manager. 26. We recommend that the Director of the FBI direct the Sentinel Program Manager to modify existing Sentinel policies and procedures to require the Sentinel property manager to record in the Lockheed Martin database the date Sentinel property is received to allow for assessments of whether Sentinel property was recorded into PMA in a timely manner. FBI modified its existing policies and procedures to require that the received date be tracked. 
Specifically, Section 3.2 of FBI’s Property Management Policy & Procedures, prepared by the Office of IT Program Management, Sentinel Program and Business Management Team, version dated 9/3/2008, titled “Data Requirements,” specifies that in order to be useful to both the PMO and the contractor, the key data points, including the date received and the Shipping Document/Item Receipt Verification Date, should be tracked in an electronic spreadsheet or listed on a locally devised inventory sheet. According to FBI’s Sentinel team, the “Date Received” column in the Lockheed Martin database now tracks the received date of each property item. 27. We recommend that the Director of the FBI direct the Sentinel Program Manager to modify existing Sentinel policies and procedures to require the Sentinel property manager to follow up on and document actions taken with respect to the 20 property records we identified as having valuation discrepancies, including any adjustments to the valuations in either FBI’s or the contractor’s records. FBI followed up on the 20 items identified in our report and took the necessary corrective action to eliminate the valuation discrepancies. We reviewed the accountable property records maintained in PMA and the Lockheed Martin database provided by FBI for those 20 items and verified that FBI made adequate adjustments to the valuations in both. Staff members who made key contributions to this report include: Phillip McIntyre, Assistant Director; William E. Brown; Sharon Byrd; Liliam Coronado; Joshua Edelman; Francine DelVecchio; Francis Dymond; Wendy Johnson; Galena Phillips; Lisa Reijula; and Seong Bin Park.
The FBI has spent over $900 million on the Trilogy and Sentinel information technology (IT) projects intended to provide FBI with an upgraded IT infrastructure and an automated case management system to support FBI agents and analysts. In February 2006 and July 2008, GAO reported on significant internal control weaknesses related to FBI's contract administration, processing of contractor invoices, and accountability for equipment acquired for these projects. GAO made 27 recommendations to the FBI to address these deficiencies. The FBI concurred with all 27 recommendations. This report provides an assessment of (1) the FBI's corrective actions to address GAO's 27 recommendations and (2) whether there were any indications of implementation issues related to the policies and procedures the FBI developed to address 17 of the 27 recommendations. GAO reviewed FBI policies and procedures, performed walk-throughs, and conducted detailed tests on statistically and nonstatistically selected samples of transactions. The corrective actions developed by FBI were sufficient to address 21 of the 22 Trilogy recommendations and all 5 Sentinel recommendations. The FBI substantially addressed 17 Trilogy recommendations related to contract administration, invoice processing, and property accountability by establishing or revising policies and procedures; 4 by contracting for follow-up audits of the Trilogy costs; and the 5 Sentinel recommendations by revising Sentinel policies and procedures. The one Trilogy recommendation that FBI did not address completely was related to 1,205 missing, lost, or stolen Trilogy assets. As of February 2011, the FBI had researched and determined the status of all but 134 of these assets. FBI officials stated that almost all of these assets had a useful life of 7 years, and if they were not already returned or destroyed, they are now obsolete. 
There are diminishing returns in continuing to pursue these assets, which include several information technology items that could potentially contain sensitive information. However, if the FBI is able to determine the status of any of these assets in the future, officials stated that they will make the entries to properly record them in FBI's property management application (PMA). In assessing implementation of the policies and procedures developed in response to GAO's 17 Trilogy recommendations related to contract administration, invoice processing, and property accountability, GAO found that policies and procedures related to the 4 recommendations dealing with contract administration, including interagency agreements, were effectively implemented but also identified a new issue. Specifically, GAO found that forms, required by the Federal Acquisition Regulation to support the use of interagency agreements to conveniently or economically obtain supplies and services, were not completed in a timely manner for 15 of 54 statistically selected interagency agreements tested, and found that FBI's monitoring did not identify this deficiency. GAO estimates that as much as 39.5 percent of FBI's fiscal year 2009 interagency agreements did not meet this requirement, increasing the risk that funds may have been disbursed for goods or services that were not in the best interest of the government. In addition, GAO's testing of FBI's implementation of policies and procedures for the remaining 13 recommendations, which were related to invoice processing and property accountability, found indications of implementation issues in 3 areas. (1) Regarding the review of contractors' invoices, 5 invoices (of the 37 tested) that had been reviewed and approved by FBI officials included labor rates that were not fully supported by the contract documentation. 
Without verifying labor charges against the contractor's proposal as required by FBI policy, there is an increased risk of disbursing funds for unallowable charges. (2) For property accountability, GAO found instances in which FBI did not record accountable property items in its system in a timely manner and did not accurately record key accountability information, such as location and serial numbers, as required by FBI's policies. These shortcomings increase the risk that assets could be lost or stolen and not be detected and investigated in a timely manner. GAO makes three new recommendations to improve interagency agreement controls and determine if additional actions are necessary to improve controls for invoice processing and property accountability. The FBI concurred with all three recommendations and discussed actions it has initiated to address GAO's recommendations.
Older industrial U.S. cities that have experienced steady, long-term population declines and job losses, called legacy cities, also have diminished revenues and ability to provide services, such as drinking water and wastewater services, according to recent studies. These cities are largely scattered across the Midwest and Northeast regions. Two studies identified a number of factors that have contributed to the cities’ decline, including the loss of major industries, suburban flight, and reduced housing market demand. These factors have contributed to such effects as decayed buildings and neighborhoods, or blight; increased vacant land; and increased rates of poverty. The two studies also noted that fiscal and other challenges for cities with declining populations were created by a combination of decreased revenues and increased costs of city services. Because most legacy cities reached their peak population levels in the 1950s and 1960s, they have experienced such declines for a long and sustained period and may face greater fiscal challenges than other cities. Many older U.S. cities, including legacy cities, also face water and wastewater infrastructure problems, including lead pipes in drinking water service lines that connect the main pipeline in the street to an individual home or apartment building. In the late 19th and early 20th centuries in the United States, lead was often used in the construction of drinking water service lines because of its malleability and ease of use, among other factors, as described in a National Bureau of Economic Research study. According to the results of a 2016 American Water Works Association survey, about 7 percent of the total population served by U.S. drinking water utilities has either full or partial lead service lines serving their homes. The survey results also indicate that the highest percentages of systems with lead service lines are located in the Midwest and Northeast. 
Ingesting lead may cause irreversible neurological damage as well as renal disease, cardiovascular effects, and reproductive toxicity. In addition, older U.S. cities, primarily in the Midwest and Northeast, have wastewater systems constructed as combined sewer systems and face challenges controlling overflows from these systems, called combined sewer overflows, during storms. Combined sewer systems collect stormwater runoff, domestic sewage, and industrial wastewater into one pipe, unlike sanitary sewer systems that collect domestic sewage and industrial wastewater in sewer lines that are separated from stormwater pipelines. Both types of systems may overflow during storm events. Under normal conditions, the wastewater collected in combined sewer pipes is transported to a wastewater treatment plant for treatment and then discharged into a nearby stream, river, lake, or other water body. However, during heavy rain or snow storms, when the volume of the wastewater can exceed a treatment plant’s capacity, combined sewer systems release excess untreated wastewater directly into nearby water bodies. According to EPA documents, as of September 2015, 859 communities across the country, primarily in the Northeast and Midwest, have combined sewer systems. According to the results of EPA’s 2012 survey of clean water infrastructure needs, projects to prevent or control combined sewer overflows, which involve building large holding tanks or tunnels, will cost about $48 billion over the next 20 years. The federal government works in partnership with states to help ensure drinking water is safe and to protect the quality of the nation’s rivers, streams, lakes, and other waters. As required by the Safe Drinking Water Act, EPA sets standards for public drinking water utilities that generally limit the levels of specific contaminants in drinking water that can adversely affect the public’s health. 
Under the Clean Water Act, EPA regulates point source pollution—that is, pollution such as wastewater coming from a discrete point, for example, an industrial facility or a wastewater treatment plant. Most states have primary responsibility for enforcing the applicable requirements of the Safe Drinking Water Act and administering the applicable requirements under the Clean Water Act, and EPA also has oversight and enforcement authority. Generally speaking, states and EPA may take administrative action, such as issuing administrative orders, or judicial action, such as suing an alleged violator in court, to enforce environmental laws such as the Safe Drinking Water Act and Clean Water Act. An administrative action may be issued as a consent order, which is an enforceable agreement among all parties involved, and a judicial action may result in a consent decree, which is also an enforceable agreement signed by all parties to the action. The federal government and states also provide financial assistance for water and wastewater infrastructure, either through grants to states or grants and loans to cities. EPA’s Drinking Water SRF and Clean Water SRF programs provide annual grants to states, which states use, among other things, to make low- or no-interest loans to local communities and utilities for various water and wastewater infrastructure projects. States are required to match the federal grants by providing an amount equal to at least 20 percent of the federal grants. EPA has provided about $18.3 billion to states for the Drinking Water SRF from 1997 through 2015 and about $39.5 billion for the Clean Water SRF from 1988 through 2015. In those same periods, states provided about $3.3 billion to the states’ Drinking Water SRF programs and about $7.4 billion to the states’ Clean Water SRF programs. 
In addition to the SRF programs, the federal government can provide financial assistance for water and wastewater infrastructure projects through two programs that primarily serve broader purposes, such as public works projects, housing assistance, and economic development assistance. The first program is HUD’s Community Development Block Grant Program, which provides federal funding to cities, counties, other communities, and states for housing, economic development, neighborhood revitalization, and other community development activities, including water and wastewater infrastructure. The second program is the Department of Commerce’s Economic Development Administration’s Public Works Program, which awards grants to economically distressed areas, including cities that meet the statutory and regulatory eligibility criteria, to help rehabilitate, expand, and improve their public works facilities, among other things. In addition, FEMA’s Public Assistance Grant Program and Hazard Mitigation Grant Program may provide funding for water and wastewater infrastructure projects in certain circumstances when the President has declared a major disaster. In addition to the funds they use to match federal grants, if required, states can also provide assistance to help water and wastewater utilities address infrastructure needs. More specifically, some states have special programs or funds to pay for water and wastewater projects, and others use their state bonding authority to provide funds to utilities for projects. For example, Georgia has the Georgia Fund, which provides low-interest loans to water and wastewater utilities for water, wastewater, and solid waste infrastructure projects. Ohio and West Virginia sell bonds to support utility projects. 
Water and wastewater utilities are generally subject to requirements under the Safe Drinking Water Act and Clean Water Act, respectively, and are responsible for managing and funding the infrastructure needed to meet requirements under these acts. To pay for general operations, maintenance, repair, and replacement of water and wastewater infrastructure, utilities generally follow a strategy of raising revenues by charging rates to their customers, according to an American Water Works Association document. More specifically, utilities charge users a rate for the water or wastewater service provided, raising these rates as needed. Utilities generally develop long-term capital improvement plans—from 5 to 20 years—to identify the pipes, plants, and other facilities they will need to repair and replace. To pay for large capital projects, utilities generally issue or sell tax-exempt municipal bonds in the bond market or get loans from banks, their state governments, or federal lenders. According to a 2016 Congressional Research Service report, in 2014, at least 70 percent of water and wastewater utilities relied on municipal bonds or other debt to finance their infrastructure needs and sold bonds totaling about $34 billion to pay for their infrastructure projects. Utility bonds are rated by the three major ratings agencies, Moody’s, Fitch, and Standard and Poor’s. As water and wastewater utilities increase rates to pay for maintaining old and building new infrastructure, according to government and industry groups, rate affordability is a concern, particularly for low-income customers. According to a 2010 Water Research Foundation study, one-third of customers in the lowest 20th percentile income level have had months when they could not pay all their utility bills on time and are three times more likely to have their service disconnected. 
The study also found that, when household budgets near poverty thresholds as defined by the Census Bureau, competing needs may determine whether a household can pay its utility bills. Furthermore, according to a 2016 Water Research Foundation study, utility revenues are affected by a reduction in the average per-household indoor water use, which has declined nationally by 22 percent since 1999 with the increased use of water conservation appliances like low-flow toilets and clothes washers. EPA addresses the affordability of water and wastewater utility rates in several different ways, including the following. The Safe Drinking Water Act authorizes states to provide additional subsidization to disadvantaged communities, which are service areas that meet state-established affordability criteria. Under the Safe Drinking Water Act, EPA must under some circumstances identify variance technology that is available and affordable for public water systems serving a population of 10,000 or fewer to meet new drinking water standards. As established in EPA’s 1998 variance technology findings, its most recent policy regarding drinking water affordability, EPA continues to use drinking water bills above a national-level 2.5 percent of median household income as its affordability criterion to identify affordable compliance technologies. The Clean Water Act authorizes states to provide additional subsidization to benefit certain municipalities, including those that meet state affordability criteria, in certain circumstances. We refer to municipalities that meet the affordability criteria as disadvantaged communities in this report. In 1994, EPA issued its Combined Sewer Overflow Control Policy, which remains in effect, to provide guidance for permitting and enforcement authorities to ensure that controls for combined sewer overflows are cost-effective and meet the objectives of the Clean Water Act. 
Under the policy, implementation of combined sewer overflow controls may be phased in over time depending on several factors, including the financial capability of the wastewater utility. EPA issued guidance in 1997 on how to assess a city’s financial capability as a part of negotiating schedules for implementing Clean Water Act requirements. The guidance considers wastewater costs per household that are below 2.0 percent of median household income to have a low or midrange effect on households. In 2016, EPA’s Water Infrastructure and Resiliency Finance Center, which was created in 2015 to provide expertise and guidance on water infrastructure financing, published a report on customer assistance programs that utilities across the United States have developed to help their low-income customers pay their bills. EPA’s Environmental Financial Advisory Board (a group created to provide expert advice on funding environmental programs and projects), the U.S. Conference of Mayors, industry groups, and others have critiqued EPA’s definition of affordability and have suggested that EPA use other measures to assess the effect of water and wastewater bills on low-income households and a community’s overall financial capability. For example, in 2007 and again in 2014, EPA’s Environmental Financial Advisory Board recommended that EPA use the lowest 20th percentile of income—as opposed to 2.5 percent of median household income—as a measure of a household’s ability to afford a rate increase when assessing the affordability, for low-income customers, of infrastructure to control combined sewer overflows. In 2013, the U.S. Conference of Mayors issued a tool for assessing affordability that, based on EPA policies, considers combined water and wastewater bills of less than 4.5 percent of household income to be affordable. 
Based on discussions with local governments and in response to these critiques, EPA has taken steps to clarify its guidance with memorandums issued in 2012 and 2014, which describe flexibilities in applying affordability indicators. Legislation has been introduced to address the affordability of increases in utility rates. One bill, the Water Resources and Development Act of 2016, introduced in the Senate in April 2016, would provide a definition of affordability that differs from current EPA definitions and would require EPA to update its financial capability guidance after a National Academy of Public Administration study on affordability. Another bill would provide federal assistance to help low-income households maintain access to sanitation services, including wastewater services. According to industry reports about the proposed legislation, the proposed program is similar to the Department of Health and Human Services’ Low Income Home Energy Assistance Program that provides assistance to low-income households to help pay their heating bills. Midsize and large cities with declining populations are generally more economically distressed, with higher poverty and unemployment rates and lower per capita income than growing cities. Little research has been done on the water and wastewater infrastructure needs of cities with declining populations, but the needs of 10 selected midsize and large cities we reviewed generally reflected the needs of cities nationally. Of the 674 midsize and large cities across the nation that had a 2010 population greater than 50,000, 99 (15 percent) experienced some level of population decline from 1980 to 2010. As shown in figure 1, about half of these 99 midsize and large cities (50) are in the Midwest; 28 percent (28) are located in the Northeast; and 21 percent (21) are located in the South. None of these midsize and large cities with declining populations was located in the western states. 
Michigan and Ohio have the largest numbers of midsize and large cities with declining populations—each with 14 cities. Based on our analysis of the Census Bureau’s American Community Survey data (5-year estimates for 2010 through 2014), cities with declining populations have had significantly higher rates of poverty and unemployment and lower household income—characteristics of economic distress—compared with growing cities of the same size. Compared with midsize and large cities that had growing populations over the same time, cities with declining populations had higher estimated poverty rates (23.6 percent compared with 16.5 percent), higher estimated levels of unemployment (12.5 percent compared with 9.2 percent), and lower estimated median household income ($40,993 compared with $57,729), as shown in table 1. These differences become more stark when cities with the greatest rates of population loss are compared with cities with the greatest rates of growth. Specifically, the 19 cities that lost 20 percent or more of their population had an average poverty rate of 31.4 percent compared with an average of 16.3 percent for cities with 20 percent or more growth. Moreover, unemployment in cities with the greatest estimated population loss was 16.5 percent compared with 9.1 percent in the highest-growth cities, and median household income was $32,242 compared with $58,140. Another distinguishing factor for cities with declining populations is high levels of vacant housing and low median home values. On average, cities with declining populations had 13.5 percent of their housing stock vacant, and growing cities had vacancy rates of 8.6 percent. Cities with the greatest population loss had nearly 20 percent vacant housing stock (19.7 percent), compared with 8.5 percent in cities with the most population growth. 
Cities with declining populations also had much older housing stock (with the average house built in 1954 compared with 1976) and lower median home values ($137,263 compared with $253,522). Cities with declining populations also had some significantly different demographic characteristics from cities with growing populations. The 99 cities with declining populations had a higher estimated share of African American residents than cities with growing populations (28.5 percent compared with 11.1 percent) and a lower estimated share of the population with bachelor’s degrees (24.4 percent compared with 32.5 percent). (See table 2 for details on characteristics.) Academic research on U.S. cities with declining populations has been conducted for over a decade but has not focused on the water and wastewater infrastructure needs of these cities. The few studies and EPA reports we identified on water and wastewater infrastructure needs in cities with declining populations focused on the feasibility and challenges of rightsizing infrastructure, that is, downsizing or eliminating underutilized infrastructure to meet reduced demands. Among other challenges to rightsizing infrastructure, the studies described significant capital costs in decommissioning existing infrastructure and physical difficulty in removing components in depopulated areas without affecting the entire water or wastewater system. These studies also provided information on other strategies for maintaining underutilized water infrastructure in cities with declining populations. These strategies include using asset management to establish maintenance priorities and repair schedules; coordinating projects for water, wastewater, road, and other infrastructure to gain cost efficiencies; and using vacant lands for stormwater management generally and to help control sewer overflows as part of rightsizing. 
In addition, the studies highlighted the financial challenges of utilities managing water and wastewater infrastructure in cities with declining populations, resulting from decreasing revenues from fewer ratepayers, and personnel challenges of these utilities because of reductions in personnel to achieve cost savings. EPA’s 2011 drinking water needs survey found that nationally, the largest infrastructure needs identified, by estimated costs, addressed two areas: distribution and transmission systems and drinking water treatment infrastructure. Distribution and transmission systems include pipelines that carry drinking water from a water source to the treatment plant or from the treatment plant to the customer. Drinking water treatment infrastructure includes equipment that treats water or removes contaminants. Consistent with EPA’s national estimates, representatives we interviewed from seven of nine drinking water utilities for the 10 cities identified pipeline repair and replacement as a major need. For example, representatives from one utility told us that its distribution pipelines were approximately 80 years old and that within the next 15 to 20 years almost all of them will need to be updated. Representatives from another utility said that almost all 740 miles of the utility’s pipelines need to be replaced. At roughly $100 per foot, replacing all pipelines will cost more than $390 million. Representatives from seven of the nine drinking water utilities said that their utilities had high leakage rates (sometimes reflected in estimates of nonrevenue water), ranging from about 18 to 60 percent, above the 10 to 15 percent maximum water loss considered acceptable in most states according to an EPA document and indicating the need for pipeline repair or replacement. (See app. III for details of utilities’ drinking water infrastructure needs for the 10 cities.) 
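The pipeline replacement estimate above can be checked with simple arithmetic; a minimal sketch (the 740-mile length and roughly $100-per-foot unit cost are the utility's own figures, and the mile-to-foot conversion is standard):

```python
# Rough check of the utility's pipeline replacement estimate:
# 740 miles of pipeline at roughly $100 per foot of pipe replaced.
FEET_PER_MILE = 5280

miles_of_pipeline = 740
cost_per_foot = 100  # dollars, per the utility's rough estimate

total_cost = miles_of_pipeline * FEET_PER_MILE * cost_per_foot
print(f"${total_cost / 1e6:.0f} million")  # about $391 million, consistent with "more than $390 million"
```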
Of the 10 utilities we reviewed that were responsible for drinking water infrastructure, representatives from 6 noted that they were aware that some utility-owned or customer-owned portions of the service lines connecting individual houses or apartment buildings to the main water lines contain or may contain lead, although most of these utilities did not express concern about the risk of lead in their water. In addition, representatives we interviewed from 5 drinking water utilities out of the 10 we reviewed named treatment plant repair and replacement as one of their greatest needs. Representatives from one utility told us that the utility’s water treatment plant is over 100 years old and is in need of replacement or backup, which they said would cost an estimated $68.6 million. The clear well in the plant, that is, the storage tank used to disinfect filtered water, was built in 1908. If the tank fails, the main source of potable water for customers would be interrupted, leaving the community without water. EPA’s 2012 wastewater needs survey found that the largest infrastructure needs for wastewater systems fell into three categories: combined sewer overflow correction (i.e., control of overflows in combined sewer systems); wastewater treatment, or the infrastructure needed to meet EPA treatment standards; and conveyance system repair, or the infrastructure needed to repair or replace sewer pipelines and connected components to maintain structural integrity of the system or to address inflow of groundwater into the sewer system. Consistent with EPA’s national estimates, utilities serving 7 of the 10 cities we reviewed face high costs to control combined sewer overflows. (See app. IV for details of utilities’ wastewater infrastructure needs for the 10 cities.) 
According to EPA’s wastewater needs survey, estimated costs for infrastructure improvements to control combined sewer overflows for wastewater utilities serving 7 of the 10 cities we reviewed ranged from $7.1 million to $1.98 billion. In addition, representatives we interviewed from wastewater utilities that serve 5 of the 10 cities we reviewed said that they needed to repair or replace their treatment plants. For example, representatives from one utility said that 90 percent of the utility’s original wastewater treatment plant, which was built in 1938, was still in place and required constant attention to keep it running. Finally, representatives we interviewed from wastewater utilities providing services to 9 of the 10 cities we reviewed discussed collection system repair as a major need. For example, representatives from one utility said that the city sewer lines date back to the mid-1800s. They recently replaced two blocks of the oldest section of sewer lines for $3 million. Our sample of 14 utilities in the 10 cities we reviewed used the traditional strategy of raising rates to increase revenues to address their infrastructure needs, although representatives from half of them said that they had concerns about rate affordability and their future ability to raise rates. All utilities we reviewed also had developed one or more types of customer assistance programs, a strategy to help low-income customers pay their bills. In addition, most utilities were using or had plans to use one or more cost control strategies to address their infrastructure needs, such as asset management (i.e., identifying and prioritizing assets for routine repair or replacement versus emergency repair) or rightsizing to physically change infrastructure to meet current demands (e.g., reducing treatment capacity or decommissioning water lines and sewer lines in vacant areas). 
Our sample of 14 utilities in the 10 cities we reviewed used the traditional strategy of increasing revenue—raising rates as needed and selling bonds to pay for their infrastructure needs. Of the 14 utilities we reviewed, most raised rates annually, and all but 2 utilities had raised rates at least once since 2012. (See app. V for utilities’ operating revenues, operating expenses, and rate changes.) In addition, according to our review of the utilities’ financial statements, 11 of 14 experienced a decline in revenues in 1 of the years from 2012 through 2014, and over these years raised utility rates, which helped make up for lost revenues or cover increasing operation and maintenance costs. In contrast, the remaining 3 utilities for which we reviewed available financial statements had increasing revenues over the same period. Of the 3 utilities, 2 also raised rates by 9 percent or more in 2 or more consecutive years from 2012 through 2014; the other utility was privately owned and operated and maintained steady revenues with an overall increase of less than 1 percent. Most of the 14 utilities we reviewed used a common rate structure through which customers were charged a modest base rate plus a larger variable rate by volume of water used, according to studies conducted on utility rates. Such a rate structure produces reduced revenues as the amount of water used and sold decreases. In addition to the decline in water use and revenues that many utilities are experiencing nationally, utilities with declining populations are further affected by reduced water sales to fewer ratepayers and face additional declines in revenues. Furthermore, according to representatives we interviewed from some of the utilities, declining populations resulted in operational changes that increased operating costs for their utilities. 
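The revenue sensitivity of this common rate structure (a modest fixed base charge plus a larger volumetric charge) can be illustrated with a simple sketch; the customer counts, charges, and usage figures below are hypothetical, not data from the utilities we reviewed:

```python
# Under a base-plus-volumetric rate structure, most revenue scales with water
# sold, so declining use and fewer ratepayers directly reduce revenue.
# All inputs are illustrative.

def annual_revenue(customers: int, monthly_base_charge: float,
                   price_per_unit: float, annual_units_per_customer: float) -> float:
    return customers * (12 * monthly_base_charge + price_per_unit * annual_units_per_customer)

before = annual_revenue(100_000, 8.00, 4.00, 120)
# Hypothetical decline: 10% fewer customers, 22% less use per customer
after = annual_revenue(90_000, 8.00, 4.00, 120 * 0.78)
print(f"revenue change: {100 * (after - before) / before:.1f}%")  # about -26.5%
```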
For example, utility representatives told us that when water sits for extended periods, such as in storage, it may lose its chlorine residual, which allows bacteria and viruses to grow and multiply. For wastewater systems, reduced water flow during dry weather has resulted in stronger sewage sludge and solid deposits that require an adjustment of wastewater treatment processes, according to utility representatives. Even with increased rates, many of the utilities we reviewed deferred planned repair and replacement projects and consequently expended resources on addressing emergencies, such as repairing water pipeline breaks. One water utility management professional estimated that emergency repairs can cost three to four times more than regular repairs. Specifically, representatives we interviewed from half of the utilities willing to speak with us (6 of 12) described themselves as being more reactive in repair and replacement of drinking water and wastewater infrastructure. Representatives from these utilities also told us that they do not have sufficient funding to meet their repair and replacement needs, and some noted large backlogs of planned repair and replacement projects. For example, representatives from one of the utilities we reviewed told us that the utility’s current level of investment would result in the replacement of its water and wastewater infrastructure in 400 years, versus replacement within the industry standard of up to 100 years (or a replacement schedule at 1 percent of infrastructure per year). The 5-year capital plan for another utility we reviewed deferred nearly two-thirds of the listed capital improvement projects because of lack of funding. Representatives from another utility described plans to spend about $8 million to replace water pipelines, but learned that they should be investing about twice as much to maintain their existing service levels, based on recent modeling of the system. 
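The 400-year figure above follows directly from the replacement rate; a minimal sketch (the industry benchmark of 1 percent per year comes from the text, while the 0.25 percent rate is simply what a 400-year cycle implies, not a figure the utility reported):

```python
# Years to replace an entire system = 100 / (percent of infrastructure replaced per year).

def replacement_cycle_years(percent_replaced_per_year: float) -> float:
    return 100.0 / percent_replaced_per_year

print(replacement_cycle_years(1.0))   # 100.0 -- industry-standard pace of 1% per year
print(replacement_cycle_years(0.25))  # 400.0 -- the pace implied by the utility's investment level
```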
With increased rates, representatives we interviewed from more than half of the utilities willing to speak with us identified concerns with keeping customer rates affordable. Specifically, representatives we interviewed from 7 of 12 utilities expressed concern about the affordability of future rate increases for low-income households (i.e., those that have incomes in the lowest 20th percentile income level). Affordability of water and wastewater bills is commonly measured by the average residential bill as a percentage of median household income. Our analysis of the water and wastewater rates charged in fiscal year 2015 by the 14 utilities we reviewed showed that rates for both water and wastewater bills were considered affordable for customers at or above the median household income. However, these rates were higher than the amount considered to be affordable for low-income customers in 9 of 10 cities we reviewed (see fig. 2). The U.S. Conference of Mayors estimated combined annual water and wastewater bills of more than 4.5 percent of income as unaffordable based on EPA policies. In 4 of the 10 cities we reviewed, the average water and wastewater bill was more than 8 percent of income for low-income households. Although utility representatives are generally concerned about the affordability of rates, few of the utilities we interviewed planned to change their rate structures, even though such changes can generate a more reliable and predictable revenue stream to cover costs, according to a 2014 utility study. Of the 12 utilities whose representatives we interviewed, representatives for 2 said that they were interested in making rate structure changes that would increase cost recovery and that they planned to make incremental changes over time. 
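The affordability screen discussed above, the annual combined bill as a percentage of household income measured against the U.S. Conference of Mayors' 4.5 percent threshold, can be sketched as follows; the bill and income amounts are hypothetical illustrations:

```python
# Affordability screen: annual water + wastewater bill as a percentage of
# household income, with combined bills above 4.5 percent of income treated
# as unaffordable (the U.S. Conference of Mayors threshold cited in the text).
UNAFFORDABLE_THRESHOLD = 4.5  # percent of income

def bill_burden_pct(annual_water_bill: float, annual_sewer_bill: float,
                    household_income: float) -> float:
    return 100.0 * (annual_water_bill + annual_sewer_bill) / household_income

# Illustrative (hypothetical) amounts, not figures from the report:
burden = bill_burden_pct(480, 600, 20_000)
print(f"{burden:.1f}% of income; unaffordable: {burden > UNAFFORDABLE_THRESHOLD}")  # 5.4%; True
```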
In addition, 1 utility—Jefferson County, which provides wastewater services to Birmingham—had already made significant changes to its rate structure to stabilize revenues and to meet requirements for exiting bankruptcy. This utility replaced the minimum charge with a monthly base charge scaled by meter size for all customers. The utility also altered its rate structure for the volume of water used for residential customers from a flat fee per volume of water used to an increasing block rate structure where higher fees are charged for incremental blocks of increased water usage. A 2014 Water Research Foundation study stated that utility representatives hesitate to make rate structure changes because of the potential to significantly alter customers’ monthly bills, and highlighted the need for stakeholders and utility board members to undertake an education and communication strategy when making such changes. In addition to their concerns about the affordability of rates, a few representatives we interviewed said that they expect to have future challenges using bond funding because of the rate increases needed to pay for the bonds. Specifically, representatives we interviewed from 2 of the 12 utilities willing to speak with us—Gary Sanitary District and the city of Youngstown—said that they expected the increased rates would be difficult to afford for residents of the two cities, where the median household income is about half the national average and the poverty rate is above 37 percent. All 12 of the utilities whose representatives we interviewed have used bond funding to help finance their water and wastewater infrastructure needs. Of the 14 utilities we reviewed, 10 had strong to very strong ability to pay long-term debt as indicated by fiscal year 2014 debt service coverage ratios we calculated, 2 had moderate ability, and 2 had poor or weak ability. 
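An increasing block rate structure of the kind Jefferson County adopted can be sketched as follows; the base charge, block sizes, and per-unit prices are hypothetical illustrations, not the utility's actual tariff:

```python
# Increasing block rates: each successive block of water use is billed at a
# higher per-unit price. Block sizes and prices below are illustrative only.
BLOCKS = [                  # (block size in units of water, price per unit)
    (4, 3.00),              # first 4 units
    (6, 4.50),              # next 6 units
    (float("inf"), 6.00),   # all remaining units
]

def monthly_bill(usage: float, base_charge: float = 10.00) -> float:
    bill = base_charge
    remaining = usage
    for block_size, price in BLOCKS:
        used = min(remaining, block_size)
        bill += used * price
        remaining -= used
        if remaining <= 0:
            break
    return bill

print(monthly_bill(3))   # 10 + 3*3.00 = 19.0
print(monthly_bill(12))  # 10 + 4*3.00 + 6*4.50 + 2*6.00 = 61.0
```

Because each incremental block is priced higher, heavier users pay a higher average per-unit rate, which is how this structure can both stabilize revenue and keep basic usage relatively inexpensive.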
In addition, for 8 of the 14 utilities, their bonds as of June 2016 were ranked within an A level range by the ratings agencies, indicating that they were expected to be able to cover the annual payments for these bonds (see app. VI for the utilities’ financial indicators). All 14 of the utilities we reviewed had developed one or more types of customer assistance programs as a strategy to make rates more affordable for customers who had financial difficulty paying their bills. For 5 of the 14 utilities we reviewed, more than 25 percent of their customers were late in paying their bills. Two of the utilities—Detroit Water and Sewerage Department and Gary Sanitary District—had particularly large numbers of customers who were unable to pay their bills, which was reflected in the lower estimated revenue collection rates of about 86 percent of in-city customers in Detroit and 69 percent of Gary Sanitary District customers, respectively, compared with collection rates averaging 98 percent by the other 8 utilities we reviewed where data were available. For both of these utilities, collecting payments from customers was a challenge, and shut off of water and wastewater services was not uncommon. For example, Detroit Water and Sewerage Department representatives told us that they were still struggling with collections and had lost from $40 million to $50 million in forgone revenues annually for the past few years because of the low collection rate, and had budgeted an additional $1.6 million in fiscal year 2016 to cover expenses related to collecting on delinquent accounts. Similarly, a Gary Sanitary District representative told us that even with rate increases of 30 percent in 2011, revenues had not increased correspondingly and water service shutoffs had increased because customers were unable to pay their bills. 
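The debt service coverage ratio used in this analysis is a standard measure of a utility's ability to pay long-term debt; a minimal sketch with hypothetical amounts (the report's actual ratios and classifications appear in app. VI):

```python
# Debt service coverage ratio (DSCR): net revenues available for debt service
# divided by annual debt service (principal plus interest). A ratio above 1.0
# means the year's revenues cover the year's debt payments; the inputs below
# are illustrative, not figures for any utility we reviewed.

def dscr(operating_revenue: float, operating_expenses: float,
         annual_debt_service: float) -> float:
    return (operating_revenue - operating_expenses) / annual_debt_service

ratio = dscr(50_000_000, 35_000_000, 10_000_000)
print(round(ratio, 2))  # 1.5
```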
According to collections information provided by Gary Sanitary District, in fiscal year 2015, approximately 21 percent of accounts were shut off because of nonpayment. (See app. VII for details on rates and billing collections information for the 14 utilities we reviewed.) At a minimum, nearly all of the utilities we reviewed (13 of 14) entered into payment plans or agreements with customers with unpaid bills (see table 3). In some cases, payment plan assistance was described as more informal or ad hoc, with flexibility to develop a plan that is agreeable to the customer and the utility, depending on the customer’s ability to pay. Other utilities had formalized payment plan programs or policies, requiring a customer to make an initial minimum payment on the outstanding bills, and then accepting payment of the remaining amount in monthly installments over a period of time. In addition, half of the utilities we reviewed (7 of 14) offered direct assistance to low-income, elderly, or disabled customers through bill discounts or assistance to eligible customers in good standing, short-term assistance with unpaid bills (e.g., credit for payment of outstanding water and wastewater bills) and with minor plumbing repairs (e.g., for leaks that can increase water use and monthly bills), or some combination of these three types of assistance. Different rate structures, such as a lifeline rate or reducing fixed charges, can assist low-income or financially constrained customers, according to a 2010 Water Research Foundation study and EPA’s 2016 report on customer assistance programs, but few of the 14 utilities we reviewed use such structures. For example, through a lifeline rate, a utility can provide its customers with a minimum amount of water to cover basic needs at a fixed base charge. When a customer uses more water than the minimum allotment, the utility increases the rate charged, which in turn increases the customer’s bill. 
Lifeline or other alternative rates may be targeted to low-income customers, but none of the utilities we reviewed provided special rates based on income. Representatives we interviewed from one utility said that they consciously revised the utility’s rate structure to include lifeline rates to address the needs of customers who could not afford higher rates. An additional 3 of the 14 utilities we reviewed had rate structures that included some volume of water usage with their fixed base charge. Representatives we interviewed from a few utilities (3 of 12) told us that charging special rates for low-income customers was not an option because of local or state laws that do not allow the utilities to differentiate rates among customers. For example, Detroit’s Blue Ribbon Panel on Affordability’s February 2016 report noted potential legal constraints in the state of Michigan on implementing an income-based rate structure, in which customers pay a percentage of their income toward their water bills. Most of the utilities (13 of 14) we reviewed were using or had plans to use one or more strategies to address their water and wastewater infrastructure needs by controlling costs or increasing the efficiency of the physical infrastructure or overall management of the utility. For example, asset management can help utilities more efficiently identify, prioritize, and plan for routine repair or replacement of their assets, rather than facing costly emergency repairs. Table 4 shows the strategies used by the 14 utilities we reviewed, including asset management, major reorganization, and rightsizing physical infrastructure to meet current demands. Overall, the most common cost control and efficiency strategy used by the 14 water and wastewater utilities we reviewed was asset management. Some of the utilities (4 of 14) had asset management programs in place, and most of the remaining utilities had plans for or were in the initial stages of implementing the strategy. 
In contrast, we found that the other strategies—rightsizing, major reorganization, expanding the utility’s customer base, and public-private partnerships—were used to a limited extent by the utilities we reviewed. In particular, rightsizing was among the least-used strategies, with many of the utility representatives we interviewed telling us that it was not practical or feasible. For example, even with vacant housing averaging 21 percent in these cities, according to American Community Survey data (5-year estimates, 2010 through 2014), representatives of some utilities we reviewed (6 of 14) told us that decommissioning water and sewer lines was not practical because they did not have entirely vacant blocks or needed to maintain lines to reach houses that were farther away. However, as part of rightsizing, representatives we interviewed for five wastewater utilities said that they had incorporated in their plans, or were considering using, vacant lands for green infrastructure to help control stormwater runoff that can lead to sewer overflows. Green infrastructure uses a range of controls, such as vegetated areas, stormwater collection, or permeable pavement, to enhance storage, infiltration, evapotranspiration, or reuse of stormwater on the site where it is generated. (See app. VIII for information on utilities’ use of cost control strategies.) While not specifically designed to address the water infrastructure needs of midsize and large cities with declining populations, six federal programs and one policy we reviewed could provide these cities with some assistance. As of June 2016, none of the six federal programs we reviewed, administered by four agencies that fund water and wastewater infrastructure, was specifically designed to assist such cities in addressing their water infrastructure needs. Yet most of the 14 utilities we reviewed received funding from one or more of these programs for their water and wastewater infrastructure projects. 
In addition to these programs, under EPA’s 1994 Combined Sewer Overflow Policy, cities or utilities meeting eligibility criteria can take a phased approach over an extended period to build the needed infrastructure to correct combined sewer overflows and comply with the Clean Water Act. None of the six federal programs we reviewed that can fund water and wastewater infrastructure needs were specifically designed to provide funds to cities with declining populations for water and wastewater infrastructure projects. The programs are as follows: Drinking Water and Clean Water SRF programs. Under the Safe Drinking Water Act and Clean Water Act, EPA provides annual grants to states to capitalize their state-level Drinking Water and Clean Water SRF programs, and states can use the grants to provide funding assistance to utilities, including low- or no-interest loans, among other things. Overall, the state Drinking Water SRF and Clean Water SRF programs help reduce utilities’ infrastructure costs, increase access to low-cost financing, and help keep customer rates affordable. The federal laws establishing the SRF programs do not specifically address cities with declining populations, although states are generally authorized to use a percentage of their capitalization grants to provide additional subsidies to disadvantaged communities. States provide additional subsidies in the form of principal forgiveness or negative interest rates, which reduce loan repayment amounts. The amounts that states set aside for additional subsidies vary from year to year based on requirements in annual appropriations acts and state funding decisions. Most of the 10 states in which the 10 cities in our review were located used median household income as one indicator for disadvantaged communities for both Drinking Water and Clean Water SRF programs. HUD Community Development Block Grants. 
HUD provides federal funding, through the Community Development Block Grant program, for housing, economic development, neighborhood revitalization, and other community development activities, including water and wastewater infrastructure. The department provides block grant funding to metropolitan cities and urban counties across the country, known as entitlement communities, and to states for distribution to non-entitlement communities. Federal law requires that not less than 70 percent of the total Community Development Block Grant funding be used for activities that benefit low- and moderate-income persons. In 2015, HUD provided $2.3 billion in block grant funding to entitlement communities, including midsize and large cities. However, according to department officials we interviewed, entitlement communities choose to use only a small portion of the grant funding to support water and wastewater infrastructure projects. In fiscal year 2015, according to HUD data, about $43.8 million, or 1.9 percent, of block grant funding provided to entitlement communities, including midsize and large cities, was used for water and wastewater infrastructure projects. Economic Development Administration Public Works program. The administration’s Public Works program awards grants competitively to economically distressed areas, including cities that meet the eligibility criteria, to help rehabilitate, expand, and improve their public works facilities, among other things. A Public Works grant is awarded if, among other things, a project will improve opportunities for the successful establishment or expansion of industrial or commercial facilities, assist in the creation of additional long-term employment opportunities, or primarily benefit the long-term unemployed and members of low-income families in the region. 
In fiscal year 2015, according to Economic Development Administration data, the agency provided $101 million in Public Works grants, of which about $14.9 million, or 14.7 percent, was used for water or wastewater infrastructure projects. Agency officials told us that the program’s main priority is enabling distressed communities to attract new industry, encourage business expansion, diversify local economies, and generate or retain long-term jobs in the private sector. As a result, projects funded with Public Works grants may include a water infrastructure project, but that water infrastructure project would be a secondary effect of an economic development project. Agency officials said that a common water and wastewater infrastructure project funded by Public Works program grants involves installing a main drinking water pipeline or sewer line to a new or renovated industrial park. FEMA Public Assistance and Hazard Mitigation grant programs. FEMA’s Public Assistance and Hazard Mitigation grant programs may provide funding for water and wastewater infrastructure projects when the President has declared a major disaster, but these programs are not specifically designed to assist cities with declining populations. The agency’s Public Assistance program provides grants to states and others for the repair, restoration, reconstruction, or replacement of public facilities, including water and wastewater infrastructure damaged or destroyed by such a disaster. In fiscal year 2015, FEMA awarded about $6.5 billion for public assistance projects; however, the agency was unable to determine the portion of public assistance funding that was used for water and wastewater infrastructure projects. The agency’s Hazard Mitigation grant program provides grants for certain hazard mitigation projects to substantially reduce the risk of future damage, hardship, loss, or suffering in any area affected by a major disaster. 
In fiscal year 2015, FEMA awarded about $1.2 billion in grants to states and communities for mitigation projects. Of that amount, about $8.1 million, or 0.7 percent, was awarded for water and wastewater mitigation projects, according to Hazard Mitigation grant program data. Hazard Mitigation grants do not need to be used for a project within the designated disaster area as long as the project has a beneficial effect on that area. The grants are competitively awarded to states, which identify in their applications the mitigation projects that would be funded with the grants. Cities, including those with declining populations, can submit applications to the state for Hazard Mitigation projects for their water and wastewater facilities, which the state may choose to include in its Hazard Mitigation grant application to FEMA. While these six programs were not specifically designed to provide funding to cities with declining populations, such cities or their related utilities can receive funding from these programs for water and wastewater infrastructure projects. Table 5 shows the funding that each of the utilities in our 10 selected cities received from the programs from fiscal years 2010 through 2015. In total, the cities received almost $984 million from the federal agencies. As shown in table 5, 11 of the 14 utilities we reviewed received Drinking Water or Clean Water SRF funding from fiscal years 2010 through 2015, and 1 utility was awarded additional subsidies. Specifically, the Birmingham Water Works Board received $1.7 million (out of $11.6 million) from the Drinking Water SRF program as an additional subsidy in the form of principal forgiveness for green projects, or water infrastructure projects that include energy and water efficiency improvements, green infrastructure, or other environmentally innovative activities. 
Representatives of most of the 12 utilities we interviewed said that SRF funding was the most common federal funding they received for water and wastewater infrastructure projects. Overall, in fiscal year 2015, 41 states provided about $416 million, or 23 percent, of their Drinking Water SRF program funds for water and wastewater infrastructure projects in disadvantaged communities, and 31 states provided about $648 million, or 12 percent, of their Clean Water SRF program funds for such projects (see fig. 3). Representatives we interviewed from some utilities said that it is difficult to use SRF funding because the total amount of funding available statewide is limited; states restrict the amount of funding available to individual projects; and states prioritize projects that address Safe Drinking Water Act and Clean Water Act compliance issues, such as acute violations of drinking water standards or health advisory levels. As also shown in table 5, 1 of the 14 utilities we reviewed, the Sewerage and Water Board of New Orleans, received Community Development Block Grant funds for water and wastewater infrastructure projects from fiscal years 2010 through 2015. Officials in Youngstown, Ohio, also told us that some block grant funding was awarded to faith-based organizations to provide low-income residents with various types of housing and other assistance, which may include assistance with paying utility bills. None of the 14 utilities we reviewed received the Economic Development Administration’s Public Works funding for water or wastewater infrastructure projects from fiscal years 2010 through 2015. The FEMA programs—Public Assistance and Hazard Mitigation—provided nearly 50 percent of the total federal funding for water and wastewater infrastructure received by the cities we reviewed in fiscal years 2010 through 2015. 
Specifically, 2 of the 14 utilities we reviewed—the Sewerage and Water Board of New Orleans and the Charleston Sanitary Board—received Public Assistance grants from FEMA after flood events in fiscal years 2010 through 2015. In addition, 2 of the 14 utilities we reviewed—the Birmingham Water Works Board and the Sewerage and Water Board of New Orleans—received Hazard Mitigation grants. In addition to providing assistance through SRF funding, EPA has a policy—the Combined Sewer Overflow Policy—that could help cities with declining populations. The policy, adopted in 1994, allows a city or utility to extend its implementation schedule—the period of time it has to build the necessary infrastructure to control combined or sanitary sewer overflows—under consent decrees entered into with EPA or the state, or administrative orders issued by EPA or state permitting authorities. An extended implementation schedule spreads the costs of planned infrastructure projects over time and helps make the wastewater rate increases required to pay for the infrastructure projects more affordable for a utility and its customers. EPA’s financial capability assessment guidance, issued in 1997, uses a two-phase approach to assess a city’s or utility’s financial capability based on (1) the combined impact of wastewater and combined sewer overflow control costs on individual households (residential indicator) and (2) the socioeconomic and financial conditions of a city or utility (financial capability indicator). Each city or utility is ranked as low, medium, or high for the residential indicator and weak, midrange, or strong for the financial capability indicator. The combined indicators show the overall financial burden—low, medium, or high—resulting from the estimated costs for the planned infrastructure projects. 
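The two-phase combination described above can be sketched as a small lookup matrix. Our report establishes only one cell directly: a high residential indicator paired with a weak or midrange financial capability indicator yields a high burden. The remaining cell values below are illustrative assumptions and should be checked against EPA's 1997 guidance rather than taken as the official matrix.

```python
# Sketch of EPA's two-phase financial capability assessment. Only the
# high-burden cells for (weak, high) and (midrange, high) are stated in
# the report; the other cells are illustrative assumptions.
BURDEN_MATRIX = {
    ("weak",     "low"):    "medium",
    ("weak",     "medium"): "high",
    ("weak",     "high"):   "high",      # stated in the report
    ("midrange", "low"):    "low",
    ("midrange", "medium"): "medium",
    ("midrange", "high"):   "high",      # stated in the report
    ("strong",   "low"):    "low",
    ("strong",   "medium"): "low",
    ("strong",   "high"):   "medium",
}

def financial_burden(capability_indicator, residential_indicator):
    """Combine the two phase-one rankings into an overall burden level."""
    return BURDEN_MATRIX[(capability_indicator, residential_indicator)]

# A utility whose households face high cost impacts but whose own
# finances are weak falls in the high-burden category.
print(financial_burden("weak", "high"))  # → high
```

Under the guidance, the resulting burden category then drives the length of the implementation schedule, with high-burden cities generally allowed the longest schedules.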
Cities or utilities with a high financial burden—those with a high residential indicator and a low-to-midrange financial capability indicator—are generally expected to implement combined sewer overflow control projects within 15 to 20 years of the consent decree. EPA and states can also apply this two-phase approach to determine appropriate implementation schedules for cities or wastewater utilities to address other Clean Water Act requirements, including control of sanitary sewer overflows. According to EPA officials, implementation schedules can be negotiated past 20 years if infrastructure projects are large and complex, or if the necessary user rate increases put too great a burden on customers with incomes below the median household income. EPA issued a memorandum in 2012 that provided guidance on developing and implementing effective integrated planning for cities and utilities building wastewater and stormwater management programs. According to the 2012 memorandum, under integrated planning, cities and utilities prioritize the wastewater and stormwater infrastructure projects that should be completed first. According to EPA documents, cities and utilities may use integrated planning to prioritize required wastewater and stormwater projects over a potentially longer time frame, helping to keep customer rates more affordable. Building on its 2012 memorandum, EPA issued a memorandum in 2014 to provide greater clarity on the flexibilities built into the existing financial capability guidance. The 2014 memorandum identifies key elements EPA uses in working with cities and utilities to evaluate how their financial capability should influence implementation schedules in both permits and enforcement actions. It also includes examples of additional information that may be submitted to provide a more accurate and complete picture of a city’s or utility’s financial capability. 
Overall, 9 of the 14 utilities providing wastewater services to the 10 cities we reviewed are under consent decrees entered into with EPA or administrative orders from a state agency to address combined sewer overflows or sanitary sewer overflows, according to EPA, state, and utility officials. Specifically, according to these officials, 7 utilities are under consent decrees or administrative orders to address combined sewer overflows; some of these decrees or orders also require the utilities to address sanitary sewer overflows. The remaining 2 utilities are under consent decrees to address sanitary sewer overflows, according to these officials. According to utility representatives we interviewed and documents we reviewed, these 9 utilities or the cities they serve expect to spend an estimated $10.5 billion to comply with consent decrees and administrative orders to enforce Clean Water Act requirements. According to EPA officials, 4 utilities we reviewed had consent decrees with EPA that fell within the high financial burden category and had implementation schedules extending more than 15 years: Pittsburgh’s implementation schedule was for 19 years, Youngstown’s for 31 years, St. Louis’s for 23 years, and New Orleans’ for 27 years. One of the 10 cities we reviewed, New Orleans, had a consent decree with integrated planning, and officials from 2 additional cities said that they were discussing the use of integrated planning with EPA. We provided a draft of this report to the Environmental Protection Agency, the Economic Development Administration, and the Department of Housing and Urban Development for review and comment. None of the agencies provided written comments or stated whether they agreed with the findings in the report, but all three agencies provided technical comments that we incorporated, as appropriate. 
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Administrator of the Environmental Protection Agency, the Administrator of the Economic Development Administration, the Secretary of Housing and Urban Development, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix IX. Our objectives were to examine (1) what is known about the economic characteristics of midsize and large cities with declining populations and their drinking water and wastewater infrastructure needs; (2) strategies that selected midsize and large cities with declining populations and their associated utilities used to address their infrastructure needs and the affordability of their drinking water and wastewater rates; and (3) what existing federal programs and policies, if any, could assist midsize and large cities with declining populations, and their associated utilities, in addressing their water infrastructure needs. To examine what is known about the economic characteristics of midsize and large cities with declining populations, we reviewed relevant studies and interviewed experts about cities that have experienced population declines and water and wastewater infrastructure needs. 
We identified the studies and experts through a literature review and referrals from Environmental Protection Agency (EPA) officials, representatives of water and wastewater industry associations, and academic and nonprofit experts. We contacted nine experts—individuals in academia and the nonprofit sector with expertise in water and wastewater utility management, finance, engineering, and urban planning. For this report, we used U.S. Census Bureau and National League of Cities definitions for midsize cities—those with populations from 50,000 to 99,999—and large cities—those with populations of 100,000 and greater. We identified the number and size of midsize and large cities with sustained population declines by analyzing decennial census population data for midsize and large cities from 1980 through 2010, the longest period for which we found the decennial census data to be reliable, based on our review of the consistency of data coding over time. To describe the economic and demographic characteristics of cities with declining populations, we analyzed the Census Bureau’s American Community Survey 5-year estimates for 2010 through 2014, which according to the bureau contain the most precise and current data available for cities and communities of all population sizes. We analyzed the survey data for all cities with populations of 50,000 or more and compared the data for cities with declining populations to those for cities that did not experience a decline during this period. To do this, using decennial census population data, we created categories of decline and growth (9.9 percent or less, 10 to 19.9 percent, and 20 percent or greater) in order to have a minimum number of cities within each category. 
To determine whether cities with declining populations experienced significantly greater levels of economic distress than cities with increasing populations, we performed statistical comparisons of all key economic and demographic characteristics from the American Community Survey data (5-year estimates for 2010 through 2014), following American Community Survey methodology on statistical tests. Specific economic and demographic characteristics that we analyzed included the following: poverty rate, unemployment rate, median household income, per capita income, percentage of vacant housing, median housing value, median year housing stock was built, percentage of households receiving Supplemental Nutrition Assistance Program benefits, percentage of white residents, percentage of African American residents, percentage of residents of other races, percentage of residents over 65 years old, percentage of residents with at least a high school diploma, and percentage of residents with a bachelor’s degree. We reviewed Census Bureau documentation for data collection and quality and determined the decennial data to be sufficiently reliable for our purposes of categorizing cities based on the extent of population growth or decline, and the American Community Survey data to be sufficiently reliable for analyzing economic and demographic data on midsize and large cities. Because the American Community Survey 5-year data followed a probability procedure based on random selections, the sample selected is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 90 percent confidence interval. This is the interval that would contain the actual population value for 90 percent of the samples we could have drawn. 
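The statistical comparisons described above follow the Census Bureau's published approach for American Community Survey estimates: each 90 percent margin of error implies a standard error, and two independent estimates differ significantly when the z-statistic for their difference exceeds 1.645. The sketch below illustrates that calculation; the estimates and margins of error used in the example are hypothetical, not values from our analysis.

```python
import math

Z90 = 1.645  # critical value for a 90 percent confidence level

def standard_error(moe_90):
    """Convert a published 90 percent margin of error to a standard error."""
    return moe_90 / Z90

def significantly_different(est1, moe1, est2, moe2):
    """Test whether two independent ACS estimates differ at the 90 percent
    confidence level, per the Census Bureau's formula: the difference is
    significant when |est1 - est2| / sqrt(se1^2 + se2^2) exceeds 1.645."""
    se_diff = math.sqrt(standard_error(moe1) ** 2 + standard_error(moe2) ** 2)
    return abs(est1 - est2) / se_diff > Z90

# Hypothetical poverty rates: 28.0% (MOE ±1.5) vs. 14.0% (MOE ±1.2).
print(significantly_different(28.0, 1.5, 14.0, 1.2))  # → True
```

The same formula applies to any pair of independent estimates in the comparison, whether percentages (such as poverty rates) or dollar amounts (such as median household income).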
All 5-year American Community Survey percentage estimates presented have margins of error at the 90 percent confidence level of plus or minus 10 percentage points or less, unless otherwise noted. All non-percentage estimates presented using the 5-year American Community Survey have margins of error within 20 percent of the estimate itself, unless otherwise noted. As part of our work for all three objectives, we selected a nonprobability sample of 10 cities that experienced the greatest percentages of population decline from 1980 through 2010 for further review. Using our analysis of decennial census population data from 1980 through 2010, we selected the 10 cities with the greatest declines in population for that period, without repeating cities in any state to allow for geographic distribution. We also selected for size, choosing 5 midsize and 5 large cities. The 10 cities, their 2010 populations, and their percentage declines in population are listed in table 6. This sample of cities is not generalizable to all cities that experienced population declines over this period; however, it highlights the issues faced by a geographically diverse range of cities and corresponding utilities that have experienced the greatest population losses in recent decades. To analyze information on water and wastewater needs for cities with declining populations, we compared national drinking water and wastewater needs data that EPA collected with information on needs we collected for the utilities providing services to the 10 cities we selected. Because cities may be served by multiple utilities, our sample included the 14 utilities from the 10 selected cities—6 that were responsible for both water and wastewater infrastructure, 4 that were responsible solely for drinking water infrastructure, and 4 that were responsible solely for wastewater infrastructure. 
We obtained EPA’s data on drinking water infrastructure needs from its 2011 Drinking Water Infrastructure Needs Survey and Assessment and wastewater infrastructure needs from its 2012 Clean Watersheds Needs Survey. EPA obtains these data through surveys of the 50 states, the District of Columbia, and U.S. territories, which for the drinking water needs assessment involves collecting information from a sample of drinking water systems in each state. We assessed the reliability of these data by reviewing the methodologies that EPA used to conduct these surveys and by interviewing EPA officials to understand the appropriate use of the data. We determined that both the drinking water and wastewater needs identified at the national, or aggregate, level were sufficiently reliable for the purpose of reporting national needs estimates. However, the fact that some utilities serve multiple cities and counties, and that some cities are served by multiple utilities or multiple treatment facilities, prevented us from uniquely matching utilities and treatment facilities to cities. Therefore, we could not estimate the total drinking water and wastewater needs of utilities in cities with declining populations and instead identified the water and wastewater needs for each of the 14 utilities for the cities in our sample. To do this, we analyzed relevant utility documents, such as capital improvement plans and master plans, and conducted interviews with utility representatives, including executive directors, finance directors, and operations managers, about their water and wastewater infrastructure condition, their greatest infrastructure needs, and their top challenges in addressing their infrastructure needs. We also reviewed EPA wastewater needs data for utilities serving the 10 selected cities, which we found sufficiently reliable to report at the individual utility level based on reviews of documentation and interviews with knowledgeable EPA officials. 
However, we were unable to report EPA drinking water needs data at the individual utility level for the 10 selected cities because of the way that EPA and states collect and extrapolate the data: EPA uses a statistical cost modeling approach to calculate state and national estimates using local data; as a result, the local data may be a modeled result and not actual reported data. To examine the strategies that selected midsize and large cities with declining populations, and their associated utilities, used to address their infrastructure needs, we reviewed relevant reports and studies on utility management and interviewed city and utility representatives for the 10 cities and 14 utilities in our sample. We conducted semistructured interviews with representatives from the 12 of the 14 drinking water and wastewater utilities that were willing to speak with us to gather information on changes in populations served and effects of declining population on system operations, if any; infrastructure needs and condition; financing and management strategies; challenges in managing water and wastewater infrastructure; and their perspectives on the research and assistance needed for utilities serving cities with declining populations. We also collected capital improvement plans, master plans, recent rate studies, and financial statements for fiscal years 2012 through 2014, which we analyzed to determine infrastructure condition, short-term and long-term capital needs, rate structure changes and rate increases, and changes in operating revenues and expenses. To help ensure that we collected the correct information for each city and utility, we clarified our understanding of these documents through interviews with utility officials, follow-up correspondence, and review of draft materials provided by utility officials. 
Nine of the 10 selected cities are under orders from EPA or the state to correct combined sewer overflows or sanitary sewer overflows (which result in discharge of raw sewage to streams and surrounding areas), or both, from their systems. For these cities, we collected any consent decrees they had with EPA and long-term plans to address their combined sewer overflow controls. We also collected written responses to questions from city officials on basic water and wastewater system information, including estimated population served, number of customer accounts and types of customers (e.g., residential versus industrial), average residential water rate, and billing collections information. For the 2 utilities that declined an interview with us, we reviewed publicly available documents and relevant websites. For all 10 cities, we interviewed city planning officials about population and demographic trends, land use planning, infrastructure planning and strategies, access to funding and resources, and challenges they face in managing their cities with declining populations and revenues. We conducted site visits to 6 of the 10 selected cities, considering geographic distribution and size of the cities, and conducted interviews with the remaining city and utility officials by telephone. Specifically, we visited Gary, Indiana; Youngstown, Ohio; Detroit, Michigan; New Orleans, Louisiana; Niagara Falls, New York; and Macon, Georgia. During site visits, we also interviewed city planning officials; water utility representatives; and relevant stakeholders, including officials from other city departments, such as representatives of Gary’s Department of Environmental Affairs and Green Urbanism and New Orleans’s Resiliency Office. We also met with representatives of nongovernmental organizations working with cities and utilities on water and wastewater infrastructure issues, including the Center for Community Progress, Detroit Future City, and the Greater New Orleans Foundation. 
As part of our review of utilities and the strategies they used, we reviewed financial statements for fiscal years 2012, 2013, and 2014 for all 14 utilities. Specifically, we reviewed total operating revenues and total operating expenses, excluding depreciation, over these 3 years. We then used these data to calculate several basic indicators of utility financial health. We calculated indicators that reflect each utility’s ability to pay its long-term debt, sufficiency to cover operating costs and asset depreciation, the remaining years of the utility’s asset life, and its long-term debt per customer. We selected these indicators based on our review of indicators used by rating agencies, including Moody’s and Fitch, two agencies that rate utilities and the utility sector, and interviews with utility finance experts that EPA identified. We then compared these indicators to scoring systems and median indicators for water and wastewater utilities gathered by Moody’s and Fitch, where available, to help describe the extent of existing long-term debt, strength of a utility’s financial condition, and potential future capital needs. In addition, to gauge the financial burden of water and wastewater utility bills for median-income households and low-income households in each of our 10 selected cities, we compared the average annual utility bill as a share of income to levels EPA and the U.S. Conference of Mayors have estimated are affordable. We calculated rates as a share of income in the 10 selected cities using the average residential rate information reported by the cities’ utilities and the median household income and income for the 20th percentile for that city reported in the American Community Survey data (5-year estimates for 2010 through 2014). 
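The share-of-income comparison described above can be sketched as a short computation. This is an illustrative sketch with hypothetical figures, not GAO's actual analysis; the affordability threshold is a placeholder parameter rather than the specific level EPA or the U.S. Conference of Mayors estimated.

```python
# Illustrative sketch of the affordability measure described above:
# the average annual utility bill as a share of household income.
# All figures and the threshold are hypothetical placeholders.

def bill_share_of_income(avg_monthly_bill, annual_income):
    """Annual utility bill as a percentage of annual household income."""
    annual_bill = avg_monthly_bill * 12
    return 100.0 * annual_bill / annual_income

def is_affordable(share_pct, threshold_pct):
    """Compare a computed share against a chosen affordability threshold."""
    return share_pct <= threshold_pct

# Hypothetical figures: a $45/month bill against a $27,000 annual income.
share = bill_share_of_income(avg_monthly_bill=45.0, annual_income=27000.0)
# share is 2.0 (percent of income)
```

The same function can be applied with median household income and 20th-percentile income to contrast the burden on typical and low-income households.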
To examine the federal programs and policies that could be used by midsize and large cities with declining populations, and their associated utilities, to help address their water infrastructure needs, we reviewed relevant laws, regulations, and policies of the federal agencies that fund water and wastewater infrastructure needs. To identify the federal programs, we used our past reports that identified federal funding for water and wastewater infrastructure. Specifically, we reviewed funding information and eligibility requirements for the following six federal programs: EPA’s Drinking Water State Revolving Fund (SRF) program, EPA’s Clean Water SRF program, the Department of Housing and Urban Development’s (HUD) Community Development Block Grant program, the Economic Development Administration’s Public Works program, and the Federal Emergency Management Agency’s (FEMA) Public Assistance and Hazard Mitigation Grant Programs. Because we found that none of the programs was specifically designed to assist cities with declining populations, we reviewed program eligibility requirements to determine whether funding assistance was awarded based on the cost of infrastructure projects and a project user’s ability to pay for the projects. Under the Drinking Water and Clean Water SRF programs, states establish affordability criteria for eligibility to receive additional subsidization, and so we also reviewed states’ intended use plans, the plans they develop annually to identify candidates for SRF loans. We also interviewed agency officials from EPA, HUD, and the Economic Development Administration about the programs, and obtained information on FEMA’s programs from another GAO team. For each federal funding program we reviewed, we collected funding data for water and wastewater infrastructure projects from federal fiscal years 2010 through 2015, to the extent the data were available. 
Specifically, we reviewed congressional appropriations and congressional budget justifications for each federal agency to determine the total available funding for each program. To determine expenditures for water and wastewater infrastructure projects, we reviewed EPA’s National Information Management System reports; HUD’s Community Development Block Grant expenditure reports; the Economic Development Administration’s annual reports to Congress; and data provided by FEMA from its Integrated Financial Management Information System. To assess the reliability of the data, we reviewed documentation and gathered information from knowledgeable agency officials about the reliability of the data and found them to be sufficiently reliable to characterize overall national expenditures. In addition to national data, we gathered information from our 10 selected cities and from 12 of the 14 drinking water and wastewater utilities on federal, state, and other funding they received to help address their water and wastewater infrastructure needs from state fiscal years 2010 through 2015. In reviewing policies of the six federal agencies that could help cities and utilities address their water and wastewater needs, we identified EPA’s Combined Sewer Overflow Control policy as one policy that could help wastewater utilities in cities with declining populations address their needs. Specifically, the policy allows a city or utility to phase in combined sewer overflow controls over time, which helps to keep customers’ rates affordable. We reviewed EPA’s policy, first issued in 1994 and updated in 2012 and 2014, to determine how the policy could help cities with declining populations and their wastewater utilities keep wastewater rates affordable. Nine of the 10 cities we reviewed had wastewater utilities under consent decrees or administrative orders to comply with specified Clean Water Act requirements. 
These include 7 utilities under consent decrees or administrative orders requiring them to address combined sewer overflows (some of these utilities are also required to address sanitary sewer overflows) and 2 utilities under consent decrees requiring them to address sanitary sewer overflows, according to EPA, city, and utility officials. We collected information from these cities and their utilities on the use of extended implementation schedules and reviewed the consent decrees filed in federal court or administrative orders, and the long-term control plans that the cities developed to correct problems, to the extent the documents were available. We obtained information from city and utility officials on the estimated costs to comply with the consent decrees and administrative orders. We also obtained and reviewed EPA’s list of cities that had consent decrees with extended implementation schedules. We conducted this performance audit from July 2015 to September 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides economic and demographic characteristics for the 10 cities in our review using the U.S. Census Bureau’s American Community Survey 5-year estimates, 2010 through 2014, the most recently available data as of July 2016. Table 7 provides the economic characteristics of the 10 cities that we selected for review. Table 8 provides demographic characteristics for the 10 cities that we selected for review. This appendix presents data on general system characteristics and infrastructure needs of drinking water utilities serving 10 selected cities with declining populations (see table 9). 
Data were compiled from written responses and oral responses from drinking water utility representatives, annual reports, planning documents, and capital improvement plans, when available. This appendix presents data on general system characteristics and infrastructure needs of wastewater utilities serving 10 selected cities with declining populations (see table 10). Data were compiled from written responses and oral responses from wastewater utility officials; annual reports; planning documents; capital improvement plans; and the Environmental Protection Agency’s Wastewater Needs Survey, when available. This appendix presents data on operating revenues and expenses for the 14 drinking water and wastewater utilities serving the 10 cities with declining populations that we selected for review (see table 11). Data are compiled from financial statements from fiscal years 2012 through 2014. In addition, information on the frequency of rate increases and on rate increases from 2012 through 2014 is provided. No single indicator or set of indicators is definitive in describing a utility’s financial condition. A utility’s financial condition is driven by financial indicators that reflect the financial strength of its operations, along with other primary factors, such as the size and health of the system, its service area, the state laws, municipal ordinances, and charters governing its management, the strength of its rate management, and its regulatory compliance. The three major rating agencies—Moody’s, Standard & Poor’s, and Fitch—use many and varying quantitative and qualitative financial indicators to evaluate a utility’s financial condition and associated bond rating. This appendix contains selected financial indicators for utilities serving 10 selected cities with declining populations. The indicators, shown in table 12, were calculated using data from the utilities’ fiscal year 2014 financial statements. 
These indicators were selected to reflect current and future financial condition, considering current and future debt to address infrastructure needs. A description of each indicator and its method of calculation follows. Debt service coverage ratio is a measure of a utility’s ability to pay its long-term debts. This financial indicator is a key measure in evaluating a utility’s revenue system and is used by all three rating agencies. According to the agencies, a debt service coverage ratio greater than 1.0 indicates that the utility has additional revenue available to cover additional debt payments, if needed. The magnitude by which net revenues are sufficient to cover additional debt, or debt service, indicates the utility’s margin for tolerating business risks or declines in demand while still assuring repayment of debt. For example, a higher debt service coverage level indicates greater flexibility to withstand customer resistance to higher rates. A debt service coverage ratio less than 1.0 indicates that the utility has insufficient revenues to make annual principal and interest payments on long-term debt. Formula: Annual net operating revenues (calculated by subtracting total operating expenses, excluding depreciation, from total operating revenues) divided by the annual principal and interest payments (on all long-term debt). Better operating ratio is a measure of a utility’s ability to raise revenues to pay for its operating costs, including depreciation of existing infrastructure. Including depreciation means that a utility’s ability to replace its infrastructure, or capital assets, as they depreciate is also part of the calculation. A better operating ratio greater than 1.0 indicates that the utility has revenues sufficient to cover operation and maintenance expenses, as well as the cost of replacing current capital assets. Formula: Total operating revenues divided by the total operating expenses (including depreciation). 
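The two formulas above can be expressed as a short computational sketch. The dollar figures below are hypothetical examples, not data from the utilities GAO reviewed.

```python
# Sketch of the two indicators defined above. All dollar figures are
# hypothetical examples, not data from any utility's financial statements.

def debt_service_coverage_ratio(op_revenues, op_expenses_excl_depr, annual_debt_service):
    """Net operating revenues divided by annual principal and interest payments."""
    net_revenues = op_revenues - op_expenses_excl_depr
    return net_revenues / annual_debt_service

def better_operating_ratio(op_revenues, op_expenses_incl_depr):
    """Total operating revenues divided by total operating expenses, with depreciation."""
    return op_revenues / op_expenses_incl_depr

# Hypothetical utility: $50M revenues, $35M expenses before depreciation,
# $10M depreciation, $12M in annual principal and interest payments.
dscr = debt_service_coverage_ratio(50e6, 35e6, 12e6)   # (50 - 35) / 12 = 1.25
bor = better_operating_ratio(50e6, 35e6 + 10e6)        # 50 / 45 ≈ 1.11
```

In this hypothetical case both ratios exceed 1.0, indicating revenues sufficient to cover debt service and to fund replacement of depreciating assets.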
Remaining years of useful asset life is a measure of the quality of existing capital assets and overall asset condition. Formula: Total asset useful life (calculated as asset value divided by annual depreciation) minus the age of the asset in years (calculated as total accumulated depreciation divided by annual depreciation). Long-term debt per customer account is a measure of the average debt burden per ratepayer. Utilities are taking on more debt than they have in previous years, according to a Water Research Foundation study. Fitch’s 2016 Water and Sewer Medians report also indicates that median long-term debt per customer for rated utilities increased by 84 percent over the 10 years from 2007 through 2016. Formula: Long-term debt divided by the total number of utility customers (for a combined utility, the aggregate number of water and sewer accounts is used). Recent bond rating is an assessment by a rating agency of a utility’s ability to repay new debt, using all the quantitative and qualitative information that the agency has gathered on the utility’s financial and operating circumstances. A rating is derived from quantitative factors, such as values of financial indicators of past financial condition, and from forecasts of future financial performance. It also depends on qualitative factors, such as utility management’s success in rate setting, complying with environmental regulations, budgeting for annual expenditures, and planning for future capital spending. In addition, a utility’s rating is affected by the rate covenants and debt service reserve requirements it has agreed to in order to issue bonds. This appendix presents data on water and wastewater rates and billings collection information for the 14 utilities we reviewed serving 10 selected cities with declining populations (see table 13). Data were compiled from data and information collected from utility officials and American Community Survey data. 
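The remaining-asset-life and debt-per-customer formulas described above can be sketched similarly, again with hypothetical inputs rather than figures from the utilities reviewed.

```python
# Sketch of the remaining-useful-life and long-term-debt-per-customer
# formulas described above. All inputs are hypothetical examples.

def remaining_useful_life_years(asset_value, accumulated_depr, annual_depr):
    """Total useful life (asset value / annual depreciation) minus asset age
    (accumulated depreciation / annual depreciation), in years."""
    total_life = asset_value / annual_depr
    age = accumulated_depr / annual_depr
    return total_life - age

def long_term_debt_per_customer(long_term_debt, customer_accounts):
    """Average debt burden per ratepayer; for a combined utility, water and
    sewer accounts are aggregated."""
    return long_term_debt / customer_accounts

# Hypothetical system: $400M of assets, $250M accumulated depreciation,
# $10M annual depreciation; $120M of long-term debt across 60,000 accounts.
years_left = remaining_useful_life_years(400e6, 250e6, 10e6)  # 40 - 25 = 15 years
debt_per_acct = long_term_debt_per_customer(120e6, 60000)     # $2,000 per account
```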
This appendix describes the use of five cost control strategies by the 14 water and wastewater utilities providing service to the 10 cities with declining populations that we reviewed. The five strategies are rightsizing to meet current demands (i.e., reducing treatment capacity or decommissioning water lines and sewer lines in vacant areas), major reorganization, expanding the utility’s customer base, public-private partnerships, and asset management. (See table 4 for the corresponding summary.) Three of the 14 utilities we reviewed have undertaken rightsizing. Representatives we interviewed from 2 of those utilities—Detroit Water and Sewerage Department and Gary Sanitary District—said that they were considering large-scale rightsizing of their water infrastructure to more appropriately meet current demands. According to Environmental Protection Agency (EPA) reports, rightsizing can potentially improve the overall efficiency of the system and reduce long-term maintenance costs. Detroit officials said that they were planning to downsize their water treatment capacity from 1,720 to 1,040 million gallons per day to address the reduced water demand experienced in recent years. According to its 2015 updated water master plan, downsizing water treatment capacity to align with projected water demand—which declined by 32 percent from 2000 through 2014, in part because of population decline in the region—will result in a life cycle cost savings of about $450 million. Detroit is also investigating selective retirement of water pipelines in vacant areas of the city as part of a long-term strategy to reduce system renewal and rehabilitation costs. 
Similarly, according to city officials and a utility representative, the city of Gary, in collaboration with the Gary Sanitary District, was in the process of developing a new land use plan and city rezoning that will identify areas appropriate for decommissioning services, including wastewater services, to some neighborhoods with high vacancies. As of November 2015, of approximately 13,000 blighted properties in Gary, about 8,000 were vacant and occupied large portions of neighborhoods on the periphery of the city, according to city planning officials we interviewed. According to a utility representative we interviewed, some areas in the city were in obvious need of rightsizing, and the utility had already shut off water and wastewater service to some streets and city blocks. Many of the utility representatives we interviewed told us that rightsizing was not practical or feasible, which is consistent with the findings from several studies and EPA reports on rightsizing that we identified. For example, the representatives told us that they did not have entirely vacant blocks that would make decommissioning service lines possible—usually a few occupied houses remained. In addition, water and sewer lines must often be kept to maintain service to remaining houses that are further away. Utility and city planning officials we interviewed also noted the political challenges associated with any displacements necessary to decommission water or wastewater services to a neighborhood, or to reduce water infrastructure capacity in a way that might limit growth in the future. As part of considering rightsizing their infrastructures, 5 wastewater utilities we reviewed—Detroit Water and Sewerage Department and Gary Sanitary District and 3 other wastewater utilities we reviewed—indicated that they have incorporated in their plans, or were considering using, green infrastructure to help reduce sewer overflows. 
Green infrastructure uses a range of controls, such as vegetated areas, stormwater collection, or permeable pavement, to enhance infiltration, evapotranspiration, or reuse of stormwater on the site where it is generated. The use of green infrastructure can help reduce the amount of stormwater that enters the sewer system, preventing sewer overflow events, and is a potentially less costly approach to helping control combined sewer overflows, according to Natural Resources Defense Council reports. Some utility representatives and city planning officials we interviewed said that green infrastructure is an opportunity for improving blighted and vacant areas within their cities. The 10 cities with declining populations we reviewed had housing vacancy rates averaging 21 percent, based on our analysis of American Community Survey data, 5-year estimates 2010 through 2014. According to a study we reviewed, placement of green infrastructure on vacant properties can provide environmental, social, and economic benefits and help address problems created by vacant housing, which, when left undemolished, contributes to blight, crime, and the further abandonment of neighboring properties and adds debris to the sewer system, contributing to the combined sewer overflow problem. All 5 utilities that had incorporated green infrastructure in their plans to help control sewer overflows, or were considering using green infrastructure, were collaborating with city planners and others on implementation, and 3 of the 5 utilities collectively committed more than $150 million for green infrastructure, including funding for demolitions in areas targeted for green infrastructure, according to planning documents we reviewed. 
Challenges to implementing green infrastructure approaches, according to some representatives from utilities and city planning officials, include establishing responsibilities for and funding of maintenance of green infrastructure; proving the effectiveness of green infrastructure approaches; and breaking down silos among organizations (e.g., utilities, city departments, and community organizations) that may benefit from supporting green infrastructure. Funding for demolition is also needed to facilitate the repurposing of these properties for green infrastructure and to address the backlog of properties on current city demolition lists, according to a few of the city officials we interviewed. Representatives we interviewed from some of the 14 utilities in our review described undertaking a major reorganization to reduce costs and improve management efficiencies, including creating new organizations to manage water and wastewater infrastructure, making major staff reductions, and pursuing optimization efforts, such as revised organizational structures and job descriptions, within the existing organization. Specifically, 5 utilities we reviewed undertook major reorganizations. Three of the reorganized utilities created entirely new organizations, independent from their city governments, to manage drinking water and wastewater infrastructure in cases where the cities faced financial challenges. For example, in September 2014 the city of Detroit and surrounding counties entered into an agreement to establish the Great Lakes Water Authority to operate the water supply and sewage disposal systems, which were owned by the city of Detroit and operated by the Detroit Water and Sewerage Department. Under the agreement, the Detroit Water and Sewerage Department will operate and maintain the water and sewer lines that provide service to customers within the city boundaries. 
In addition, the Great Lakes Water Authority will pay the city of Detroit $50 million annually to lease the regional facilities it operates; the Detroit Water and Sewerage Department will use the funds for capital improvements to city-managed infrastructure, among other things. The Great Lakes Water Authority will also dedicate 0.5 percent of revenues annually to fund a regional water assistance program for low-income residents throughout the authority’s service area. Two of the 14 utilities, including one of those that reorganized, downsized staffing by about 30 percent and 40 percent, respectively, after reorganizing to reduce operational costs and create efficiencies. A fifth utility created a new organizational structure, among other things, to facilitate alignment of work processes between the utility and the city to more efficiently and cost-effectively replace water, sewer, and drainage infrastructure alongside the rebuilding of roads. By expanding their customer bases, utilities can take advantage of excess treatment capacity to generate additional revenue. They can also take advantage of economies of scale to spread their costs across a greater number of customers, resulting in lower costs per customer and a stronger financial condition for the utility. Half of the utilities (7 of 14) we reviewed already served a regional area, with a correspondingly larger customer base, well beyond the boundaries of the cities that they serve, according to representatives we interviewed—some provide service countywide, some provide service across multiple counties, and a few provide service statewide. According to representatives we spoke with, some (5 of 14) of the utilities we reviewed were looking to expand their customer bases by widening their service areas (e.g., regionalizing), by attracting commercial or industrial businesses to locate within their existing service areas, or both. Specifically, 2 utilities were actively seeking opportunities to expand their service areas. 
These 2 utilities had taken steps such as setting aside funding to support water and sewer packages and benefits for businesses or encouraging business placement within their service areas. One utility was using both approaches to expand its customer base. Many utilities—including some that were already taking steps to expand their customer bases—noted various limitations to doing so. For instance, a few utilities noted competition from other cities trying to attract industry and commercial businesses. In addition, surrounding communities may already have their own water and wastewater infrastructure and utilities, so expanding service areas means convincing existing utilities and their customers of the benefits of receiving services from another utility. For example, one utility representative told us that the utility’s board was discussing the possibility of providing service to a neighboring area, but the cost of connection is $12 million, more than the neighboring city would like to pay. A representative from another utility said that it had attempted to consolidate with neighboring communities but that there was a lack of interest on the part of other communities. Of the 14 utilities we reviewed, few used public-private partnerships as a strategy to help address infrastructure needs. Such partnerships typically involve a government agency contracting with a private partner to construct, operate, maintain, or manage a facility or system, in part or in whole, that provides a public service. Public-private partnerships can take different forms short of a private company purchasing the utility and its facilities, including long-term contractual agreements between a public and a private entity to provide day-to-day operational or management services of facilities or contracting for management consulting services. Of the 14 utilities we reviewed, 4 had some experience with public-private partnerships. 
One utility had—over the last 25 years—an ongoing contract with a private company to manage the day-to-day operations of its wastewater facility. In the past, another utility had a similar contract with a private company to manage daily operations of its wastewater facility. The third utility hired a private company to work with the utility’s management for several years to identify cost reduction opportunities. Finally, according to the 2015 annual report of its parent company, 1 of the 2 privately owned utilities we reviewed had a series of agreements with public entities for the construction and financing of utility infrastructure, which was leased to its public partners. Of the remaining 10 utilities that did not have experience with public-private partnerships, a few shared varying perspectives on public-private partnerships. Representatives from 1 said that the utility was open to using the strategy. However, representatives from 2 others said that their utilities preferred to be self-reliant because of public perception that private contractors would not take as good care of the facility as the public utility. In addition, representatives from 1 of the 2 privately owned utilities we reviewed highlighted the benefit to the community of enhanced economies of scale and additional resources provided by a large private utility, such as its parent company, including investor support and shared laboratories for water quality testing. Of the 12 utilities whose representatives we interviewed, representatives from 4 utilities told us that they had asset management systems in place. Asset management is a framework for providing the best level of service at the lowest appropriate cost and involves identifying and prioritizing assets for routine repair or replacement (versus emergency repair). 
It is a widely recognized tool used across a variety of sectors to manage physical assets, such as highways, machinery, and buildings; in the case of water and wastewater infrastructure, key assets are pipelines, tanks, pumps, and other facilities. Representatives from 1 of the 12 utilities we interviewed, Macon Water Authority, said that it had fully integrated the use of asset management in physical and financial management of the utility. Macon representatives said that they integrated information from their asset management program into a 10-year long-range planning model used to estimate needed income and revenue requirements to manage day-to-day operations, fund replacement of infrastructure, fund normal repairs, and fund maintenance and upgrades. The utility has done this, according to the representatives, while keeping rates low, and representatives acknowledged that receiving a $93.5 million grant from the Federal Emergency Management Agency to replace the utility’s drinking water treatment plant also helped to keep rates low. Representatives we interviewed from 7 of the remaining utilities said that they had partially implemented or were in the initial stages of developing asset management inventories and plans. A few utility representatives we spoke with acknowledged the value of the strategy in identifying priorities for spending. One utility did not have an asset management plan and was not developing one because, according to its officials, it tracks locations of breaks and other maintenance needs and focuses resources on repairing those. In addition to the contact named above, Susan Iott (Assistant Director), Mark Braza, John Delicath, Kaitlan Doying, Holly Halifax, John Mingus, Robert Sharpe, Jeanette Soares, Anne Stevens, Sara Sullivan, Kiki Theodoropoulos, and Swati Sheladia Thomas made key contributions to this report.
Many midsize and large cities throughout the United States, including in the Midwest and Northeast, have lost a substantial percentage of their population. These cities face the challenge of a corresponding decline in utility revenues from a loss of ratepayers, which makes it difficult to address their water infrastructure needs. Overall, water and wastewater utilities across the United States face substantial costs to maintain, upgrade, or replace aging and deteriorating infrastructure—approximately $655 billion for water and wastewater utilities over the next 20 years, according to EPA's most recent estimates. GAO was asked to review the water and wastewater infrastructure needs in midsize and large cities with declining populations. This report examines (1) the economic characteristics of such cities and their water and wastewater infrastructure needs; (2) strategies that selected cities and utilities have used to address their infrastructure needs and the affordability of their water and wastewater rates; and (3) what existing federal programs and policies, if any, could assist such cities in addressing their needs. GAO analyzed decennial census and American Community Survey data, relevant studies, and utility financial statements for 10 cities with the largest population declines from 1980 through 2010 and 14 water and wastewater utilities in those cities. GAO also reviewed laws, regulations, policies, and guidance for six federal programs; analyzed program and city and utility funding data; and interviewed agency and city officials and representatives from 12 of the 14 utilities. Midsize cities (with populations from 50,000 to 99,999) and large cities (with populations of 100,000 and greater) that have experienced a population decline are generally more economically distressed than growing cities. 
Specifically, GAO's review of American Community Survey data for 674 midsize and large cities showed that the 99 cities with declining population had higher poverty and unemployment rates and lower median income than cities with growing populations. Little research has been done about these cities' overall water and wastewater infrastructure needs, but the needs of the 10 midsize and large cities that GAO reviewed generally reflected the needs of cities nationally, as identified in needs assessments conducted by the Environmental Protection Agency (EPA). Water and wastewater utility representatives whom GAO interviewed described major infrastructure needs, including pipeline repair and replacement and wastewater improvements to control combined sewer overflows (i.e., wastewater discharges to streams and other water bodies during storms). Utilities for the 10 cities GAO reviewed used the strategy of raising rates to increase revenues to address water and wastewater infrastructure needs and used other strategies to address concerns about rate affordability for low-income customers. Most of the 14 utilities GAO reviewed raised rates annually to cover declines in revenues related, in part, to decreasing water use from declining populations, or to pay for rising operating and capital expenses. To help address rate affordability concerns, all of the utilities reviewed had developed customer assistance programs, a strategy to make rates more affordable, for example, by developing a payment plan agreeable to the customer and the utility. In addition, most utilities were using or had plans to use one or more cost-control strategies to address needs, such as rightsizing system infrastructure to fit current demands (i.e., reducing treatment capacity or decommissioning water or sewer lines in vacant areas). 
For example, as part of rightsizing, representatives GAO interviewed for 5 wastewater utilities said that they planned or were considering using vacant areas for green infrastructure (vegetated areas that enhance on-site infiltration) to help control stormwater that can lead to sewer overflows. As of June 2016, six federal programs and one policy could assist midsize and large cities with declining populations in addressing their water and wastewater infrastructure needs. Cities with declining populations may receive funding from the six programs, managed by EPA, the Economic Development Administration, the Department of Housing and Urban Development (HUD), and the Federal Emergency Management Agency, for such projects. For example, states can use a portion of EPA's Clean Water and Drinking Water State Revolving Funds to provide additional subsidies in the form of principal forgiveness or negative interest loans to cities that meet state affordability criteria, such as median household income. The Birmingham Water Works Board, one of the 14 utilities GAO reviewed, received $11.6 million from the Drinking Water State Revolving Fund in fiscal years 2010 through 2015, including $1.7 million with principal forgiveness to pay for green projects, such as water efficiency projects. GAO provided a draft of this report to EPA, the Economic Development Administration, and HUD for comment. The agencies provided technical comments that were incorporated, as appropriate.
VHA oversees VA’s health care system, which includes 153 VAMCs organized into 21 VISNs. Each VISN is charged with the day-to-day management of the VAMCs within its network. VA Central Office decentralized its budgetary, planning, and decision-making functions to the VISN offices in an effort to improve accountability and oversight of daily facility operations. However, VA Central Office maintains responsibility for monitoring and overseeing both VISN and VAMC operations.

Veterans may elect to have their dialysis treatments through VA or Medicare but cannot receive dialysis benefits from both simultaneously. In 2008, there were over 18,000 veterans enrolled in VA’s health care system diagnosed with ESRD who required dialysis treatments. Of these enrolled veterans, about two-thirds elected to receive their dialysis treatments through VA. However, VA treated less than half of these veterans in VAMC-based dialysis clinics because of capacity limitations and other factors, such as long distances veterans may have to travel to the VAMC. This limited internal capacity and other factors resulted in most of these veterans receiving their dialysis treatments through VA’s fee basis program. Through VA’s fee basis program, veterans can select any dialysis provider, as long as their chosen provider accepts VA’s established payment rate for dialysis treatment. Fee basis rates currently differ by VISN; however, several VISNs are paying for dialysis treatments through multi-VISN-negotiated agreements with dialysis providers. Under these agreements, per-treatment costs for these VISNs currently range from $248 to $310. Veterans who elect to have their dialysis treatments provided through VA—either in VAMCs or through the fee basis program—may not incur any out-of-pocket expenses.

In fiscal year 2008, the remaining one-third of veterans enrolled in VA’s health care system who were diagnosed with ESRD elected to have their dialysis treatments paid for by Medicare. These veterans can select any Medicare-certified dialysis provider that accepts Medicare payment, and there are few, if any, restrictions on this choice since all major dialysis providers accept individuals covered by Medicare. Medicare reimburses dialysis providers 80 percent of a specified per-treatment base bundled rate—about $230 in 2011—and beneficiaries or private insurance companies are responsible for the remaining 20 percent. For veterans whose dialysis treatments are paid for by Medicare, the remaining 20 percent may be an out-of-pocket expense that amounted to about $7,600 per year in 2008, because VA is not authorized to pay these out-of-pocket expenses incurred by veterans covered by Medicare.

In 2009, VA began developing the Dialysis Pilot to build its in-house capacity to provide dialysis treatments to veterans in response to several issues, including the following:

Rising numbers of veterans needing dialysis. A VA-funded study found that the number of veterans requiring dialysis treatments was projected to increase 6 percent from fiscal year 2008 to fiscal year 2015 and that the number of veterans receiving these dialysis treatments from community providers through the fee basis program was projected to increase 16 percent.

Rising costs of providing dialysis through the fee basis program. The same VA-funded study also found that VA’s fee basis per-treatment costs were projected to increase about 59 percent, from $337 per treatment in fiscal year 2008 to $535 per treatment in fiscal year 2015.

Unsuccessful efforts to achieve lower reimbursement rates with fee basis dialysis providers. 
Another VA-funded study found that if VA adopted Medicare rates, set by the Centers for Medicare & Medicaid Services (CMS), for outpatient dialysis treatments, it would reduce its dialysis fee basis expenditures by 39 percent, resulting in projected cost reductions of about $2 billion from fiscal year 2011 through fiscal year 2020. As a result of this study, VA Central Office instructed VISN directors to begin using Medicare rates as the prevailing reimbursement rate for fee basis dialysis treatments in 2009. However, the major fee basis dialysis providers did not agree to provide dialysis treatments to veterans through VA’s fee basis program at these reduced rates. As a result, VA continued to pay for these treatments according to previously established rates, which are typically higher than Medicare rates.

In response to these issues, VA Central Office charged VISN 6, a VISN with a significant volume of veterans who require dialysis treatments, with establishing a dialysis workgroup to evaluate VA’s options for dialysis treatment delivery. The Dialysis Workgroup—led by officials from VISN 6, with representatives from VA Central Office, the VHA Chief Business Office, and others with financial and clinical expertise—met in 2009 to discuss various options for providing dialysis care for veterans. This workgroup identified several options VA could take to build its internal capacity to provide dialysis treatments to veterans and to address the rising costs of dialysis care provided through the fee basis program, including (1) building dialysis units in leased space in communities surrounding select VAMCs, (2) purchasing modular dialysis units, (3) modifying existing space in selected VAMCs to expand or build dialysis units, and (4) negotiating pricing agreements with select dialysis providers to allow VAMCs to pay a lower rate for fee basis dialysis treatments. 
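The cost figures cited above lend themselves to a quick arithmetic check. The sketch below recomputes the projected 59 percent fee basis per-treatment cost increase and the Medicare coinsurance amounts from the rates reported in this section. The thrice-weekly treatment schedule is an assumption for illustration, not a figure from the report.

```python
# Back-of-the-envelope check of the cost figures cited in the report.
# Rates come from the text; the treatment schedule is an assumption.

# Projected fee basis per-treatment costs (VA-funded study).
fy2008_rate = 337.0   # dollars per treatment, fiscal year 2008
fy2015_rate = 535.0   # projected dollars per treatment, fiscal year 2015
increase = (fy2015_rate - fy2008_rate) / fy2008_rate
print(f"Projected fee basis cost increase: {increase:.0%}")  # 59%

# Medicare cost sharing on the approximate 2011 base bundled rate.
bundled_rate = 230.0          # approximate 2011 base bundled rate
coinsurance = 0.20 * bundled_rate
treatments_per_year = 3 * 52  # assumed thrice-weekly dialysis schedule
print(f"Coinsurance per treatment: ${coinsurance:.2f}")
print(f"Approximate annual coinsurance: ${coinsurance * treatments_per_year:,.0f}")
```

Under these assumptions the annual coinsurance comes to roughly $7,200, in the same range as the approximately $7,600 per year the report cites for 2008, which was calculated under the payment rules in effect before the 2011 bundled rate.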
In March 2010, after discussing these options and exploring potential solutions, the Dialysis Workgroup began designing the Dialysis Pilot as an effort to build VA’s capacity to provide dialysis treatments to veterans in VA-operated facilities and reduce fee basis costs. The Dialysis Workgroup projected that the Dialysis Pilot would result in a 5-year cost savings of about $33 million by operating four outpatient dialysis clinics that could each treat 48 veterans a week. The Dialysis Pilot was approved by the Under Secretary for Health in August 2010 and by the Secretary of Veterans Affairs in September 2010. The final four pilot locations selected by the Dialysis Workgroup were Durham and Fayetteville, North Carolina; Philadelphia, Pennsylvania; and Cleveland, Ohio. Each of these locations was provided approximately $2.5 million in start-up funding by VA Central Office to establish an outpatient dialysis clinic with 12 dialysis stations that could treat 48 veterans per week. VA Central Office expected this start-up funding would be repaid by the pilot locations. (See app. I for more information on the current status of each pilot location.) The pilot locations in Durham and Fayetteville, North Carolina, began treating veterans in June 2011. The Philadelphia, Pennsylvania, pilot location is scheduled to open in May 2012, and the Cleveland, Ohio, pilot location is scheduled to open in September 2012. There were a number of weaknesses in VA’s execution of the planning and early implementation phases of the Dialysis Pilot that collectively could limit the achievement of its goals. Specifically, weaknesses in pilot location selection, cost estimation practices, and cost savings calculations could hamper the Dialysis Pilot’s effectiveness. 
While the Dialysis Workgroup reported using several criteria to select the Dialysis Pilot locations and documented some of these criteria in the approval documents for the Dialysis Pilot, it did not document how these criteria were applied or whether it assessed all 153 VAMCs for potential inclusion in the Dialysis Pilot. According to GAO internal control standards, clearly documenting key information is necessary to ensure that appropriate internal controls for communicating and recording decision-making activities are in place.

According to Dialysis Workgroup officials, the Dialysis Workgroup began its pilot location selection process by identifying 13 potential pilot locations using several criteria, including (1) the number of veterans receiving outpatient dialysis treatments living within a 30-mile radius or a 30-minute drive of a VAMC, (2) a VAMC’s potential to achieve cost savings by operating a pilot location, and (3) the perceived level of dialysis-related clinical expertise available at each VAMC. Dialysis Workgroup officials told us that the final four pilot locations in Durham, Fayetteville, Philadelphia, and Cleveland were all ultimately selected because they had a high number of veterans receiving dialysis treatments and, in some cases, had access to high-quality clinical expertise. However, VA provided no documentation discussing how these criteria were applied to all 153 medical centers or why Durham, one of the final four pilot locations, was omitted from the list of 13 potential pilot locations when it clearly met these selection criteria. In addition, VA officials from the Dialysis Workgroup whom we spoke with could not recall complete details regarding the pilot location selection process that occurred in 2009, including whether additional VAMCs were assessed against the various criteria. 
As a result, it is not possible for VA or an external party to definitively determine if there were any other VAMCs that could have been viable pilot locations beyond the 13 considered by the Dialysis Workgroup. The transparency of the pilot site selection process was further compromised by the manner in which the Durham VAMC was selected as a pilot location. Dialysis Workgroup officials did not document their rationale for selecting this VAMC—a site not included in the original 13 potential pilot locations—as one of the four final pilot locations. According to Dialysis Workgroup officials, the VAMC in Salisbury was originally selected as one of the four final pilot locations; however, this VAMC was undergoing managerial changes at that time, and the VAMC in Durham, located in the same VISN, was selected as a replacement pilot location. In April 2012, Dialysis Workgroup officials reported that the Durham VAMC’s high number of veterans receiving dialysis, as well as its nephrology expertise, also contributed to its selection as one of the four final pilot locations. However, VA did not document either the initial selection of the VAMC in Salisbury as a pilot location or the rationale for why the VAMC in Durham was a better final selection for the Dialysis Pilot than one of the other 13 potential pilot locations that was not selected. The lack of documentation on this particular selection further reduced the transparency of the decision-making process.

VA Central Office officials do not have complete information about how or why pilot locations were selected because key decisions and the rationale behind them were not documented. Such documentation of decision-making processes is necessary to ensure that VA decision makers have access to relevant, reliable, and timely information and could follow a rigorous and fair decision-making process for this critical aspect of the Dialysis Pilot. 
The lack of documentation related to a key planning decision—such as the complete process used to select pilot locations—limits VA’s ability to access this information in the future, evaluate the success of the Dialysis Pilot, and make decisions about how best to expand the pilot to additional locations.

It is not possible to determine whether pilot locations completed reliable cost estimations because these estimates are not consistent and comparable. This will limit VA’s ability to determine if the Dialysis Pilot has met its mission to reduce the cost of dialysis treatments paid for by VA. Reliable cost estimates are necessary to ensure that pilot location costs are comparable across the four pilot locations. Generating reliable and comparable cost estimates prior to opening the pilot locations was critical to the early implementation of the Dialysis Pilot in order to ensure that appropriate site-specific baseline cost estimates were generated that would allow VA to evaluate the cost of the Dialysis Pilot and ensure that any cost savings generated by the pilot locations could be accurately calculated. The importance of thorough and reliable cost estimation processes was included in VA’s own business analysis of the Dialysis Pilot, which stated that pilot locations were intended to use the same cost estimation methodology to facilitate uniformity and ensure that all pilot locations produced reliable information.

To its credit, the Dialysis Workgroup worked with VA systems redesign engineers to develop a sophisticated cost estimation model to help VISN and VAMC officials estimate costs for their pilot locations. VA systems redesign engineers built the cost estimation model using validated research as the foundation for its general baseline cost estimates. The cost estimation model included information on several aspects of establishing and operating an outpatient dialysis clinic—including equipment costs, leased-space costs, staff costs, and veteran travel costs. 
(See fig. 1.) Despite this effort to build a robust model for estimating Dialysis Pilot costs, VA did not maintain proper control over the cost estimation model following its release for use by VISNs and VAMCs. While there were designated areas in the cost estimation model for each pilot location to enter its specific cost inputs, a VA systems redesign engineer we spoke with explained that the formulas in the cost estimation model should not be customized by pilot locations. These formulas were validated by VA systems redesign engineers and were meant to remain constant across all pilot locations to ensure that comparable and consistent cost estimates were produced. However, VA Central Office requested that the cost estimation model be fully customizable—including all cost inputs and formulas—in order to encourage pilot locations to use the model. Because the model was fully customizable, some pilot locations both appropriately altered their pilot location-specific inputs and inappropriately altered the formulas that were intended to remain constant. As a result of the inappropriately altered formulas, final cost estimates for the four pilot locations are inconsistent and do not include comparable information. We found several inconsistencies in pilot locations’ use of the cost estimation model, including the following:

Not all pilot locations used validated formulas for developing cost estimates. We found that not all pilot locations used the validated formulas developed by VA systems redesign engineers to calculate their cost estimates. For example, Philadelphia’s cost estimation model did not use the validated formula for calculating the pilot location’s equipment costs. While the validated equipment cost formula included a patient transport cardiac monitor for each dialysis pilot location, the Philadelphia pilot location’s model omitted this equipment from its calculation. 
Also, instead of using the validated formulas, the Cleveland pilot location deleted some formulas related to annual patient demand and leased-space costs and replaced them with specific numeric values. As a result of these changes, some of Cleveland’s cost estimations cannot be confidently compared with those from the other three pilot locations because it is unclear what the location used to calculate these numeric values. Consistent use of the cost estimation model and its validated formulas is necessary to ensure that the cost estimations of each pilot location can be compared and evaluated.

Pilot location capacity changes. Two pilot locations increased the capacity of their outpatient dialysis clinics, despite the fact that the Dialysis Workgroup specifically established clear capacity limits and the cost estimation model was developed for these specific capacity limits. According to the Dialysis Workgroup and the Dialysis Pilot approval document signed by the Secretary of Veterans Affairs, each pilot location’s capacity was limited to 12 dialysis stations that could provide up to 48 veterans with dialysis treatments each week. However, the Fayetteville pilot location increased its capacity from 12 to 16 dialysis stations and the Cleveland pilot location increased its capacity from 12 to 20 dialysis stations. These capacity increases were not validated by VA systems redesign engineers, and as a result, it is unclear how these changes may affect the efficiency of these pilot locations. In addition, it is unclear whether these capacity increases were approved by VA Central Office since the size of these two pilot locations is larger than what was originally approved by the Secretary of Veterans Affairs.

According to Dialysis Workgroup officials, pilot location-specific baseline cost estimates were included in VA’s own business analysis of the Dialysis Pilot. 
However, the baseline cost estimates included in this document are unreliable for the following reasons:

The baseline cost estimates included in VA’s business analysis of the Dialysis Pilot are based on the assumption that all pilot locations will be limited to 12 dialysis stations. However, the Cleveland and Fayetteville pilot locations currently have considerably more dialysis stations, with 20 and 16 dialysis stations, respectively.

The baseline cost estimates included in VA’s business analysis of the Dialysis Pilot were generated prior to the cost estimation model’s distribution to pilot locations for customization. Therefore, these estimates do not account for the pilot location-specific customization of several model inputs, such as actual leased-space costs. During the site-specific customization process, several of the costs associated with these customizable inputs increased significantly due to either changes in pilot location size or other factors. For example, the Cleveland pilot location’s customized model includes about $400,000 in annual lease expenses, while the baseline cost estimate for Cleveland’s lease expenses from VA’s business analysis is only about $220,000.

VA Central Office officials did not provide VISN and VAMC officials with clear and timely written guidance or instructions on how to pay back start-up funds or how to calculate cost savings from the Dialysis Pilot. VA Central Office provided a total of approximately $10 million in start-up funding for the Dialysis Pilot to the three VISNs associated with the pilot locations. Each pilot location received about $2.5 million in start-up funding to establish its outpatient dialysis clinic. Pilot locations are expected to achieve cost savings through the outpatient dialysis clinics and to repay their start-up funding. 
Specifically, the memorandum approving the Dialysis Pilot signed by the Secretary of Veterans Affairs and the Under Secretary for Health states that pilot location start-up funds are to be repaid in two equal payments in fiscal year 2012 and fiscal year 2014. A lack of communication—including ongoing discussions, reporting, and guidance—regarding the repayment of pilot location start-up funds could make it difficult for VA and external parties to determine if the pilot locations are making reasonable progress toward repaying these funds and realizing cost savings from the Dialysis Pilot.

VISN and VAMC officials reported a lack of clarity about how Dialysis Pilot start-up funds must be repaid. Specifically, officials from two of the three VISNs associated with pilot locations—VISNs 4 and 10—told us that they have not discussed the repayment of their pilot locations’ start-up funding with VA Central Office. Similarly, officials from the VAMCs associated with the two operational pilot locations acknowledged their understanding that start-up fund repayment would likely be included as part of their cost savings calculations, but told us that they were not aware of any specific agreements or plans for repayment.

In addition, while the Dialysis Workgroup provided pilot locations with 5-year cost savings projections and articulated plans for calculating actual cost savings in a document that was published prior to the approval of the Dialysis Pilot, VISN officials we spoke with were uncertain about how cost savings would be calculated. According to VA’s own business analysis of the Dialysis Pilot, actual cost savings will be calculated by comparing the cost per treatment at each pilot location to actual fee basis per-treatment rates for each pilot location’s corresponding VAMC. The business analysis also states that cost savings calculations will be a collaborative effort between pilot location leadership, VA researchers, and the VHA Chief Business Office. 
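The per-treatment comparison described in VA's business analysis can be illustrated with a short sketch. All inputs below are assumptions for illustration: the in-house cost, treatment schedule, and operating weeks are not VA figures, and the fee basis rate used is simply the upper end of the multi-VISN-negotiated range ($248 to $310) cited earlier in this report.

```python
def pilot_savings(pilot_cost_per_treatment: float,
                  fee_basis_rate: float,
                  treatments: int) -> float:
    """Savings from providing treatments in-house rather than via fee basis."""
    return (fee_basis_rate - pilot_cost_per_treatment) * treatments

# Illustrative inputs: a 12-station clinic treating 48 veterans per week,
# assuming a thrice-weekly schedule over a 50-week operating year.
annual_treatments = 48 * 3 * 50          # 7,200 treatments per year (assumed)
savings = pilot_savings(
    pilot_cost_per_treatment=250.0,      # assumed in-house per-treatment cost
    fee_basis_rate=310.0,                # upper end of the reported range
    treatments=annual_treatments,
)
print(f"Illustrative annual savings: ${savings:,.0f}")  # $432,000
```

Under these assumed inputs, a single clinic would need several years to recover its roughly $2.5 million in start-up funding, which underscores why consistent savings calculations and clear repayment guidance matter for evaluating the pilot.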
However, VISN officials stated that while they expect that the cost savings from their pilot locations will be examined, they did not receive written guidance about how cost savings will be calculated. Given the lack of specific and timely guidance to VISN and VAMC officials on the calculation of cost savings, officials at the pilot locations may not use the same methodology to track these savings.

VA Central Office has not yet determined how it will define success for the Dialysis Pilot or created clear performance measures linked to the four Dialysis Pilot goals. When other leading public sector organizations are engaged in efforts to improve their performance and help their organizations become more effective—similar to VA’s goals for the Dialysis Pilot—we found that these organizations commonly take three steps: (1) define a clear mission and goals, (2) measure performance to gauge progress toward achieving goals, and (3) use performance information as a basis for decision making.

Step 1—Defining a clear mission and goals. VA has completed the first of these steps by defining a clear mission and goals for the Dialysis Pilot. Specifically, the Dialysis Workgroup noted that the Dialysis Pilot would allow VA to develop a cost-effective business model that could be used to optimize VA’s resources and increase its capacity to provide dialysis treatment to veterans. This workgroup also outlined four goals of the Dialysis Pilot: (1) improved quality of care, (2) increased access for veterans, (3) additional dialysis research opportunities, and (4) cost savings for VA-funded dialysis treatments. Through the participation of its membership in developing this mission and these goals, the Dialysis Workgroup was able to incorporate the input of several VA internal stakeholders—including VA Central Office representatives, VISN leadership, clinical experts with experience treating veterans with ESRD, and VA systems redesign engineers. 
This process included a thorough assessment of VA’s options for providing dialysis treatments to veterans—including the resources, equipment, and staffing needed to operate a cost-effective outpatient dialysis clinic.

Step 2—Measuring performance to gauge progress. Despite its success in defining a clear mission and goals for the Dialysis Pilot, VA has not developed a clear plan for evaluating the pilot. Specifically, while two pilot locations (Durham and Fayetteville) began treating veterans in June 2011, VA has not yet begun an evaluation of the establishment and management of the pilot locations—including causes for opening delays, operating challenges, or the sufficiency of start-up funding. Previously, we found that developing sound evaluation plans before a pilot program is implemented can increase confidence in results and facilitate decision making about broader applications of the pilot. In March 2012, the VHA Chief Business Office reported that VA is in the early stages of establishing an agreement with a leading university research center to conduct an evaluation of the Dialysis Pilot; however, no target dates were provided for when this evaluation would begin or what aspects of the Dialysis Pilot beyond cost-effectiveness it would evaluate. In addition, VA Central Office has not developed a cohesive strategy for evaluating the Dialysis Pilot and has not yet formally defined its criteria for measuring the performance of the pilot locations or the success of the Dialysis Pilot in general. Several potential performance measures could be used to measure the pilot locations’ progress toward the achievement of each Dialysis Pilot goal:

Improved quality of care. Officials from the Dialysis Workgroup told us that quality assurance outcomes, specifically those used by CMS to certify outpatient dialysis clinics, could be used to assess pilot locations. These metrics would likely help VA assess the quality of dialysis care provided by the pilot locations. 
Increased access for veterans. Dialysis Workgroup officials told us that patient satisfaction information could be used to assess pilot locations. This potential metric could help determine if the pilot locations increased veterans’ access to dialysis care.

Additional dialysis research opportunities. In its business analysis of the Dialysis Pilot, the Dialysis Workgroup recommended that VA fund a 4-year research study to evaluate the quality of care at all pilot locations and identify best practices in veteran dialysis care. According to this business analysis, the findings of this study would enable VA to develop an evidence-based strategy for veteran dialysis care that ensures veterans receive the highest quality of care.

Cost savings for VA-funded dialysis treatments. In its business analysis of the Dialysis Pilot, the Dialysis Workgroup suggested that pilot locations could use the cost estimation model to calculate cost savings generated by the pilot locations by comparing the cost of providing dialysis at each pilot location to the cost of providing this treatment through fee basis providers. However, this potential performance metric may be limited by VA’s failure to maintain control over the cost estimation model or provide sufficient guidance to pilot locations about how to properly use it.

Because VA has not yet developed an evaluation plan or formally defined performance measures for pilot locations, it does not have access to consistent and reliable information on the performance of the pilot locations and may not have this information accessible when it is time to either make midcourse corrections for the Dialysis Pilot or decide whether and how to open additional VA-operated outpatient dialysis clinics.

Step 3—Using performance as a basis for decision making. 
Despite not having fully developed performance measures for assessing the pilot locations, VA has already begun planning for the expansion of the Dialysis Pilot, which should not occur until after VA has defined clear performance measures for the existing pilot locations and evaluated their success. Specifically, a member of the VHA Dialysis Steering Committee told us that the committee has already developed a limited plan for expansion of the Dialysis Pilot. However, this plan does not incorporate the results of a performance assessment for the existing four pilot locations. In addition, VA systems redesign engineers have begun developing three additional cost estimation models despite not having fully evaluated the effectiveness of the cost estimation model used in the Dialysis Pilot. Taken together, these two actions indicate that VA is beginning to make decisions about the future of the Dialysis Pilot and the cost estimation model, even though VA decision makers currently lack critical performance information on the existing four pilot locations.

To its credit, VA developed the Dialysis Pilot as a potential way of addressing the rising cost and utilization of fee basis dialysis treatments among veterans. Through the Dialysis Pilot, VA intends to test the viability of increasing its capacity to provide dialysis treatments in VA-operated outpatient dialysis clinics. VA set four goals for its Dialysis Pilot: (1) improve quality of care, (2) increase access for veterans, (3) provide additional dialysis research opportunities, and (4) achieve cost savings for VA-funded dialysis treatments. While these are commendable goals, there were weaknesses in VA’s planning and early implementation of the Dialysis Pilot that, if not corrected, will make it difficult to determine whether the Dialysis Pilot has met its goals and will provide cost-effective care if expanded. 
Specifically, VA did not conduct a transparent and well-documented pilot location selection process, provide clear and timely guidance to participating VISNs and VAMCs on key financial aspects of the Dialysis Pilot, or articulate clear performance measures for pilot locations. We believe that VA can rectify these weaknesses, but it must act prior to full implementation of the four pilot locations to ensure that the Dialysis Pilot is not compromised and can serve as an effective demonstration effort. Moreover, until VA has reliable data, we believe it would be unwise for VA to expand the Dialysis Pilot beyond the current four pilot locations, as doing so may risk investing resources inappropriately.

Moving forward, we believe four critical areas should be addressed. First, VA must clearly document the selection process it used to identify the existing four pilot locations and may use to identify any future pilot locations. VA relied on a decentralized and ad hoc selection process to choose the existing four pilot locations and failed to properly document the results of this key decision-making effort. Such inattention to documenting critical decisions results in a lack of transparency and weakens the credibility of the Dialysis Pilot. Second, VA needs to ensure that changes to its cost estimation model are reviewed by knowledgeable staff. This step is necessary to ensure that this model produces comparable data for all pilot locations that can serve as an accurate basis for evaluating the financial success of the Dialysis Pilot. To date, pilot locations have altered existing formulas and model assumptions, which has resulted in cost estimation data of questionable reliability that may limit VA’s ability to compare results consistently across the pilot locations. Third, VISN and VAMC officials need specific guidance for the repayment of Dialysis Pilot start-up funds and the calculation of cost savings realized from the pilot locations. 
To date, VA Central Office has not articulated its expectations regarding these two critical aspects of the Dialysis Pilot. Without a clear understanding of the terms for start-up fund repayment, it is difficult for VA or external entities to determine if the four pilot locations are making significant progress toward repaying these funds and generating cost savings that can be used to offset the cost of treating the projected increased number of veterans who will need dialysis treatment in the coming years. Finally, VA must develop clear and measurable performance criteria that can be consistently applied to evaluate the Dialysis Pilot. Despite defining the mission and goals for the Dialysis Pilot, VA has not developed a plan for evaluating its success or developed performance measures to track pilot locations’ progress toward meeting its stated mission and goals. An effective evaluation plan and clear performance measures are needed to help ensure that the Dialysis Pilot operates in an environment of accountability.

To increase VA’s attention to the planning, implementation, and performance measurement of the Dialysis Pilot, we are making five recommendations. To improve VA’s communication related to the Dialysis Pilot, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to ensure that key decisions made regarding pilot location selection and efforts to continue or expand the Dialysis Pilot are clearly documented. To ensure that reasonable cost estimates are created for the Dialysis Pilot and other similar programs, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to restrict or evaluate changes made to cost estimation models at the VISN and VAMC levels that affect pilot development and analysis. 
To ensure that start-up funds are repaid and cost savings are accurately calculated for the Dialysis Pilot, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to develop written guidance about expectations for the repayment of start-up funds and how the cost savings generated by the four pilot locations should be calculated. To ensure that VA Central Office effectively evaluates the Dialysis Pilot, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take the following two actions: (1) develop an evaluation plan that outlines how the Dialysis Pilot will be assessed and provides target dates for the completion of this assessment, and (2) develop clear measures for assessing the performance of the four Dialysis Pilot locations in key areas—including quality, access, and cost. VA provided written comments on a draft of this report, which we have reprinted in appendix II. In its comments, VA generally agreed with our conclusions, concurred with our recommendations, and described the department’s plans to implement each of our five recommendations. VA did not provide any technical comments. In its general comments, VA noted that it has established a comprehensive strategic plan for chronic kidney disease and dialysis services; however, a copy of this strategic plan was not provided as part of VA’s response to the draft report. According to VA, this plan incorporates aspects of several of our recommendations. In addition, VA stated that it is in the process of developing longer-range plans for the expansion of dialysis services, including establishing additional freestanding outpatient dialysis clinics similar to the current four pilot locations. 
We support VA’s efforts to carefully analyze its delivery of dialysis services to veterans, including the most cost-effective method of providing these life-saving medical treatments, and make reasoned decisions based on a thorough evaluation of its current pilot locations. In this regard, we continue to believe that it is unwise to establish additional freestanding outpatient dialysis clinics until the current four pilot locations are fully evaluated and VA rectifies the weaknesses we identified in this report. In its plan for addressing our recommendations, VA stated that it is developing a plan for the Dialysis Pilot that will address three of our recommendations related to (1) the documentation of Dialysis Pilot key decisions, including the selection of future pilot locations; (2) the creation of reasonable cost estimates for pilot locations; and (3) guidance for the repayment of start-up funding and cost savings calculations. According to VA, this plan will ensure better communication and documentation of decisions for future pilot site selections; more rigorous oversight of cost estimation tools and analysis of financial and clinical outcomes; and a thorough analysis of start-up fund repayment, including whether VA will reverse its decision to require repayment of these funds. VA’s anticipated completion date for these actions is July 1, 2012. Finally, to address our remaining two recommendations, VA intends to develop a detailed evaluation plan for the Dialysis Pilot by July 1, 2012. According to VA, this plan will include specific criteria, target dates, and activities that must occur throughout the remainder of the pilot. VA intends to use this plan to periodically review and evaluate the Dialysis Pilot. In addition, VA described its efforts to significantly enhance the decision-making tools used for the Dialysis Pilot, including the cost estimation model. 
VA reported that these enhancements will include more rigorous accounting for facility costs, such as those for staffing and equipment. In addition, VA plans to task its systems redesign engineers with assessing pilot locations’ performance using metrics for cost, access, and quality. We are sending copies of this report to the Secretary of Veterans Affairs, appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Table 1 provides additional information on the current operating status of the four Dialysis Pilot locations at the Department of Veterans Affairs (VA) medical centers (VAMC) in Durham, North Carolina; Fayetteville, North Carolina; Philadelphia, Pennsylvania; and Cleveland, Ohio. In addition to the contact named above, Marcia A. Mann, Assistant Director; Kathleen Diamond; Katherine Nicole Laubacher; Rebecca Rust; and Malissa G. Winograd made key contributions to this report. Lisa Motley provided legal support.
Veterans diagnosed with end-stage renal disease—a condition of permanent kidney failure—represent one of the most resource-intensive patient populations at VA. These veterans are often prescribed dialysis, which is a life-saving and relatively expensive medical procedure that removes excess fluids and toxins from the bloodstream. VA began developing its Dialysis Pilot in 2009 with four goals: (1) improved quality of care, (2) increased veteran access, (3) additional medical research opportunities, and (4) cost savings. Through this pilot, VA will establish four VA-operated outpatient dialysis clinics in communities surrounding select VA medical centers by the end of fiscal year 2012 using start-up funding provided by VA Central Office. Pilot locations are expected to achieve cost savings and to repay their start-up funding. GAO examined VA’s planning and early implementation efforts for the Dialysis Pilot, and how VA plans to evaluate the pilot. GAO reviewed relevant VA documents, including those related to pilot location selection and cost estimation, and spoke with VA officials responsible for overseeing the Dialysis Pilot and representatives from all pilot locations. GAO found a number of weaknesses in the Department of Veterans Affairs’ (VA) execution of the planning and early implementation phases of the Dialysis Pilot. These weaknesses involved pilot location selection, cost estimation practices, and cost savings calculations that could collectively limit the achievement of the pilot’s goals. Specifically, VA did not do the following: Appropriately document its pilot location selection process. VA did not maintain a clear and transparent pilot location selection process; it did not document how its criteria for pilot location selection were applied to all 153 VA medical centers (VAMC) or why substitutions in pilot locations were made. 
However, VA officials reported that several criteria, including dialysis patient prevalence and average treatment costs, were used to select the pilot locations in Durham and Fayetteville, North Carolina; Philadelphia, Pennsylvania; and Cleveland, Ohio. Produce consistent and comparable cost estimates for pilot locations. VA did not complete consistent and comparable cost estimates for the four pilot locations. Specifically, GAO found several cases where pilot locations did not complete reliable cost estimates because they made changes to formulas and assumptions of the Dialysis Pilot cost estimation model, which was developed by VA systems redesign engineers. Provide clear and timely guidance on start-up fund repayment and cost savings calculations. VA did not provide Veterans Integrated Service Network and VAMC officials with clear and timely written guidance or instructions on how to pay back start-up funds, or how to calculate cost savings generated by the pilot locations. VA Central Office has not yet determined how it will achieve its goals for the Dialysis Pilot or created clear performance measures for the pilot locations. Previously, GAO found that leading public sector organizations take three steps to improve their performance and help their organizations become more effective: (1) define a clear mission and goals, (2) measure performance to gauge progress toward achieving goals, and (3) use performance information as a basis for decision making. While VA has defined a clear mission and goals for the Dialysis Pilot, it has only made limited progress in the remaining two steps. In March 2012, VA reported that it was in the early stages of establishing an agreement with a leading university research center to conduct an evaluation of the Dialysis Pilot; however, no target dates were provided for when this evaluation would begin or what aspects of the Dialysis Pilot it would evaluate. 
Because VA has not yet developed an evaluation plan or defined performance measures for pilot locations, it is not collecting consistent and reliable information on the performance of the pilot locations and thus may not have this information available when it is time to either make midcourse corrections to the Dialysis Pilot or decide whether and how to open additional VA-operated outpatient dialysis clinics. VA officials also told GAO they have developed a limited plan for expanding the Dialysis Pilot despite not having access to performance information on the existing four pilot locations. Among other actions, GAO recommends that VA improve its Dialysis Pilot by providing guidance for start-up fund repayment, as well as developing an evaluation plan that includes performance measures for the pilot locations. VA concurred with GAO’s recommendations and provided an action plan to address them.
Our updates were limited to reviewing certain publicly available information, such as the most recent strategic plan released by the IACC. Funding autism research on the same topic may be appropriate and necessary—for example, for purposes of replicating or corroborating results—but in some instances, funding similar autism research may lead to unnecessary duplication and inefficient use of funds. Most agency officials we spoke with said that they consider the research funded by their agencies to be different from autism research funded by other agencies; however, we found that each research area included projects funded by at least four agencies. For example, the diagnosis research area included projects funded by seven different agencies. The most commonly funded projects were in the area of biology (423 projects), followed by treatment and interventions (253 projects), and causes (159 projects). NIH funded a majority of the autism research projects in five of the seven research areas. (See fig. 1.) Five agencies that funded non-research autism-related activities from fiscal years 2008 through 2011—the Administration for Community Living (ACL), CDC, the Department of Defense (DOD), the Department of Education (Education), and the Health Resources and Services Administration (HRSA)—funded activities that were not duplicative. HRSA and Education both funded training activities related to autism. HRSA’s activities included training health care professionals, such as pediatric practitioners, residents, and graduate students, to provide evidence-based services to children with autism and other developmental disabilities and their families. The activities also included training specialists to provide comprehensive diagnostic evaluations to address the shortage of professionals who can confirm or rule out an autism diagnosis. 
Education’s training activities focused on the education setting—for example, preparing personnel in special education, related services, early intervention, and regular education to work with children with disabilities, including autism. Additionally, DOD and ACL both funded a publicly available website to provide information on services available to individuals with autism. DOD’s website was developed to provide military families with information on the educational services close to specific military installations in select states, while ACL’s website is broader, focusing on all individuals with autism and other developmental disabilities, their families, and other key stakeholders concerned with autism. Finally, we determined that CDC is the only agency funding an awareness campaign on autism and other developmental disabilities. CDC’s “Learn the Signs. Act Early.” campaign promotes awareness of healthy developmental milestones in early childhood, the importance of tracking each child’s development, and the importance of acting early if there are concerns. We noted in our November 2013 report that the IACC and federal agencies may have missed opportunities to coordinate federal autism activities and reduce the risk of duplication of effort and resources. Although the CAA requires the IACC to coordinate HHS autism activities and monitor federal autism activities, OARC officials stated that the prevention of duplication among individual projects in agency portfolios is not specified in the CAA as one of the IACC’s statutory responsibilities and therefore is not a focus of the IACC. OARC officials stated that it was up to the individual federal agencies to use the information contained in the IACC’s strategic plan and portfolio analysis to prevent duplication. 
Officials from three federal agencies—CDC, DOD, and NIH—told us that they use the strategic plan and portfolio analysis, which are key documents used by the IACC to coordinate and monitor federal autism activities, when setting priorities for their autism programs and to learn of autism activities conducted by other agencies. OARC officials acknowledged that the IACC could choose to use data from the portfolio analysis as the basis for specific recommendations regarding areas where interagency coordination could be increased, but to date this has not occurred. OARC officials stated that they do not consider it to be their responsibility to review the data that they collect on behalf of the IACC for duplication or for coordination opportunities. Instead, they said that they fulfill their role in assisting the IACC in its cross-agency coordination activities in other ways, such as by facilitating interagency communication and gathering information. In our November 2013 report, we recommended that the Secretary of Health and Human Services direct the IACC and NIH, in support of the IACC, to identify projects through their monitoring of federal autism activities—including OARC’s annual collection of data for the portfolio analysis and the IACC’s annual process to update the strategic plan—that may result in unnecessary duplication and thus may be candidates for consolidation or elimination, and to identify potential coordination opportunities among agencies. HHS did not concur with our recommendation. The agency stated that such an analysis by the IACC to identify duplication would not likely provide the detail needed to determine actual duplication, and that the role of the IACC should not include identification of autism-related projects for elimination. We agree that further analysis would be needed to identify actual duplication. 
While the strategic plan objectives, which represent broad and complex areas of research, are useful to identify the potential for unnecessary duplication, we believe that such identification is worthwhile as it can effectively lead to further review by the funding agencies to ensure funds are carefully spent. Agencies can review specific project information to confirm whether research projects associated with an objective are, for example, necessary to replicate prior research results. While funding more than one study per objective may often be worthwhile and appropriate, this type of analysis by agencies would help provide assurance that agencies are not wasting federal resources due to unnecessary duplication of effort. Further, such an analysis could help identify research needs—such as research that is needed to complement or follow up on prior research, or research that requires further corroboration—and move autism research forward in a coordinated manner. We also question the purpose of using federal resources to collect data if the data are not then carefully examined to ensure federal funds are being used appropriately and efficiently. Further, we found that the IACC’s efforts to coordinate HHS autism research and monitor all federal autism activities were hindered due to limitations with the data it collects. For example, the guidance and methodology for determining what projects constitute research, and therefore should be included in the portfolio analysis, have changed over the years. As a result, the projects included in the portfolio analysis have varied. Such inconsistency makes it difficult to accurately determine how much of an increase in the funding of autism research was due to an actual increase in research versus the inclusion of more projects in the analysis. 
Additionally, the portfolio analysis and strategic plan contain limited information on non-research autism-related activities, and the IACC did not have a mechanism to collect information on such activities. In our November 2013 report, we made recommendations that the Secretary of Health and Human Services direct the IACC and NIH, in support of the IACC, to provide consistent guidance to federal agencies when collecting data for the portfolio analysis so that information can be more easily and accurately compared over multiple years; and create a document or database that provides information on non-research autism-related activities funded by the federal government, and make this document or database publicly available. HHS did not concur with these recommendations. HHS emphasized that, when collecting data for the portfolio analysis, it has balanced the need for consistency with the need to be responsive to feedback from the IACC and from those participating in the portfolio analysis. While we agree with HHS that it is important to be responsive to feedback and make adjustments to guidance as necessary to improve data collection, we believe that annual changes of the type we observed are not productive. Guidance should be developed so that accurate, consistent, and meaningful comparisons of changes in federal funding of autism research can be made over time and used to inform future funding decisions. Additionally, HHS commented that information on non-research autism-related activities was publicly accessible through a report to Congress that the CAA, and its reauthorization in 2011, required of HHS. 
While this document could be a starting point from which the IACC could begin to regularly catalog non-research autism-related activities, we believe that having a document or database that contains current and regularly updated information on these activities is an important aspect of fulfilling the IACC’s responsibility to monitor all federal autism activities, not just research. We also reported in November 2013 that the data used by the IACC were outdated and not tracked over time, and therefore not useful for measuring progress on the strategic plan objectives or identifying gaps in current research needs. Although the IACC did not examine research projects over time, our analysis found that, when looking across multiple years, some agencies funded more autism research projects than were suggested in the associated strategic plan objective, whereas other objectives were not funded by an agency. Recently, in April 2014, the IACC released an update of its strategic plan. This plan included the number of research projects funded from fiscal years 2008 through 2012 under each objective, and the corresponding funding amounts, which may help identify those objectives that have received more funding than others. Although OARC collected specific information on the more recently funded projects—those funded in fiscal years 2011 and 2012—this information was not included in the plan. Detailed project information is needed to effectively coordinate and monitor autism research across the federal government and avoid duplication. NIH did not provide information indicating that it has policies requiring program officials to actually search its database of funded research before awarding each research grant. 
Several agency officials also stated that they rely on their peer reviewers, other experts, and project officers to have knowledge of the current autism research environment. As established in our recent duplication work, it is important for agencies that fund research on topics of common interest, such as autism, to monitor each other’s activities. Such monitoring helps maximize the effectiveness and efficiency of federal investments and minimize the potential for the inefficient use of federal resources due to unnecessary duplication. To promote better coordination among federal agencies that fund autism research and avoid the potential for unnecessary duplication before research projects are funded, we recommended that the Secretary of Health and Human Services, the Secretary of Defense, the Secretary of Education, and the Director of the National Science Foundation (NSF) each determine methods for identifying and monitoring the autism research conducted by other agencies, including by taking full advantage of monitoring data the IACC develops and makes available. DOD concurred with our recommendation to improve coordination among federal agencies, and comments from Education, HHS, and NSF suggested that these agencies support improving the coordination of federal autism research activities. However, Education, HHS, and NSF disputed that any duplication occurs. We agree that more information on the specific projects funded within each objective would need to be assessed in order to determine actual duplication. However, the fact that research is categorized to the same objectives suggests that there may be duplicative projects being funded. During the course of our work, Education, HHS, and NSF did not provide any information to show that they had reviewed research projects to ensure that they were not unnecessarily duplicative. Chairman Mica, Ranking Member Connolly, and Members of the Subcommittee, this concludes my prepared statement. 
I would be pleased to respond to any questions that you may have. For questions about this statement, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this statement include Geri Redican-Bigott, Assistant Director; Deirdre Brown; Sandra George; Drew Long; and Sarah Resavy. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Autism—a developmental disorder involving communication and social impairment—is an important public health concern. From fiscal years 2008 through 2012, 12 federal agencies awarded at least $1.4 billion to support autism research and other autism-related activities. The Combating Autism Act directed the IACC to coordinate HHS autism activities and monitor all federal autism activities. It also required the IACC to develop and annually update a strategic plan for autism research. This plan is organized into 7 research areas that contain specific objectives. This statement is based on GAO’s November 2013 report, GAO-14-16, with selected updates. It discusses federal autism activities, including (1) the extent to which federal agencies fund potentially duplicative autism research, and (2) the extent to which the IACC and agencies coordinate and monitor federal autism activities. GAO analyzed agencies’ data and documents, and interviewed federal agency officials. Eighty-four percent of the autism research projects funded by federal agencies had the potential to be duplicative. Of the 1,206 autism research projects funded by federal agencies from fiscal years 2008 through 2012, 1,018 projects were potentially duplicative because the projects were categorized to the same objectives in the Interagency Autism Coordinating Committee’s (IACC) strategic plan. Funding similar research on the same topic is sometimes appropriate—for example, for purposes of replicating or corroborating results—but in other instances funding similar research may lead to unnecessary duplication. Each agency funded at least 1 autism research project in the same strategic plan objective as another agency, and at least 4 agencies funded autism research in the same research area. The IACC and federal agencies may have missed opportunities to coordinate and reduce the risk of duplicating effort and resources. 
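The 84 percent figure above follows directly from the project counts cited in the report; a minimal check of the arithmetic:

```python
# Counts from the report: 1,018 of 1,206 federally funded autism research
# projects were categorized to the same strategic plan objectives as
# projects funded by another agency, making them potentially duplicative.
potentially_duplicative = 1_018
total_projects = 1_206

share = potentially_duplicative / total_projects
assert round(share * 100) == 84  # matches the "Eighty-four percent" cited
```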
GAO found that the IACC is not focused on the prevention of duplication, and its efforts to coordinate the Department of Health and Human Services' (HHS) autism research and monitor all federal autism activities were hindered by limitations with the data it collects. Apart from federal agencies' participation on the IACC, there were limited instances of agency coordination, and the agencies did not have robust or routine procedures for monitoring federal autism activities. GAO recommended in November 2013 that HHS improve IACC data to enhance coordination and monitoring. HHS disagreed and stated its efforts were already adequate. GAO also recommended that DOD, Education, HHS, and NSF improve coordination. The agencies supported improved coordination, but most disputed that duplication occurs. GAO continues to believe the recommendations are warranted and actions needed.
NQF is a nonprofit organization established in 1999 that fosters agreement on national standards for measurement and public reporting of health care performance data. Its membership includes more than 400 organizations that represent multiple sectors of the health care system, including providers, consumers, and researchers. NQF uses a consensus development process to evaluate and endorse consensus standards, including quality measures, best practices, frameworks, and reporting guidelines. NQF has endorsed over 600 quality measures in 27 areas, such as cancer and diabetes. NQF endorses quality measures developed by other organizations, such as the Joint Commission, the National Committee for Quality Assurance, and the American Medical Association, rather than developing quality measures itself. HHS has used a number of NQF-endorsed measures in initiatives to promote quality measurement, and NQF continues to endorse quality measures separate from this contract. MIPPA established five duties related to the use of quality measures. See table 1 for a description of the duties. For the NQF contract, HHS selected a cost-plus-fixed-fee contract—NQF’s first cost-reimbursement contract. Under the cost-plus-fixed-fee contract, HHS will reimburse NQF for costs incurred under the contract in addition to a fixed fee that is paid regardless of other costs. Cost-plus-fixed-fee contracts are used for efforts such as research, design, or study efforts where cost and technical uncertainties exist and it is desirable to retain as much flexibility as possible in order to accommodate change. However, this type of contract provides only a minimum incentive to the contractor to control costs. As we reported in 2009, these contracts are suitable when the cost of work to be done is difficult to estimate and the level of effort required is unknown. 
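The payment structure of a cost-plus-fixed-fee contract can be sketched with hypothetical amounts (the figures below are illustrative only and are not drawn from the NQF contract): the government reimburses allowable incurred costs, and the fee is fixed regardless of cost growth, which is why this contract type gives the contractor only a minimum incentive to control costs.

```python
def cost_plus_fixed_fee(incurred_costs: int, fixed_fee: int) -> int:
    """Total government payment under a cost-plus-fixed-fee contract:
    allowable incurred costs are reimbursed, plus a fee that does not
    vary with actual costs."""
    return incurred_costs + fixed_fee

# Hypothetical amounts for illustration only.
assert cost_plus_fixed_fee(1_000_000, 75_000) == 1_075_000
# If costs overrun, the fee stays the same; only reimbursed costs rise,
# so the contractor's profit (the fee) is unaffected by cost growth.
assert cost_plus_fixed_fee(1_300_000, 75_000) == 1_375_000
```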
Under the FAR, cost-reimbursement contracts may only be used when the contractor’s accounting system is adequate for determining costs under the contract to help prevent situations where contractors bill the government for unallowable costs. One method an agency can use to determine if an accounting system is adequate is to perform a preaward survey of a potential contractor’s accounting system prior to awarding a contract. This review serves as a key control to determine whether the potential contractor has an adequate accounting system in place to accurately and consistently record costs and submit invoices for costs. HHS conducted two preaward surveys of NQF’s accounting system. HHS’s initial review, in November 2007, found that NQF’s accounting system was inadequate because the system could not identify and separate unallowable costs, among other issues. NQF subsequently replaced its accounting system, and a second HHS review in November 2008 found that the system was adequate. Under the FAR, contracts are to contain provisions for agency approval of a contractor’s subcontracts. HHS’s contract with NQF contains this provision and also requires the approval of consultants. This review requires appropriate support documentation provided by the contractor to the agency, including a description of the services to be subcontracted, the proposed subcontract price, and a negotiation memo that reflects the principal elements of the subcontract price negotiations between the contractor and subcontractor. Two HHS components are principally responsible for administering the NQF contract: the office of the Assistant Secretary for Planning and Evaluation (ASPE) and the Centers for Medicare & Medicaid Services (CMS)—an operational division within HHS. To conduct oversight of the NQF contract, HHS assembled staff in these two units with experience in acquisitions, contract management, and program management. 
Specifically, the project officer for the NQF contract, responsible for program management and performance assessment, is a representative of ASPE. The contracting officer for the NQF contract, responsible for administering the contract, is a representative of CMS. The contracting officer and project officer should perform a comprehensive review of contractor invoices to determine if the contractor is billing costs in accordance with the contract terms and applicable government regulations. As of January 13, 2010—the end of the first year of HHS’s 4-year contract with NQF to implement the MIPPA duties—NQF had begun work for each of the five duties required by MIPPA related to health care quality measures: (1) make recommendations on a national strategy and priorities; (2) endorse quality measures; (3) maintain endorsed quality measures; (4) promote electronic health records; and (5) report annually to Congress and the Secretary of HHS. While NQF began work for each of the duties in the first contract year, HHS determines on an annual basis the specific work NQF will be expected to perform under the five MIPPA duties in each contract year. Recommendations on a National Strategy and Priorities for Quality Measurement. NQF has taken steps to begin the duty of making recommendations on a national strategy and priorities for quality measurement. In October 2009, NQF established a committee of stakeholders that is expected to develop recommendations about a national strategy and priorities for quality measurement. NQF published the recommended priorities in May 2010. The committee’s recommendations are expected to be based on a synthesis of evidence that NQF has collected, using a subcontractor, on 20 conditions that account for the majority of Medicare’s costs. The subcontractor collected evidence on existing quality measures for these conditions and identified gaps where quality measures did not exist. 
The subcontractor also collected evidence related to each condition, such as information on each condition’s prevalence, treatment costs, variability in providers’ treatment of the condition, disparities in treatment for patients with the condition, and potential to improve quality of care for the condition. The committee is expected to consider this evidence when developing recommendations on a national strategy and priorities for quality measurement. Under PPACA, NQF’s recommendations on a national strategy and priorities must be considered by HHS when it develops a national strategy for quality improvement, which HHS is required to submit to Congress by January 1, 2011. Endorsement of Measures. NQF has taken steps to provide for the endorsement of quality measures. Prior to its contract with HHS, NQF established a process for endorsing quality measures. Under this process, organizations that develop quality measures submit them to NQF for consideration, in response to specific solicitations by NQF. NQF forms a committee of experts from its member organizations as well as other organizations and agencies to review these quality measures against NQF-established criteria, such as the usability and feasibility of the measure. After this committee evaluates the measures against these criteria, NQF’s process allows for a period during which its member organizations and the public may comment on the committee’s recommendation for each measure. The process also provides for a period for its member organizations to vote on whether the measures should be endorsed by NQF as a national standard. Ultimately, NQF’s board of directors makes a final decision on whether NQF should formally endorse the measures. (See app. I for a detailed description of NQF’s endorsement process.) In order to provide for the endorsement of quality measures under this duty, NQF has taken several steps. 
Specifically, NQF initiated three projects and, for each, solicited measures to be endorsed using its established process. These projects relate to quality measurement in nursing homes, patient safety, and patient outcomes, and are scheduled to be completed between December 2010 and May 2011. In addition to endorsing measures, NQF hired a subcontractor to evaluate its endorsement process and recommend ways to improve its efficiency and effectiveness. The subcontractor’s report and NQF’s approval of proposed enhancements to the process are due in January 2011. Maintenance of Endorsed Quality Measures. NQF has taken steps to ensure that endorsed measures are maintained—that is, updated or retired. Prior to its contract with HHS, NQF established a process for maintenance of measures. According to NQF, once a quality measure has been endorsed, updated information on the measure’s specifications should be submitted to NQF annually and the measure should be comprehensively reviewed under the maintenance process every 3 years. NQF’s maintenance process is similar to NQF’s endorsement process, in that it involves a review of measures against NQF-established criteria, a period for public comment, and a final decision by NQF’s board of directors. In order to implement this process under its contract with HHS, NQF began maintenance reviews for 191 measures in 14 areas such as diabetes and cardiovascular care. The measures were identified by HHS as being of interest to, or actually used by, HHS programs. By the end of the first contract year, NQF had not determined completion dates for maintenance of the 191 measures. As of May 2010, maintenance of the 191 measures identified by HHS is scheduled to be completed by the end of 2012. Promotion of the Development and Use of Electronic Health Records. NQF has taken steps towards completing the duty of promoting the development and use of electronic health records for use in quality measurement.
As of January 13, 2010, NQF had begun to implement a framework that defines a standardized set of data that should be captured in patients’ electronic health records. The framework, known as the Quality Data Set (QDS), is intended to allow data from electronic health records to be collected and used in quality measurement. Implementation and maintenance of the QDS is scheduled to continue through the end of the 4-year contract, which ends January 13, 2013. To further promote the development and use of electronic health records in quality measurement, NQF began additional activities. For example, NQF established a panel of experts to recommend additional capabilities to measure utilization. According to NQF officials, efforts under this duty are scheduled for completion between March 2010 and January 2013. Annual Report to Congress and the Secretary of HHS. NQF submitted its first annual report to Congress and the Secretary of HHS on March 1, 2009. HHS published this report, with its comments, in the Federal Register on September 10, 2009. NQF submitted its second annual report, which also covers activities it performed during the first contract year, to Congress and the Secretary on March 1, 2010. While NQF has begun work for each of the duties in the first contract year, HHS determines on an annual basis the specific work NQF will be expected to perform under the five MIPPA duties each contract year. Specifically, HHS gives direction for and then approves annual plans that NQF develops. These plans can include work begun in prior contract years that has not been completed. HHS can adjust work in the annual plans in support of each of the five duties. For example, HHS officials told us that in future contract years, they may select additional projects for the endorsement of quality measures, and additional measures for maintenance reviews. 
NQF reported costs and fixed fees totaling approximately $6.5 million for the first year of its contract with HHS, which ended January 13, 2010. The amount NQF reported included direct and indirect costs, as well as fixed fees. Direct costs, which are costs incurred specifically for this contract, represented the largest percentage—about $3.2 million, or 49 percent—of the amount NQF reported (see fig. 1). NQF’s reported direct costs were largely labor costs for NQF employees and payments to subcontractors and consultants. In addition to direct costs, NQF reported about $2.9 million in indirect costs for the first contract year. Indirect costs cover additional items, such as employee benefits, overhead, and administrative costs. NQF calculates its indirect costs based on a formula that takes into account an indirect-cost rate approved by HHS and the amounts of certain direct costs. For example, the formula estimates indirect costs such as employee benefits by multiplying an indirect-cost rate by the amount of direct costs for labor. Finally, in addition to its direct and indirect costs, NQF reported fixed fees of approximately $360,000 during the first contract year. HHS pays these fixed fees to NQF in addition to reimbursing the organization for its costs. Of the approximately $6.5 million in costs and fixed fees NQF reported for the first contract year, most were incurred in the second half of the contract year. Costs and fixed fees in the second half of the contract year, from July 1, 2009, to January 13, 2010, totaled over $5 million. NQF staff told us that costs in the first half of the contract year were primarily for activities such as development of solicitations for subcontractors. Costs in the second half of the contract year were primarily for activities related to quality measurement, such as endorsement of quality measures and promotion of electronic health records for use in quality measurement. 
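The cost arithmetic described above can be sketched in a few lines. Only the three reported totals come from the report; the rate and direct-labor figure passed to `estimate_indirect` are hypothetical illustrations, since the report does not disclose NQF's actual HHS-approved indirect-cost rate.

```python
# Sketch of NQF's reported first-contract-year costs and fixed fees.
# The three totals are from the report; any rate or labor amount applied
# in estimate_indirect() is a hypothetical illustration.

def estimate_indirect(direct_labor: float, rate: float) -> float:
    """Estimate indirect costs (e.g., employee benefits, overhead) by
    applying an approved indirect-cost rate to certain direct costs."""
    return direct_labor * rate

direct_costs = 3.2e6    # reported direct costs (labor, subcontractors, consultants)
indirect_costs = 2.9e6  # reported indirect costs
fixed_fees = 0.36e6     # reported fixed fees, paid on top of cost reimbursement

total = direct_costs + indirect_costs + fixed_fees
print(round(total / 1e6, 2))  # 6.46 -- the "approximately $6.5 million" reported
```

The direct-cost share (about $3.2 million of the roughly $6.5 million total) works out to the 49 percent figure cited in the report.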
NQF reviews its invoices and carries out other activities before submitting them to HHS in order to help ensure that its reported costs are proper. HHS requires its officials to follow certain procedures when reviewing these invoices. NQF officials told us their organization has several ways to help ensure that the contract costs it reports to HHS are proper. According to NQF officials, invoices are electronically generated using NQF’s accounting system and then reviewed before they are submitted to HHS for payment. These reviews are conducted by two senior staff—the NQF Project Director, who manages the contract, and the Chief Financial Officer. These officials meet to review costs reported in each month’s invoice. NQF officials told us that as part of their reviews, the two officials compare the current month’s invoice to the previous month’s invoice to identify discrepancies or cost trends that seem unusual, and that the officials investigate such discrepancies or trends when necessary. After this review, the Chief Financial Officer signs the invoice. During our review of NQF’s invoices for the first contract year, we found that the Chief Financial Officer signed the invoices as the officials described to us. In addition to the review of invoices, NQF officials described other ways the organization helps to ensure that the costs it reports to HHS are proper. In particular, NQF officials told us NQF uses an electronic timesheet system in order to track employee labor hours. NQF officials told us that the timesheet system allows NQF employees to track their labor hours by project and have their labor hours reviewed and approved by the appropriate NQF officials. In addition to the timesheet system, NQF officials told us that their organization established a written procurement policy in August 2009 and revised it in January 2010 to guide how they track other direct costs—specifically, payments to subcontractors and consultants—that are reported in NQF’s invoices.
NQF officials told us that under its procurement policy, NQF officials are to obtain the appropriate approval signatures for subcontractor and consultant invoices, as well as for other payments to subcontractors and consultants, once the services have been received. Furthermore, according to the policy, NQF officials are to document how key procurement decisions are made, such as the basis for setting an award cost or price for a subcontractor or consultant. Having a well-designed procurement policy can help reduce the risk of inappropriate payments or pricing related to subcontractors and consultants. During our review of NQF subcontractor and consultant files for the period prior to January 2010—before NQF revised its procurement policy—we found that NQF did not always document approvals for subcontractor payments and did not document that it had determined that its consultant pricing was reasonable. Like NQF, HHS relies on reviews of NQF’s invoices in order to help ensure that reported costs are proper. Two HHS officials assigned to oversee the NQF contract, the project officer and the contracting officer, are responsible for these reviews. When conducting their reviews, the two officials are required to follow certain procedures established in HHS policies. For example, under these policies, the project officer is required to review NQF’s invoices to determine whether billed services were actually provided and are supported with adequate documentation. Similarly, the contracting officer is required to review the invoices to determine whether NQF’s reported costs are consistent with its contract, are accurately calculated, and have adequate documentation. Both officials are required to document when they approve invoices for payment to NQF. When we reviewed HHS documentation and interviewed HHS officials during the course of our work, we found that the contracting officer and project officer had generally followed the review procedures required by HHS policy.
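The month-to-month invoice comparison that NQF officials described can be sketched as a simple variance check. The cost categories, dollar amounts, and 20 percent threshold below are hypothetical illustrations, not NQF's actual review criteria.

```python
# Hypothetical sketch of the month-over-month comparison NQF officials
# described: compare the current invoice to the prior one and flag cost
# categories with unusual swings for further investigation.

FLAG_THRESHOLD = 0.20  # hypothetical: investigate swings larger than 20%

def flag_discrepancies(prev_invoice, curr_invoice, threshold=FLAG_THRESHOLD):
    """Return cost categories whose change from the prior month's invoice
    exceeds the threshold, or that are new since the prior invoice."""
    flagged = []
    for category, curr_amount in curr_invoice.items():
        prev_amount = prev_invoice.get(category)
        if not prev_amount:
            flagged.append(category)  # new or previously zero line item
        elif abs(curr_amount - prev_amount) / prev_amount > threshold:
            flagged.append(category)
    return flagged

prev = {"direct labor": 180_000, "subcontractors": 250_000, "travel": 4_000}
curr = {"direct labor": 185_000, "subcontractors": 410_000, "travel": 4_200}
print(flag_discrepancies(prev, curr))  # ['subcontractors']
```

In this sketch, only the subcontractor line item is flagged, since its 64 percent jump exceeds the assumed threshold while the other categories changed little.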
Table 2 provides more detailed information on the procedures that the project and contracting officers are required to follow when reviewing NQF invoices. Table 2 also provides information we obtained from HHS officials on how they implemented these requirements. While NQF has begun work in the first year of its contract for the five duties related to quality measurement established by MIPPA, it is too early for us to assess whether, or to what extent, NQF will be successful in carrying out these duties. This report describes NQF’s work for the first of 4 contract years, and HHS has flexibility to determine on an annual basis the specific work it expects NQF to perform for each of the MIPPA duties. Therefore, it is not yet known exactly what work NQF will be expected to complete during the remainder of the contract period. In addition, other events related to quality measurement, such as the completion of HHS’s national strategy for quality improvement, are expected to occur before the end of the 4-year contract period and may have some influence on NQF’s specific work for the five MIPPA duties. Our second report will provide another opportunity to review NQF’s performance and costs. We provided drafts of this report to HHS and NQF for comment. Both HHS and NQF provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at [email protected]. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix II. The National Quality Forum (NQF) established its endorsement process in 2000. 
NQF’s process includes the nine steps described in table 3 below. The table also provides information on the endorsement process as applied to a project to endorse a number of measures related to home health care, such as measures on medication education provided to patients and caregivers and on increases in the number of pressure ulcers. This project was initiated prior to the NQF contract with the Department of Health and Human Services (HHS) that was required by the Medicare Improvements for Patients and Providers Act of 2008. NQF announced a call for nominations for steering committee members for this project in August 2008 and the final set of 20 endorsed measures was announced on March 31, 2009. In addition to the contact named above, Will Simerl, Assistant Director; La Sherri Bush; Helen Desaulniers; Krister Friday; Natalie Herzog; Carla Lewis; Lisa Motley; Ruth S. Walk; Rasanjali Wickrema; and William T. Woods made key contributions to this report.
The Medicare Improvements for Patients and Providers Act of 2008 (MIPPA) directed the Department of Health and Human Services (HHS) to enter into a 4-year contract with an entity to perform five duties related to health care quality measurement and authorized $40 million from the Medicare Trust Funds for the contract. In January 2009, HHS awarded a contract to the National Quality Forum (NQF), under which HHS will reimburse NQF for its costs and pay additional fixed fees. Established in 1999, NQF is a nonprofit member organization that fosters agreement on national standards for measuring and public reporting of health care performance data. This is the first of two reports MIPPA requires GAO to submit on NQF's contract with HHS. In this report, which covers the first contract year--January 14, 2009, to January 13, 2010--GAO describes (1) the status of NQF's work on the five duties under MIPPA; (2) the costs and fixed fees NQF has reported; and (3) what NQF and HHS do in order to help ensure that NQF's reported costs are proper. GAO reviewed relevant MIPPA provisions and reviewed HHS and NQF documents, such as HHS's contract with NQF, monthly progress reports and invoices for the first contract year, and policies and other documents that describe how HHS and NQF review invoices. GAO also interviewed NQF and HHS officials responsible for implementing and overseeing the contract. NQF has begun work for each of five duties required by MIPPA related to quality measures: (1) make recommendations on a national strategy and priorities; (2) endorse quality measures, which involves a process for determining which ones should be recognized as national standards; (3) maintain--that is, update or retire--endorsed quality measures; (4) promote electronic health records; and (5) report annually to Congress and the Secretary of HHS. 
As of January 13, 2010--the end of the first contract year--NQF's work for four MIPPA duties was in progress and it had completed its first annual report for the fifth duty. For example, NQF had begun the duties related to endorsement and maintenance by initiating the endorsement process for three projects HHS selected and by starting maintenance reviews for a set of measures of interest to or used by HHS. While NQF began work for each of the duties in the first contract year, HHS determines on an annual basis the work NQF will be expected to perform under the five duties each contract year. NQF reported costs and fixed fees totaling approximately $6.5 million for the first contract year, including direct and indirect costs as well as fixed fees. Specifically, NQF reported about $3.2 million in direct costs, or 49 percent of the total. These were costs specifically incurred for the NQF contract, such as direct labor for NQF employees. NQF also reported about $2.9 million in indirect costs, which cover additional items such as employee benefits and overhead. Finally, NQF reported about $360,000 in fixed fees for the first contract year. Over $5 million of the reported costs and fixed fees were incurred in the second half of the contract year. NQF and HHS rely on reviews of NQF invoices in order to help ensure that NQF's reported costs are proper. At NQF, officials told us that they review the invoices prior to submitting them to HHS and carry out other activities, such as using an electronic system to track labor hours, in order to help ensure that the costs they report in the invoices are proper. Like NQF, HHS relies on reviews of NQF invoices in order to help ensure NQF's reported costs are proper. These reviews are governed by HHS policies and procedures and by requirements applicable to federal contracts generally. 
While NQF has begun work under the MIPPA contract, it is too early for GAO to assess whether, or to what extent, NQF will be successful in carrying out the five MIPPA duties. This report describes NQF's work for the first of 4 contract years. In the remaining 3 years of the contract, HHS will determine on an annual basis specific work for NQF to complete under each of the five MIPPA duties. Therefore, it is not yet known exactly what work NQF will be expected to complete during the remainder of the contract period. GAO's second report, which is due in January 2012, will provide another opportunity to review NQF's performance and costs. HHS and NQF reviewed a draft of this report and provided technical comments, which GAO incorporated as appropriate.
Historically, several DOE offices—including Defense Programs and the Office of Energy Research, as well as EM—have funded projects to develop innovative technologies for cleaning up nuclear waste. Within EM, innovative technology projects have been funded by OST, the Office of Waste Management, the Office of Environmental Restoration, and the Office of Nuclear Material and Facility Stabilization. In August 1994, we reported that insufficient coordination and integration of technology development activities across EM’s program offices, and between headquarters and the field, had limited the use of innovative cleanup technologies. In response to our concerns and the concerns of others, in January 1994, EM restructured its technology development program around five high-priority problems, or “focus areas”: radioactive tank waste remediation (Tanks); characterization, treatment, and disposal of mixed waste (Mixed Waste); containment and remediation of contaminant plumes (Plumes); stabilization of landfills (Landfill Stabilization); and decontamination and decommissioning (D&D). Within each focus area, the restructuring created teams of technology developers, users, and other stakeholders, including members from both headquarters and the field, to increase the likelihood that new technologies would be used to clean up the contamination at DOE’s sites. In addition, EM made OST responsible for centrally managing technology development to ensure the coordination of activities and the elimination of unnecessary duplication across all of EM’s program offices. Within the research and development community, experts agree that some duplication in projects is useful to provide the competition that results in the best science. However, EM officials and peer review experts we spoke with generally agreed that several projects competing in a specific area of technology would be sufficient. 
Our August 1994 report said that although OST’s mission was to manage EM’s nationwide technology development program, other program offices within EM conducted their own projects, which often overlapped and conflicted with OST’s activities. We also found that DOE did not have a comprehensive needs assessment for ranking and funding technology development projects as effectively as possible. Although EM originally established the focus area approach to coordinate technology development activities across its program offices, we found that only OST was evaluating the projects that it funded to identify areas of possible overlap and excessive duplication. EM directed its other program offices to support the focus area approach by appointing “user” representatives to serve on focus area management teams, but some of these offices did not inventory their projects, and their projects did not receive the same level of scrutiny as OST’s. As a result, no comprehensive list of EM’s technology development projects had been compiled. We were able to determine that, apart from OST, only the Office of Waste Management funded technology development at field sites during fiscal years 1995 and 1996. We were unable to verify the extent of the possible overlap and duplication between the two offices, since no comprehensive list of the Office of Waste Management’s projects was available. Partial lists had, however, been prepared for the Mixed Waste and Tanks focus areas. The Office of Waste Management had not previously required its sites to describe their technology development projects because it viewed technology development as an integral part of the sites’ waste management activities. However, the office plans to begin collecting this information in support of its fiscal year 1998 work plan.
In a preliminary review of projects funded by OST and the Office of Waste Management, we found that these offices had funded a large number of melter projects and that several projects had received funds from other DOE program offices as well. At our request, OST compiled a comprehensive list of all DOE-funded melter projects. This list revealed that DOE had contributed funds for 60 different melters at various sites across the country and fully funded 52 of them. According to a DOE official, a melter costs between $15 million and $30 million to develop fully. OST’s list indicated that most of the funding for these melters came from Energy Research and certain EM program offices but some also came from Defense Programs. OST has no summary information on the total amount of funding dedicated to melter projects; however, in 1996, EM funded melter projects totaling more than $40 million. In November 1995, concerned about possible overlap and duplication, the managers from the Mixed Waste and Landfill Stabilization focus areas convened a group of experts in melter technology from outside the agency to determine whether the number of melter projects should be reduced. The experts concluded that although some duplication is useful, DOE was sponsoring far more melter projects than were needed. The experts characterized DOE’s technology development effort as “a proliferation of melter systems” and recommended that the Department reduce the number of melter projects significantly because many of the technologies, such as joule-heated melters, are already available in the commercial sector. The experts noted that when enough vendors are available to bid competitively on cleaning up a site using a particular type of technology, DOE should say “enough is enough” and cease to support the research and development of that technology. When EM first conceived the focus area approach, OST was responsible for managing the technology development program centrally at headquarters. 
However, as this approach evolved, EM shifted the program’s leadership to the field as part of a Department-wide effort to decentralize. Between July 1994 and February 1995, EM delegated the leadership for the five focus areas to the following locations: Tanks: Richland, Washington; Mixed Waste: Idaho Falls, Idaho; Plumes: Savannah River, South Carolina; Landfill Stabilization: Savannah River, South Carolina; and D&D: Morgantown, West Virginia. OST chose three of the lead sites through a competitive process, considering each site’s experience in an area and the strength of the management team described in the site’s proposal. Thus, OST chose Richland for Tanks, Idaho Falls for Mixed Waste, and Savannah River for Plumes. Subsequently, OST selected Savannah River, without competition, to lead the Landfill Stabilization focus area. Because the Landfill Stabilization and Plumes focus areas are interrelated, OST did not consider competition necessary. Finally, OST chose Morgantown to lead the D&D focus area because its staff had expertise in contracting—an important consideration, since many D&D technologies are available in the private sector. OST gave the lead sites the responsibility for managing the nationwide program for their respective focus areas. Their responsibilities included (1) making nationwide funding decisions among potential technology development projects and (2) ensuring that the needs of customers across all DOE sites and EM offices, as well as various stakeholder groups nationwide, were met. However, OST provided the lead sites with no specific guidelines for selecting projects. We found that by delegating the lead responsibility for the focus areas to field locations and by not providing any guidelines for selecting projects, EM created an organizational structure that allows certain lead sites to favor their own projects. Within each focus area, the funding for projects has begun to be concentrated at the lead sites.
For fiscal year 1996, each lead site received more dollars for projects in its focus area than it had received for fiscal year 1995, before the restructuring (see table 1). The concentration of funding at certain lead sites may, in part, reflect an extended history of work in a particular area, yet in some instances it also represents a dramatic shift in funding away from the nonlead sites. At Idaho Falls, for example, the increase in funding for Mixed Waste projects evolved from this lead site’s long-term work on buried waste. At Savannah River, however, the increase in funding for Landfill Stabilization projects—from 8 percent in 1995 to 27 percent in 1996—may have occurred, to some extent, because management wanted to secure support for researchers at the lead site. According to researchers and field representatives at Savannah River, one reason for the increase in funding at Savannah River was to provide support for researchers on-site whose work had previously been funded through DOE’s Defense Programs office. Meanwhile, the percentage of funding for Landfill Stabilization projects at Idaho Falls, for example, dropped from 46 percent in 1995 to 20 percent in 1996. Such shifts in workload have led to expressions of concern by nonlead sites that their proposals are not being treated fairly because their focus area’s management has a vested interest in selecting proposals submitted from the lead site. To ensure that proposals are selected fairly on the basis of their scientific merits, the National Academy of Sciences’ National Research Council recommends that agencies use some form of peer review to judge the quality of proposals. 
The Council defines “peer reviewers” as established working scientists or engineers from diverse institutions who are deeply knowledgeable about a field of study and who provide disinterested technical judgments as to the scientific significance of a proposed work, the competence of the researchers, the soundness of the research plan, and the likelihood of success. We found, however, that although the lead sites used significantly different systems to select projects, none of them used disinterested reviewers to determine the technical merit of the proposed work. For example, in the Plumes focus area, the members of Savannah River’s lead team decided which projects should receive funding; no peer reviewers evaluated the proposals’ technical merit. Although the Landfill Stabilization and Mixed Waste focus areas did use peer reviewers, most were associated with the local leadership team and, therefore, were not independent. The Tanks focus area used an elaborate system of technical review, but many of the reviewers were not independent. Finally, the D&D focus area did not use peer reviewers for fiscal year 1996 because the large demonstration projects upon which the fiscal year 1996 D&D program is based were competitively selected. During a 1995 review of EM’s technology development program, the National Research Council noted that EM’s process for selecting projects should incorporate a review of proposals by a knowledgeable independent review group comprising individuals from outside the agency with no vested interests in the outcome. According to the Council, this independent peer review system should (1) exclude those reviewers who might be considered to have a conflict of interest and (2) be carefully implemented to ensure equity. Starting in December 1995, OST began taking actions independently to improve the technology development program’s management within its own office and within EM as a whole. 
To eliminate duplication and overlap and to promote coordination across EM’s programs, OST developed a strategy in February 1996 that will coordinate and rank technology development projects funded by EM’s various program offices. To eliminate overlap among focus areas within its own office, OST scheduled a comprehensive review of all ongoing work in each focus area to clarify which projects each focus area should be funding. OST’s review is scheduled to be completed by the end of June 1996. In addition, in February 1996, OST combined the Plumes and Landfill Stabilization focus areas into the Subsurface Contamination focus area. Responding to the recommendations of the OST-sponsored melter review panel, the focus areas began to close out melter projects in December 1995, and in April 1996, the Deputy Assistant Secretary of OST told us that OST had decided to stop funding melter projects because most melter technologies are now available commercially. To help ensure that the funding for projects is not being concentrated at the focus areas’ lead sites unless warranted by the projects’ technical merits, senior OST officials told us that they plan to direct the focus areas’ managers to use independent peer reviewers in selecting projects. OST indicated that this system will be in place for the fiscal year 1997 selection process. Reviewers are to be “external, independent, and technically qualified” to determine the technical and scientific merits of specific projects and to ensure that projects are selected on the basis of their merits without regard to the location of the work. We provided a copy of our report to DOE for its review and comment. The offices of Science and Technology, Environmental Restoration, and Nuclear Material and Facility Stabilization did not provide comments. 
A senior technical adviser in the office of the Deputy Assistant Secretary for Waste Management commented on our statement that, despite the promising steps taken to improve the management of technology development, it is not clear that OST can effectively coordinate technology development across EM’s program offices without EM’s leadership and support. According to the Office of Waste Management, EM has given OST leadership and support to coordinate technology development. Specifically, the Office of Waste Management cited the former Assistant Secretary for Environmental Management’s strategic goals, and the newly confirmed Assistant Secretary for Environmental Management’s guiding principles, for focusing EM’s technology development efforts. While we agree that such goals and principles are important as guides to DOE’s technology development efforts, we note that they do not provide specific direction for eliminating duplication and promoting coordination across EM’s programs. Accordingly, we have not changed this portion of our report. We conducted our review from May 1995 through June 1996 in accordance with generally accepted government auditing standards. Appendix I provides a detailed discussion of our scope and methodology. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days from the date of this letter. At that time, we will send copies to the appropriate congressional committees; the Secretary of Energy; and the Director, Office of Management and Budget. We will also make copies available to others upon request. Please call me at (202) 512-3841 if you or your staff have any questions about the information provided in this report. Major contributors to this report are listed in appendix II. 
At the request of the Ranking Minority Member, Senate Committee on Governmental Affairs, we examined the Office of Environmental Management’s (EM) current management practices in selecting innovative projects for funding. Specifically, we determined whether EM is managing its program to prevent (1) excessive duplication and unnecessary overlap and (2) an unwarranted concentration of projects at certain field offices. To determine whether excessive duplication and unnecessary overlap existed within EM’s program, we obtained the opinions of experts on duplication in research projects. Specifically, we attended the 3-day melter review panel, which was sponsored by EM’s Mixed Waste and Landfill Stabilization focus areas in November 1995, and we spoke with other researchers who have served as peer reviewers for the National Academy of Sciences. We requested descriptions of all technology development projects from each EM program office for fiscal years 1995 and 1996. After determining that the Office of Environmental Restoration and the Office of Nuclear Material and Facility Stabilization were not currently developing technology, we limited our review to information received on projects funded by the Office of Science and Technology’s (OST) focus areas and the Office of Waste Management. To determine whether there was an unwarranted concentration of projects at certain field sites, we compared the distribution of projects among sites for fiscal years 1995 and 1996. We also reviewed the process each focus area used to select projects for funding, after the focus areas’ leadership was moved to the field. In the course of our work, we interviewed the Deputy Assistant Secretaries of Environmental Restoration, Waste Management, Nuclear Material and Facility Stabilization, and Science and Technology, representing each of the EM program offices that have historically funded technology development activities. We also interviewed the leaders of each of the five focus areas. 
In addition, we attended several of the Technology Development Council’s meetings, as well as the February 1996 meeting of the Focus Area Board of Directors, which OST convened to address the concerns we noted during our review. We obtained and reviewed pertinent documents, including copies of the proposals received by each of the focus areas for fiscal year 1996, as well as descriptions of the projects funded in fiscal year 1996. We performed our review from May 1995 through June 1996 in accordance with generally accepted government auditing standards. Bernice Steinhardt, Associate Director Duane Fitzgerald, Assistant Director Ruth-Ann Hijazi, Evaluator-in-Charge Margie K. Shields, Adviser Karen D. Wright, Evaluator
Pursuant to a congressional request, GAO examined how the Department of Energy's (DOE) Office of Environmental Management (EM) is managing its technology development program. GAO found that: (1) EM has not coordinated its technology development activities among its program offices; (2) there is no comprehensive listing of EM technology development projects; (3) several DOE offices have funded 60 different melter projects at various locations; (4) there is a significant increase in technology development projects at certain field sites designated as lead sites for particular focus areas; (5) DOE does not use independent reviewers to ensure that project proposals receive equal treatment; (6) DOE has scheduled a comprehensive review of all technology development projects, combined two focus areas into one, and begun closing out melter projects to reduce duplication and overlap; and (7) DOE cannot coordinate technology development projects without EM leadership and support.
In 1935, the Rural Electrification Administration was created by executive order to make loans to electrify rural America. RUS was established by the Federal Crop Insurance Reform and Department of Agriculture Reorganization Act of 1994 to replace this agency and now administers the electricity program. It is located in USDA’s Rural Development mission area. RUS’ loans for electricity purposes are made primarily to nonprofit cooperatives. Cooperatives are organizations owned by their customers and operated for the benefit of those using their services. The customers elect boards of directors responsible for policy and operations. Most RUS-financed utility systems have a two-tiered structure covering electricity distribution and power supply. Retail customers are members of the distribution cooperative that provides electricity directly to their homes and businesses. Most distribution cooperatives, in turn, are members of power supply cooperatives, which generate and transmit electricity to their members. Currently, RUS makes three types of direct loans for electricity purposes. These direct loans are (1) hardship rate loans with a 5 percent interest rate made to borrowers that have a relatively high cost of providing service, as indicated by a high average revenue per kilowatt-hour sold, and that serve customers with below-average income, or at the discretion of RUS’ Administrator; (2) municipal rate loans with an interest rate tied to an index of municipal borrowing rates, resulting in interest rates ranging from 1.1 percent to 4.6 percent during the first quarter of calendar year 2004; and (3) Treasury rate loans with an interest rate matching the government’s cost of money, which ranged from 1.2 percent to 4.4 percent in mid-March 2004. 
In addition to making direct loans, RUS places a USDA 100 percent repayment guarantee on loans made by the Treasury’s Federal Financing Bank, which makes loans at an interest rate equal to the Treasury’s cost of money plus one-eighth of 1 percent, as well as on loans made by CFC and by CoBank—a member bank of the Farm Credit System, which is a government-sponsored enterprise. Most borrowers seeking a loan guaranteed by RUS choose to have the loan made by the Federal Financing Bank because of lower interest rates than those available from the other lenders. The outstanding principal owed by borrowers with RUS direct and guaranteed loans totaled $28.3 billion as of September 30, 2003: $9.5 billion in direct loans, $15.3 billion in guaranteed loans made by the Federal Financing Bank, $0.4 billion in guaranteed loans made by CFC, $0.2 billion in guaranteed loans made by CoBank, and $2.9 billion in restructured loans. During fiscal years 1999 through 2003, RUS made or provided guarantees on 936 electricity loans, which totaled more than $14.3 billion. Table 1 shows the level of loans for each type of electricity loan. As we have reported in the past, RUS has had problems with some borrowers. During fiscal years 1999 through 2003, RUS wrote off more than $3.2 billion on loans to three borrowers—$3 billion and $73.2 million for two borrowers under bankruptcy liquidation and $159.3 million for a borrower with unsecured debt. Also, it wrote off $7.2 million for another borrower that had been restructured. The Rural Business-Cooperative Service, which, like RUS, is in USDA’s Rural Development mission area, operates loan and grant programs that are intended to assist in the business development of the nation’s rural areas and the employment of rural residents. Among these programs is the rural economic development program, which is authorized by section 313 of the RE Act, 7 U.S.C. § 940c. 
Under the program, the Rural Business-Cooperative Service makes direct loans to entities that have outstanding RUS electricity or telecommunications loans or to former RUS borrowers that repaid their electricity loans early at a discount. Rural economic development loans are not available to former RUS borrowers that repaid their loans with scheduled payments. All rural economic development loans are made for relending, and the loan funds are targeted to specific projects. Rural economic development loan funds are deposited into a fund that a RUS borrower has established, and the RUS borrower then relends the money to other borrowers, which may be any public or private organization or other legal entity, for an economic development and job creation project. Such projects include new business creation, existing business expansion, community improvements, and infrastructure development. Rural economic development loan funds, however, cannot be used for certain purposes, including the RUS borrowers’ electricity or telecommunications operations or a community’s television system or facility, unless tied to an educational or medical project. The Rural Business-Cooperative Service also provides rural economic development grants to RUS utility borrowers to establish revolving loan funds to promote economic development in rural areas. The revolving loan funds provide capital to nonprofit entities and municipal organizations to finance community facilities that promote job creation or education and training to enhance a marketable job skill or that extend or improve medical care. An unusual source of funding is available for rural economic development loans and grants. The RE Act provides that RUS’ electricity and telecommunications borrowers can make advance payments on their RUS loans, referred to as “cushion-of-credit” payments, and earn interest at a rate of 5 percent on the advance payments. 
The Rural Business-Cooperative Service is allowed to use the differential between the earnings on these advance payments and the 5 percent interest to cover the subsidy costs of rural economic development loans and the cost of rural economic development grants. During fiscal years 1999 through 2003, the Rural Business-Cooperative Service made 233 rural economic development loans, which totaled $82.5 million. Also, 117 rural economic development grants were made, which totaled $24.6 million. On average, the loan amounts were about $354,000 and the grant amounts were about $211,000. The outstanding principal owed by borrowers with rural economic development loans totaled $155.2 million as of September 30, 2003. Although the RE Act requires that borrowers serve rural areas, RUS borrowers serve not only rural areas but also highly populated metropolitan areas. This situation results from RUS applying its “once a borrower, always a borrower” standard, which allows borrowers to continuously receive RUS assistance regardless of the extent of population increases within their service territories. Since the electricity program began in the 1930s, substantial population growth has occurred in the areas served by many RUS borrowers. We analyzed the areas served by the 530 distribution borrowers that received RUS loans or guarantees on loans between October 1, 1998, and September 30, 2003. These borrowers serve customers in part or all of 1,988 counties in 46 states, and they received 864 RUS loans or guarantees on loans during this period valued at almost $9 billion. Overall, RUS distribution borrowers provide service in more than half the counties in the country that are classified as metropolitan. In general, these metropolitan areas contain a substantial core population, together with adjacent communities having a high degree of social and economic integration with the core. 
About 29 percent, or 581 of the 1,988 counties served partly or completely by RUS borrowers, are in metropolitan areas; and, in fact, 9.4 percent, or 187 of the 1,988 counties, are in metropolitan areas with populations of 1 million or more. The following examples illustrate cooperatives whose service territories include highly populated areas. Three cooperatives that received loans during the fiscal year 1999 through 2003 period provide electricity in the immediate vicinity of Atlanta, Georgia. These three borrowers received a total of more than $400 million of loans during this period. A Maryland electric distribution cooperative that serves approximately 115,000 residential customers in four counties in the vicinity of Washington, D.C., received over $25 million in loans in fiscal years 1999 and 2001. Three of these counties are in a metropolitan area with a population of more than 1 million people. A Florida cooperative that serves roughly 150,000 customers in parts of five counties that are located to the north of Tampa received RUS loans in fiscal years 2000 and 2002 totaling $66 million. Four of these counties have a population of more than 100,000 residents, including two with a population of more than 300,000. On the other hand, about 24 percent, or 485 of the 1,988 counties served by RUS borrowers, are completely rural or have only nominal urban populations. The remaining counties are in nonmetropolitan areas, but with urban populations of 2,500 or more. Table 2 shows the classifications of the counties being served partly or completely through RUS electricity loans. RUS officials pointed out that many metropolitan areas contain rural sections. In addition, they agreed that RUS borrowers now provide service to a mix of areas including rural areas and heavily populated areas, and that many RUS borrowers would not meet the population criterion of the RE Act if it were applied. 
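For reference, the county shares cited above follow directly from the reported counts. The short sketch below is illustrative only; the counts are the report’s figures, and the classifications are those of USDA’s Economic Research Service:

```python
# County counts served by the 530 RUS distribution borrowers, as reported.
total_counties = 1988
county_groups = {
    "metropolitan": 581,                       # counties in metropolitan areas
    "metro areas of 1 million or more": 187,   # subset in the largest metro areas
    "completely rural or nominal urban": 485,  # little or no urban population
}

for label, count in county_groups.items():
    share = 100 * count / total_counties
    print(f"{label}: {share:.1f} percent")
```

Running the sketch reproduces the approximately 29 percent, 9.4 percent, and 24 percent shares discussed in the text.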
RUS officials also told us that they had drafted legislation consistent with the President’s fiscal year 2005 budget, which would require borrowers to recertify that they are serving rural areas, rather than urban or suburban areas. RUS estimated that guarantees on lenders’ debt under the 2002 Farm Bill provision could result in losses of up to $1.5 billion on guarantees of $3 billion, although RUS does not expect such losses. RUS officials believe that while risks are involved, losses are unlikely given the past stability of both the electricity market and the lender that might receive the guarantees. In return for taxpayers assuming the risks of guaranteeing payment on $3 billion of debt, we estimated that the fees paid on the guarantees would only fund $15 million in rural economic development loans and grants annually. The one cooperative lender that is currently qualified and interested in obtaining a guarantee on its debt generally has had a favorable financial history going back over 30 years. However, the lender faces risk associated with the electricity and telecommunications markets. Recognizing risks to taxpayers, RUS proposed to add certain risk mitigation requirements, but the lender commented that these requirements would make the guarantees unworkable. Financial Losses Estimated by RUS. Under the debt guarantee program, taxpayers would be at risk for the value of guaranteed debts. RUS estimated this value at $3 billion in an economic analysis of the program. The estimate was based on the act’s specification that the full guarantee level is the amount of principal owed on loans that eligible lenders had made concurrently with RUS’ electricity and telecommunications loans. Although taxpayers would be at risk for the full amount, RUS estimated that in the event of a default, likely maximum losses could be as much as $1.5 billion. This maximum is based on the expectation that the government could recover at least one-half of defaulted amounts. 
The $3 billion amount is approximately the amount of concurrent loans that RUS has made in conjunction with CFC, the only lender currently qualified and interested in participating in the program. RUS identified CoBank as the only other lender that would be eligible for the guarantees. However, CoBank is part of a government-sponsored enterprise, and CoBank does not need the guarantees and does not plan to participate in the program, according to CoBank officials. Although RUS does not believe CFC will default on the guaranteed bonds or notes, there would be a subsidy cost, according to Congressional Budget Office and RUS officials. However, RUS has not completed a subsidy cost estimate for the program. In addition, RUS’ economic analysis did not discuss CFC’s financial history, its current condition, or the risks in the electricity and telecommunication markets in which CFC operates. Fees on Guarantees Could Provide about $15 Million Annually in Rural Economic Development Loans and Grants. We estimated that in return for the risk to taxpayers, fees on the guarantees could provide about $15 million annually of additional loans and grants through the Rural Business-Cooperative Service’s rural economic development program. Our calculation is based on the $3 billion guarantee level RUS identified, the details provided in the act about the annual fees that would be paid by a lender receiving a guarantee, and the use of the funds generated by the fee. The act provides that a lender receiving a RUS guarantee would pay an annual fee of 30 basis points (three-tenths of 1 percent) based on the amount of unpaid principal on the bonds or notes that are guaranteed, and that at least two-thirds of the funds collected are to be used for rural economic development loans and grants. The other one-third can be used for the cost associated with providing guarantees. 
On the basis of a $3 billion guarantee level, 30 basis points would yield fees of $9 million, of which $6 million would be available for rural economic development loans and grants. Of this, we assumed that $4 million would be used for additional grants, which is equal to the amount of grants in the Rural Business-Cooperative Service’s fiscal year 2004 budget. We assumed the remaining $2 million would be used to subsidize additional loans. Based on the fiscal year 2004 subsidy rate for the rural economic development program of 18.6 percent, a $2 million level would provide about $11 million in additional rural economic development loans. Lender That Likely Would Receive Guarantees Has Successful History but Faces Some Risks. CFC has had a solid operating record for over 30 years and a high credit rating, and CFC officials said that CFC does not require federal guarantees on debt to raise capital for lending. While recognizing CFC’s financial strength, in February 2004, its president noted that CFC had possibly faced some of the most difficult times in its history. In early 2002, CFC’s long-term debt ratings had been downgraded by three credit rating services (Moody’s Investors Service, Fitch Ratings, and Standard & Poor’s) and the services also rated CFC as having a negative outlook. Subsequently, these services raised CFC’s outlook to stable because CFC had taken various positive actions, including restructuring $1 billion of loans for its largest borrower, which was emerging from bankruptcy; reducing its exposure to speculative-grade telecommunications loans; reducing its reliance on short-term debt; and increasing its loan loss reserves to $565 million. Even as the rating services raised CFC’s outlook, they cautioned about certain risks. For example, one rating service stated that half of CFC’s 10 largest borrowers exhibit speculative-grade characteristics. Each also expressed concern about CFC’s concentration in the electricity and telecommunications markets. 
One service cited the probability that natural gas prices will be volatile, and another stated that cooperatives operating in service territories adjacent to lower-cost systems might eventually be forced to compete. While most cooperatives have avoided competition, CFC has a fund to help defend its member cooperatives against territorial threats. In addition, one rating service stated that competition from wireless carriers is a longer-term threat to rural telecommunications systems. CFC officials recognized that there are business risks in CFC’s loan portfolio that they continually address but said they believe the risks of the loan guarantee program are very low given CFC’s stable financial history, its access to capital markets, the restriction preventing lenders from using the proceeds of their guaranteed debt to fund electricity generation, and the relatively small portion of CFC’s overall loan portfolio that the guarantee would cover. In its proposed regulations, RUS included certain risk mitigation measures, including requirements for a bankruptcy trust, pledges of collateral, a 5 percent limit on cash patronage refunds, and the use of certain standards that apply to depository financial institutions. CFC, the National Rural Electric Cooperative Association, and others commented that these proposals would make the program unworkable, and that the only requirement authorized by the act is that the securities of the lender receiving a guarantee be investment grade. Also, CFC stated that the ratings of the nationally recognized financial rating services should be sufficient to assure its credit quality, and that if its financial rating were downgraded below investment grade, that event could reasonably trigger RUS to partially limit CFC’s distribution of patronage capital. As of mid-June 2004, RUS officials said they were awaiting the completion of the Office of Management and Budget’s review of the proposed final regulations for the program. 
We developed an alternative approach that could provide funding for rural economic development loans and grants without added risk to taxpayers. Specifically, if Congress amended the RE Act to provide for RUS to charge a loan-origination fee on its direct and guaranteed electricity and telecommunications loans, and repealed the new lender debt guarantee requirement, the resulting funds from the loan-origination fee could be targeted to the rural economic development loan and grant program. Doing so would accomplish the stated purpose of the debt guarantee program—that is, to provide an alternative funding source for the Rural Business-Cooperative Service’s rural economic development loans and grants—while avoiding additional risk to taxpayers. RUS’ fiscal year 2005 budget request is for slightly more than $3.1 billion in electricity and telecommunications loans, and the Rural Business-Cooperative Service’s request is for $25 million in rural economic development loans and $4 million in grants. At RUS’ 2005 lending level, if RUS started charging a loan-origination fee of 25 basis points (one-fourth of 1 percent), the fee could result in an additional $7.8 million in funds to support rural economic development loans and grants, which, we estimate, could amount to an additional $20.4 million in loans and $4 million in grants. In effect, such an increase would be more than an 80 percent increase in the level of rural economic development loans and a doubling in the level of rural economic development grants that the Rural Business-Cooperative Service proposes making in fiscal year 2005, recognizing that different electricity and telecommunications loan levels would result in varying amounts of funds. The appropriate fee level would, in part, be based on amounts that are needed to fund rural economic development loans and grants. 
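The arithmetic behind this alternative follows the same pattern as the guarantee-fee estimate. A minimal illustrative sketch using the report’s figures (the 18.6 percent rate is the fiscal year 2004 subsidy rate for the rural economic development program):

```python
# Illustrative sketch of the alternative loan-origination-fee arithmetic.
fee_funds = 7_800_000    # 25 basis points on slightly more than $3.1 billion in loans
grants = 4_000_000       # would double the FY 2005 grant request
subsidy_rate = 0.186     # FY 2004 rural economic development subsidy rate

# The remaining fee funds subsidize additional loans at the stated subsidy rate.
additional_loans = (fee_funds - grants) / subsidy_rate   # about $20.4 million
print(f"additional loans supported: ${additional_loans:,.0f}")
```

The roughly $20.4 million result is more than 80 percent of the $25 million loan level requested for fiscal year 2005, matching the comparison in the text.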
Since enactment of the 2002 Farm Bill, millions of dollars have become available for this purpose through interest earnings on the cushion-of-credit payments on loans by RUS electricity and telecommunications borrowers. While the Rural Business-Cooperative Service had $6.9 million at the start of fiscal year 2002 to cover the subsidy costs of rural economic development loans and the cost of grants, by the start of fiscal year 2004, the amount had increased to $40.2 million—roughly six times the estimated cost for the program in fiscal year 2004. The impact of a loan-origination fee would likely be relatively minor for many of the customers of the distribution borrowers that receive RUS’ loans. For example, during fiscal years 1999 through 2003, RUS made or guaranteed 864 electricity loans to distribution borrowers; 264 borrowers received one loan and 266 borrowers received more than one loan over this period. If a 25 basis point fee had been charged on these distribution loans and fully passed on to the borrowers’ customers, we estimate that the average one-time cost for the customers would have been approximately $2.39. Such a fee would be consistent with the fees charged on some other USDA loans. In comparison, USDA charges a loan-origination fee of 2 percent on guaranteed business and industry loans, 1 percent on guaranteed water and waste disposal loans, and 1 percent on most guaranteed farm ownership and operating loans. Although this alternative does not provide for guarantees on CFC’s debt, CFC’s access to capital for financing projects would not be jeopardized. CFC’s history and financial reports show that CFC is capable of raising the capital required for financing projects. In CFC’s 2003 annual report, CFC reported that it had about $21 billion in loans outstanding, including $16.4 billion in electricity loans and $4.9 billion in telecommunications loans. 
CFC stated that despite significant short-term financing risk in energy trading and power marketing, it has continued to be successful in securing long-term sources of capital. For example, CFC reported that it had sold bonds in Australia, which demonstrates its ability to raise capital in major money centers of the world. CFC also reported that just after the 2003 fiscal year ended, it had access to $3.9 billion through revolving credit lines. We discussed the guarantee provision and our alternative option with RUS officials. RUS’ Administrator and officials commented that they had originally viewed the provision to guarantee lenders’ debt as unnecessary because appropriations could be made available for funding the rural economic development program. Nevertheless, they stated that they are now engaged in implementing the guarantee provision. They agreed that the alternative option we raised is consistent with the loan-origination fees USDA places on some other loans, would be a feasible way to fund rural economic development loans and grants, and would likely have a very small effect on the customers of borrowers that receive RUS loans. The rural electricity program is no longer operated in a manner fully consistent with the concept of service to rural areas. RUS policies allowing loans and guarantees to be provided to borrowers whose customer base has grown significantly and that provide service in urban metropolitan areas go beyond the original intent of the program. Consequently, the program’s focus on service to rural residents has been blurred, and the federal goals now being served by the program are not fully transparent. Better targeting of loans to borrowers that provide service in rural areas would result in more consistent use of RUS’ funds and reduce the government’s lending costs. 
Such targeting could be accomplished by recognizing that there have been population increases in previously rural areas and applying a population criterion to both initial and subsequent loans, thereby ensuring that lending remains focused on rural areas. We are also concerned about the proposed guarantee of lenders’ debt because it would unnecessarily increase taxpayer risk. Guarantees on lenders’ debt are not needed to raise capital for lending to electricity service providers. In addition, the stated purpose of the debt guarantee program—raising funds for rural economic development loans and grants—could be accomplished through a no-risk alternative that we have identified. We are presenting three matters for congressional consideration. To better target RUS’ lending to borrowers serving rural areas, Congress may wish to consider specifying that the program criterion for rural areas applies to both an initial loan and any subsequent loans that borrowers seek. In addition, to provide additional funds for rural economic development loans and grants without risk to taxpayers, Congress may wish to consider amending the RE Act to authorize a small loan-origination fee on RUS’ electricity and telecommunication loans and direct that fees collected on such loans be used for rural economic development loans and grants, and simultaneously repeal the new lender debt guarantee requirement. We provided a draft of this report to USDA for review and comment. We received written comments from the Acting Under Secretary for Rural Development, which are presented in appendix III together with our detailed responses. USDA did not express agreement or disagreement with our matters for congressional consideration. USDA commented that the report challenges the long-established RUS practice of determining the rural or nonrural nature of areas at the time RUS made the loan for initial service, but not doing so for subsequent loans. 
In this regard, USDA also said that this issue has been raised with Congress many times before, and that while Congress has revised the RE Act, it has not accepted previous recommendations from us and others to address RUS’ lending practices. However, USDA pointed out that the President’s budget recommends that the rural status of borrowers be recertified. According to USDA’s budget summary for fiscal year 2005, RUS’ borrowers would be asked to recertify that they are serving areas that are rural, rather than urban or suburban areas. In addition, RUS officials told us that they drafted legislation along these lines, but that USDA has not sent this proposal to Congress for consideration. Given this apparent recognition by USDA of the need to address RUS’ lending practices, there may be an opportunity for improving the focus of RUS’ program. USDA also commented that it believes the methodology used in the report does not accurately portray the extent to which its borrowers serve consumers who are not in rural areas. USDA referred to the definition of rural in the RE Act and said it believes that any methodology used to characterize a borrower’s service territory should be based directly on Bureau of the Census data as applied to the service territory maps of its borrowers. USDA also stated that our use of the Economic Research Service’s county classification system is inappropriate and that the Office of Management and Budget has said that it is not correct to use statistical information about metropolitan areas for determining eligibility for federal programs. The service territory maps of the 530 distribution borrowers included in our analysis were not available at RUS; collecting these maps and applying census data to each one would have precluded us from providing a timely response to our requester. 
While RUS does not collect comprehensive data on the areas served by its distribution borrowers, nor maintain current service territory maps of its borrowers, RUS identified for us the counties each borrower serves. This information enabled us to use the Economic Research Service’s rural-urban classification system to characterize the areas served by RUS borrowers. Also, our report makes no specific determinations about the eligibility of any RUS borrower to participate in the program. We disagree with USDA’s objection to the use of the rural-urban classification method developed by the Economic Research Service. USDA’s Economic Research Service classification system is based on Bureau of the Census data, and it classifies areas, including counties, by degree of rurality. According to the Economic Research Service, its system captures the diversity of rural America in ways that are meaningful for developing public policies and programs. We agree that these classifications are not the criteria of the RE Act. Our purpose, however, was to describe the characteristics of areas served by RUS electricity distribution borrowers, and the Economic Research Service’s classification system is useful for that purpose. We believe our analyses, taken together, provide insight into the extent of service provided by borrowers in counties with large urban populations within metropolitan areas, which we have emphasized in our results. Furthermore, the population has grown in many areas served by RUS distribution borrowers that originally qualified for loans under the requirement that they serve sparsely populated rural areas. During our review, RUS officials agreed that many RUS borrowers would no longer meet the RE Act population test for service to rural areas if that criterion were applied. USDA also discussed the general location of places where rural residents reside and stated that the majority live in metropolitan counties. 
Accordingly, USDA said that the report would classify service to these consumers as evidence that a distribution borrower was serving nonrural areas. We did not use our results in this manner; our leading observation, based on several analyses and our previous reports, is that RUS distribution borrowers serve not only rural areas but also highly populated areas. At the same time, we reported that about 24 percent of the counties served by RUS distribution borrowers are completely rural or have only nominal urban populations. However, it should be recognized that, according to USDA's Economic Research Service, rural areas, particularly those rich in natural resources, have experienced economic transformation and rapid population growth, while other areas face declining job opportunities and population loss. We believe that suggestions to better target RUS lending could respond to these changed conditions. Finally, USDA commented that it is important to recognize, and not criticize, RUS' efforts to implement the 2002 Farm Bill provision to guarantee the bonds and notes that lenders could use to raise funds for making loans for electricity and telecommunications services. USDA asked that the report be revised to distinguish between criticism of the legislation and RUS' efforts to implement it. We believe that our report factually describes RUS' efforts to implement the legislation and supports the purpose of the 2002 Farm Bill provision calling for the new guarantee program rather than criticizing either the provision or RUS' efforts to implement it. The report does, however, provide an alternative for funding rural economic development that avoids risk. Our report notes RUS' view that guaranteeing bonds and notes may not result in losses, although providing such guarantees would include some risks to taxpayers and, in a worst-case scenario, could result in potential losses of $1.5 billion. 
We stated that, recognizing the potential risks, RUS included in its proposed regulation certain risk-mitigation requirements not specified in the 2002 Farm Bill. However, we also noted that CFC commented that these proposals would make the program unworkable. In addition, we stated that RUS’ economic analyses do not include a discussion of risks facing CFC in the electricity and telecommunications markets. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days from the date of this letter. We will then send copies to interested congressional committees; the Secretary of Agriculture; the Administrator of RUS; the Director, Office of Management and Budget; officials at CoBank and CFC; and other interested parties. We will make copies available to others on request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-3841. Key contributors to this report are listed in appendix IV. The Chairman of the Subcommittee on Energy Policy, Natural Resources and Regulatory Affairs, House Committee on Government Reform asked that we report to him on (1) the extent to which RUS distribution borrowers provide electricity service to nonrural areas and (2) the potential financial risk to taxpayers of the 2002 Farm Bill requirement to guarantee lenders’ debts, and the amount of rural economic development loans and grants that could be funded by fees on the guarantees. In addition, we identified an alternative that could provide funds for rural economic development loans and grants. 
In the overall course of our work, we reviewed the basic statutory authority for RUS programs—the Rural Electrification Act of 1936, as amended (RE Act); USDA's Budget Explanatory Notes for Committee on Appropriations for fiscal years 1999 through 2005; prior GAO reports; and RUS reports and publications. To provide relatively current information on RUS' electricity program, we focused on the loans RUS made, guaranteed, and wrote off in fiscal years 1999 through 2003. We interviewed RUS officials, including the Administrator and Assistant Administrator for the electricity program. For the Rural Business-Cooperative Service's rural economic development loan and grant program, we used similar sources, including agency publications and reports, its annual financial report containing information on loans and grants made from fiscal year 1999 through 2003, the budget explanatory notes, and our prior reports. We also interviewed USDA's Deputy Administrator for the business programs. We did not verify the accuracy of the financial information contained in the Rural Business-Cooperative Service's annual financial report. To address the extent to which RUS distribution borrowers provide electricity service to rural and nonrural areas, we obtained information about RUS' lending policies by reviewing provisions of the RE Act, RUS regulations, and our prior reports; we also interviewed RUS officials. We obtained automated financial reports from RUS that covered all direct and guaranteed electricity loans made between fiscal years 1999 and 2003. To verify the accuracy of the information contained in the automated financial reports, we performed data reliability testing and found that the data were sufficiently reliable for our purposes. We also obtained from RUS a list of the counties served by its active electricity borrowers, which we compared to the Economic Research Service's 2003 rural-urban continuum codes. These codes classify all U.S. 
counties along a 9-point scale that distinguishes metropolitan counties by the population size of their metropolitan area and nonmetropolitan counties by the degree of urbanization and adjacency to a metropolitan area. The metropolitan and nonmetropolitan classifications are based on the Office of Management and Budget’s June 2003 groupings. Metropolitan counties are distinguished by the population size of the metropolitan statistical area of which they are part. Nonmetropolitan counties are classified according to the aggregate size of their urban population and by whether they are adjacent to a metropolitan area. Using this information, we coded the counties where RUS’ electricity borrowers that received loans between fiscal years 1999 and 2003 provide service. To avoid overstating the number of counties served by RUS borrowers, we did not code the same county twice, in the event that two different borrowers served customers in the same county. We then analyzed county-level data from the 2000 census. Specifically, we analyzed the number of residents in counties that the Bureau of the Census classifies as residing in rural and urban areas. In general, the Bureau of the Census historically defined rural areas as cities, villages, boroughs, or towns with fewer than 2,500 inhabitants. The Bureau of the Census revised the definition for the 2000 census to focus on population density within areas while retaining the 2,500 population criterion. Thus, for this part of our analysis, we used counties served by the distribution borrowers as an indicator of areas being served by borrowers that obtain RUS electricity loans. We also analyzed the service area maps of selected RUS borrowers. We also cross-referenced the loan information we obtained from RUS against data that the distribution borrowers report to the agency annually on the number of customers that they serve. 
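The county-coding step described above can be illustrated with a minimal sketch. The county names and continuum codes below are hypothetical examples, not the data used in our analysis; the only facts taken from the methodology are that the 2003 continuum codes run from 1 to 9, that codes 1 through 3 denote metropolitan counties, and that each county is counted only once even when multiple borrowers serve it.

```python
# Illustrative sketch of coding served counties with 2003 ERS rural-urban
# continuum codes and tallying them by metro/nonmetro status. All county
# names and code assignments are hypothetical.

continuum_codes = {  # (state, county) -> 2003 rural-urban continuum code
    ("GA", "Example County A"): 1,  # metro area of 1 million or more
    ("TX", "Example County B"): 3,  # metro area of fewer than 250,000
    ("MT", "Example County C"): 9,  # completely rural, not metro-adjacent
}

# Counties reported as served by borrowers; one county appears twice
# because two different borrowers serve customers there.
served = [
    ("GA", "Example County A"),
    ("GA", "Example County A"),
    ("MT", "Example County C"),
]

unique_counties = set(served)  # avoid counting the same county twice
metro = sum(1 for c in unique_counties if continuum_codes[c] <= 3)
nonmetro = len(unique_counties) - metro
print(metro, nonmetro)
```

The `set` step mirrors the report's stated safeguard against overstating the number of counties served when two borrowers operate in the same county.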
We then categorized the borrowers that received loans by various incremental ranges of residential customers served. These ranges generally correspond with the population criteria for various USDA rural development programs—for example, a population of less than 2,500 for electricity loans, 10,000 or less for water and waste disposal loans and grants, and 20,000 or less for community facility loans and grants. We used the most recently available customer data at the time a loan was approved for our analysis. Thus, if a loan was approved in calendar year 2000, we used customer data as of December 31, 1999. We took this approach because the agency does not collect data on the number of customers in each county that the borrowers serve. We recognize that most borrowers serve multiple areas, which could result in their having a high number of customers. However, we noted that the residential customer data are counted as individuals responsible for paying the electricity bills; a household is generally counted as one customer. Thus, the customer count data would be less than the number of inhabitants. To address the potential financial risk to taxpayers of the 2002 Farm Bill requirement to guarantee lenders’ debts, and the amount of rural economic development loans and grants that could be funded by fees on the guarantees, we reviewed the relevant portion of the act and its legislative record. During the initial portion of our review, RUS had not issued a proposed or implementing regulation. 
Because the Rural Business-Cooperative Service’s Deputy Administrator for the business programs told us the agency had not yet developed a program-level estimate of the additional loans and grants that could be funded under the new program, we made such an estimate using RUS’ estimated level of guaranteed debt and the resulting available fee proceeds, if that level were achieved, and the Rural Business-Cooperative Service’s fiscal year 2004 budget figures for rural economic development loans and grants. To obtain information on RUS’ efforts and plans to implement the new guarantee program, we interviewed RUS officials, including the Assistant Administrator for the electricity program. A proposed regulation was published in the Federal Register on December 30, 2003. We reviewed this document to determine how the agency was proposing to implement the new program and the agency’s description of the program’s risk, impact, and benefits. We interviewed officials of CFC and CoBank to obtain their views on the proposed new program, and reviewed financial and business reports on these entities and the electric and telecommunications industries. To determine whether an alternative mechanism might be available to fund the rural economic development program with less risk, we analyzed RUS’ fiscal year 2005 budget request for electricity and telecommunications loans and the Rural Business-Cooperative Service’s request for rural economic development loans and grants to determine what level of fees would be needed to cover the costs of the Rural Business-Cooperative Service’s program. For this part of our analysis, we focused on a fee level that could result in a level of funds to support rural economic development loans and grants that approximately doubles the Rural Business-Cooperative Service’s fiscal year 2005 requested program levels. 
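The mechanics of the program-level estimate described above can be sketched with a rough credit-program calculation. Every number below is an illustrative assumption, not an RUS or Rural Business-Cooperative Service figure; the sketch shows only the general logic that fee proceeds cover the subsidy cost of loans, so each fee dollar supports a multiple of that amount in loan volume.

```python
# Hypothetical sketch of estimating the loan level supportable by
# guarantee-fee proceeds. All inputs are illustrative assumptions.

def program_level_from_fees(guaranteed_debt, fee_rate, loan_subsidy_rate):
    """Return (fee_proceeds, loan_level) supportable by guarantee fees.

    Under federal credit budgeting, fee or appropriated dollars cover
    only the subsidy cost of a direct loan, so each dollar of proceeds
    supports 1 / loan_subsidy_rate dollars of loans.
    """
    fee_proceeds = guaranteed_debt * fee_rate
    loan_level = fee_proceeds / loan_subsidy_rate
    return fee_proceeds, loan_level

# Assume $1 billion of guaranteed debt, a 30-basis-point fee, and a
# 20 percent subsidy rate on rural economic development loans.
proceeds, loans = program_level_from_fees(1_000_000_000, 0.0030, 0.20)
print(f"Fee proceeds: ${proceeds:,.0f}; supportable loans: ${loans:,.0f}")
```

The same arithmetic, run with RUS' estimated guaranteed-debt level and the applicable subsidy rates, is the kind of calculation underlying the estimate described in the text.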
Moreover, we obtained from the Rural Development mission area's finance office information on the level of funds in the cushion-of-credit account that are available to cover the subsidy costs of rural economic development loans and the cost of rural economic development grants. We conducted our review from October 2003 to June 2004 in accordance with generally accepted government auditing standards. This appendix contains two tables: table 3 provides information about the numbers of customers served by RUS distribution borrowers included in our analysis; table 4 provides information about 12 counties with substantial urban populations that are served entirely or predominantly by RUS electricity borrowers. These counties are located in the vicinity of Atlanta, Georgia; Charlotte, North Carolina; Tampa, Florida; and Washington, D.C. The following are GAO's comments on the U.S. Department of Agriculture's letter dated May 24, 2004. 1. We do not criticize the legislation in the 2002 Farm Bill calling for the new guarantee program, and we offer Congress an alternative for funding rural economic development loans and grants that does not provide added risk exposure to the nation's taxpayers. Our report recognizes that a key feature of the new program, as specified in the bill report of the Senate Committee on Agriculture, Nutrition, and Forestry and in the conference report, is to provide an additional funding mechanism for rural economic development loans and grants. 2. The report recognizes that RUS has taken steps to implement the new guarantee program and does not criticize the agency's actions. Moreover, we recognize in the report that RUS proposed steps to mitigate risk in the new program, including the requirements for a bankruptcy trust, pledges of collateral, a 5 percent limit on cash patronage refunds, and the use of certain standards that apply to depository financial institutions. 3. 
We reviewed a January 2004 Bearing Point report prepared for RUS that lays out credit subsidy rate options, which suggest some risks with the new guarantee program. This Bearing Point report does not state or otherwise show that the probability of incurring any loss under the program is unlikely; it does, however, contain various estimated subsidy rates assuming defaults. We also reviewed a December 2002 Bearing Point report prepared for RUS on credit subsidies; this report also has no statement about losses being unlikely. These two Bearing Point reports are Guarantee Program for Bonds and Notes Issued for Electrification or Telephone Purposes, Credit Subsidy Input and Output Sheets for 15-Year Bond Scenarios (January 9, 2004), and Bond and Note Guarantee Program, Credit Subsidy Research Final (December 16, 2002). In addition, our report states that RUS estimated, in the economic analysis section of its proposed program regulations, maximum potential losses at $1.5 billion, but that RUS does not expect losses to occur. 4. Contrary to USDA’s assertion, we do not challenge RUS’ practice of determining eligibility when a borrower first applies for a loan to provide electricity service in a rural area. In our opinion, this practice is called for and meets the provisions of the RE Act. We do question, however, RUS’ practice of providing subsequent loans for service to areas that are no longer rural. 5. We agree with USDA that there has been prior reporting on electricity loans being made by RUS to borrowers that have experienced population growth in their service territories and on RUS’ policy of allowing such lending. However, we have an obligation to report on the continuation of these conditions because we were specifically requested to do so. We believe it is important to highlight these conditions for Congress given the purpose of the RE Act—that is, providing loans to assist in the electricity infrastructure development of sparsely populated rural areas. 6. 
When we discussed the results of our analyses with RUS officials, they told us that legislation had been drafted, consistent with the President's fiscal year 2005 budget, to require borrowers to recertify that they are serving rural areas. We added this statement to the report. 7. We used various methodologies to characterize the areas served by RUS distribution borrowers because the agency does not collect comprehensive data on the areas they serve. For example, RUS does not maintain up-to-date service territory maps or current population data within those service areas. We recognize that the urban-rural continuum codes of USDA's Economic Research Service are not the criteria that the RE Act directs RUS to use in determining program eligibility. We also acknowledge that all the metropolitan counties served by RUS electricity distribution borrowers have at least some parts that the 2000 census classifies as rural, and that many of these counties are only partially served by a RUS borrower. Our point, however, is to generally describe the characteristics of areas served by RUS electricity distribution borrowers. Moreover, our analysis of the Economic Research Service's system was one of various methodologies we used; the others were our analyses of specific borrowers serving highly populated areas, counties with substantial urban populations served by RUS' distribution borrowers, and the numbers of customers served by these borrowers. More specifically, as USDA's letter acknowledges, the report provides information on five borrowers that serve highly populated areas. In addition, table 4 in appendix II lists the total population, urban population, and rural population, based on the 2000 census, of 12 counties that are exclusively or predominantly served by RUS electricity distribution borrowers. This table shows conclusively that urban populations are benefiting from RUS electricity loans. 
Furthermore, table 3 in appendix II disaggregates RUS electricity borrowers by the number of customers served. 8. Neither the draft reviewed by USDA nor this report suggests that metropolitan statistical areas be used as eligibility criteria for participating in the electricity loan program. Rather, we state in appendix I of our report that we used counties served by the distribution borrowers as an indicator of areas being served by borrowers that obtain RUS electricity loans. Our purpose in providing information on the metropolitan counties served by RUS borrowers was to illustrate how some borrowers now provide service to highly populated areas, rather than providing service solely to sparsely populated rural areas. 9. We agree that the electricity loan program involves relatively little subsidy cost. Our concern with the actual and potential cost of the program stems from the fact that RUS has experienced a high level of losses in recent years. Specifically, the background section of this report notes that RUS wrote off more than $3.2 billion during fiscal years 1999 through 2003. Our 1998 report noted that RUS wrote off more than $1.7 billion from fiscal year 1994 through June 30, 1997. In addition, it is likely that RUS will incur additional losses in the near future. For example, at the end of fiscal year 2003, the assets of two borrowers that owed a total of more than $22 million were being liquidated by bankruptcy trustees, and the agency's officials told us they anticipate losses. 10. We disagree. This report contains new information highlighting that loans are being made to borrowers providing service in highly populated metropolitan areas; it provides examples of specific counties in highly populated areas that are served by borrowers; and it contains a nationwide analysis of counties that are served by borrowers that obtained loans from RUS in recent years. 
The report also contains in appendix II updated information on loans made to borrowers that have a high number of customers. 11. We revised the report to recognize that hardship rate loans are made to borrowers that have a relatively high cost of providing service, as indicated by a high average revenue per kilowatt-hour sold, and that serve customers with below-average income, or at the discretion of RUS’ Administrator. However, we note that in the current period of low interest rates, the rate charged on hardship rate loans, which is set at 5 percent, has been higher than the rates charged by RUS on its municipal rate loans and Treasury rate loans. Specifically, as the report states, the interest rate on municipal rate loans ranged from 1.1 percent to 4.6 percent during the first quarter of calendar year 2004, and on Treasury rate loans ranged from 1.2 percent to 4.4 percent in mid-March 2004. In addition to the individuals named above, Jonathan C. Altshul, Vondalee R. Hunt, Cynthia C. Norris, Patrick J. Sweeney, and Amy E. Webbink made key contributions.
The Agriculture Department's Rural Utilities Service (RUS) makes loans and provides loan guarantees to improve electric service to rural areas. Beyond guaranteeing loans, under a yet-to-be-implemented provision of the 2002 Farm Bill, RUS is also to guarantee the bonds and notes that lenders use to raise funds for making loans for electric and telecommunications services. Fees on these latter guarantees are to be used for funding rural economic development loans and grants. GAO was asked to examine (1) the extent to which RUS' borrowers provide electricity service to nonrural areas and (2) the potential financial risk to taxpayers and amount of loans and grants that the guarantee fees will fund. GAO also identified an alternative for funding rural economic development. While the Rural Electrification Act authorizes RUS' lending only in rural areas, borrowers that receive RUS loans and loan guarantees serve not only rural areas but also highly populated metropolitan areas. This condition stems from RUS' loan approval practices. RUS requires that borrowers serve rural areas when they apply for their first loans, but it approves subsequent loans without applying this criterion. Thus, RUS applies a "once a borrower, always a borrower" standard. Since the 1930s when the program began, substantial population growth has occurred in areas served by many RUS borrowers; 187 of the counties in which RUS borrowers provide service are in metropolitan areas with populations of 1 million or more. For example, three borrowers that received over $400 million in loans in fiscal years 1999 through 2003 distribute electricity in the immediate vicinity of Atlanta, Georgia. In contrast, about 24 percent of the counties served by RUS borrowers are completely rural, while the remainder have a mix of rural and urban populations. 
RUS estimates, in a worst-case scenario, that the requirement to guarantee lenders' debt could lead to taxpayer losses of $1.5 billion--and GAO estimated that in return for this risk, fees on the guarantees would add about $15 million per year in rural economic development loans and grants. RUS officials believe that while risks are involved, losses are unlikely given the past stability of both the electricity market and the lender that might receive the guarantees. Only one lender is both qualified and interested in obtaining these guarantees. According to financial rating services, that lender is well regarded, but worked through financial concerns in 2002 and 2003, and faces longer-term risks associated with the changes taking place in the electricity and telecommunications markets that it serves. Recognizing the risks of guaranteeing this lender's debt, RUS proposed certain risk mitigation requirements, such as a reserve against losses. However, the lender's officials have stated that RUS' proposed requirements would make the program unattractive. GAO identified an alternative with no additional taxpayer risk to add funds for rural economic development loans and grants. If RUS were authorized to charge borrowers a small loan-origination fee of one-fourth of 1 percent on loans it expects to make and guarantee in fiscal year 2005, $24 million in rural economic development loans and grants might be made available. This amount is almost equal to the level provided by USDA's 2005 budget request for rural economic development loans and grants, and would likely have a minimal cost impact on customers of distribution borrowers. This alternative would not include guarantees of lenders' debt. Furthermore, the lender expected to use the guarantees has indicated that, even without such guarantees, it expects to continue being very successful at accessing capital for lending.
DOD annually spends about $15 billion—or about 6 percent of its $243 billion fiscal year 1996 budget—on depot maintenance work that involves the repair, overhaul, modification, and upgrading of aircraft, ships, ground vehicles, and other equipment. Over $4 billion is spent on Air Force systems and equipment. Most of the Air Force’s depot maintenance work is performed at five depots that are located at its five air logistics centers. Since the early 1970s, we and others have reported on the redundancies and excess capacity that exist in DOD depots and the need to downsize and improve the operational efficiency of these depots. These problems have been exacerbated in recent years by reductions in military force structure and related weapon system procurement; changes in military operational requirements due to the end of the Cold War; increased reliability, maintainability, and durability of military systems; increased maintenance performed in operational units; and increased privatization of depot maintenance workloads. Beginning in the late 1980s, DOD—primarily through the BRAC process—reduced some of its excess capacity by closing a number of depots and transferring most workloads to remaining depots and some to the private sector. Altogether, the first three BRAC rounds (1988, 1991, and 1993) resulted in recommendations to close nine Army and Navy depots and the Air Force’s Aerospace Guidance and Metrology Center. Despite major force structure reductions and significant excess capacity in the Air Force depot maintenance system, none of the Air Force’s five air logistics centers or the large, multi-commodity depots contained within them were recommended for closure during the first three BRAC rounds. As shown in table 1, for fiscal year 1996, the five centers reported approximately 57.2 million direct labor hours of depot maintenance capacity and accomplished about 31.5 million hours of work—leaving about 25.8 million hours of excess capacity, or about 45 percent. 
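The excess-capacity figure cited above follows from a simple ratio of the table 1 totals. The sketch below reproduces that arithmetic using the fiscal year 1996 figures (millions of direct labor hours); the small difference between the computed 25.7 and the reported 25.8 reflects rounding in the underlying center-level data.

```python
# Excess capacity as reported for the five air logistics centers in
# fiscal year 1996: capacity minus accomplished work, expressed as a
# share of total capacity. Inputs are the rounded totals from table 1.

capacity_hours = 57.2  # reported depot maintenance capacity (millions)
work_hours = 31.5      # work accomplished (millions)

excess_hours = capacity_hours - work_hours
excess_share = excess_hours / capacity_hours
print(f"Excess: {excess_hours:.1f} million hours ({excess_share:.0%})")
```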
DOD’s February 1995 report to the BRAC Commission recommended reducing excess Air Force depot maintenance capacity and costs by downsizing all five air logistics centers, including their depots. DOD estimated that this downsizing would require one-time costs of $183 million and would result in net savings of $138.7 million during the 6-year implementation period. The downsizing was to be accomplished through consolidating similar workloads, mothballing or disposing of plant equipment, and tearing down buildings. The Commission also estimated that annual savings would be $89 million after the implementation period and that the net present value of all costs and savings over a 20-year period would be $991.2 million. The 1995 BRAC Commission concluded that DOD’s downsizing approach would not adequately reduce infrastructure and overhead costs. It recommended closing the Sacramento and San Antonio centers and transferring their workloads to the remaining depots or private sector commercial activities. In making its closure and workload transfer recommendations, the Commission considered the effects on the local communities, workload transfer costs, and potential effects on readiness. It concluded that the savings and benefits outweighed the potential drawbacks. The Commission’s report noted that given the significant amount of excess depot capacity and limited DOD resources, closure is a necessity. Further, closing these activities would improve the use of the remaining centers and substantially reduce DOD operating costs. The specific Commission recommendations were as follows: Realign Kelly Air Force Base, including the air logistics center; disestablish the defense distribution depot; consolidate the workloads to other DOD depots or to private sector commercial activities as determined by the Defense Depot Maintenance Council; and move the required equipment and personnel to the receiving locations. 
Close McClellan Air Force Base, including the air logistics center; disestablish the defense distribution depot; move the common-use ground communication electronics to Tobyhanna Army Depot, Pennsylvania; retain the radiation center and make it available for dual use and/or research, or close as appropriate; consolidate the remaining workloads with other DOD depots or private sector commercial activities as determined by the Council; and move the required equipment and any required personnel to receiving locations. All other activities and facilities at the base will close. The Commission estimated that these recommendations would require one-time implementation costs of $822.6 million but would yield net savings of $151.2 million during the 6-year implementation period. Further, they would yield annual savings of $338.2 million after the implementation period, about $70 million of which represented depot maintenance savings. The projected depot maintenance savings were developed by assuming that the number of depot maintenance personnel could be reduced by 15 percent. The Commission estimated that the net present value of savings over 20 years, including the 6-year implementation period, would be $3.5 billion. The Commission’s savings projections did not include savings from moving workloads to depots with lower labor rates, consolidating workloads in underused facilities and reducing excess capacity, increasing efficiency, and reducing the overhead rates at receiving depots. In considering the BRAC recommendations to close the two centers, the President and the Secretary of Defense expressed concerns about the near-term costs and potential effects on local communities and Air Force readiness. In response to these concerns, the President, in forwarding the Commission’s recommendations to Congress for approval, indicated that the air logistics centers’ work should be privatized in place or in the local communities. 
He also directed the Secretary of Defense to retain 8,700 jobs at McClellan Air Force Base and 16,000 jobs at Kelly Air Force Base until 2001 to further mitigate the closures’ impact on the local communities. Additionally, the size of the workforce remaining in the Sacramento and San Antonio areas through 2004 was expected to remain above 4,350 and 11,000, respectively. McClellan has about 2,600 personnel and San Antonio has about 3,100 personnel assigned to organizations whose mission is to provide various base support services (such as security; maintenance of buildings, roads, and grounds; and medical clinic services) to the logistics center and all tenants. Depot maintenance employees comprise 66 percent of the Sacramento center personnel and 47 percent of the San Antonio center personnel. The centers’ maintenance depots perform about $1.65 billion of depot maintenance work annually, about $400 million of which belongs to the other services and is done through interservicing. They also perform other logistics functions such as engineering support and weapon system and item management. Kelly Air Force Base had about 19,500 employees at the time of the 1995 BRAC process—of which 12,850 were air logistics center personnel, including 6,000 involved in depot maintenance. In addition to the air logistics center, Kelly has several other tenant activities, including the Air Intelligence Agency, Defense Information Systems Agency, Defense Distribution Depot, and guard and reserve units. McClellan Air Force Base employed about 14,000 people—of which 7,314 were center employees, including 4,853 depot maintenance personnel. In addition to the air logistics center, McClellan has a defense distribution depot, Coast Guard air station, and reserve units. Several statutes influence the allocation of depot maintenance workloads between the public and private sectors. According to 10 U.S.C. 
2464, a “core logistics capability” is to be identified by the Secretary of Defense and maintained by DOD, unless the Secretary waives DOD performance as not required for national defense. Further, 10 U.S.C. 2466 and 2469 limit the extent to which depot-level workloads can be converted to private sector performance. Section 2466 specifies that not more than 40 percent of the funds allocated in a fiscal year for depot-level maintenance or repair can be spent on private sector performance—the so-called “60/40” rule. Section 2469 prohibits DOD from transferring in-house maintenance and repair workloads valued at not less than $3 million to another DOD activity without using “merit-based selection procedures for competitions” among all DOD depots or to contractor performance without the use of “competitive procedures for competitions among private and public sector entities.” Privatizing defense depot activities in place could yield cost savings only if other public and private activities were more fully utilizing their maintenance repair capacity. Because substantial excess capacity exists in both the public and private sectors, privatizing Sacramento and San Antonio workloads in place will result in missed opportunities to reduce the overall cost of Air Force depot maintenance operations. In recent years, depot maintenance rates have increased sharply. One of the major reasons for this increase is that as requirements have declined, the large fixed overhead costs for both the depots and the bases on which they are located must be allocated to a smaller depot maintenance workload base. As we noted previously, about 3,100 military and civilian personnel are involved in base support operations at Kelly Air Force Base and 2,600 at McClellan Air Force Base. These operations support center activities, including the depots, as well as other base tenant activities. 
It is estimated that a military depot with several thousand employees incurs fixed overhead costs of $50 million to $100 million annually. By closing bases, DOD can eliminate base support and depot maintenance infrastructure and achieve substantial future savings. Additionally, consolidating workloads from closing depots with workloads of underused Air Force depots could yield additional savings at the receiving depots by increasing their efficiency, spreading their fixed overhead costs over a larger workload base, and lowering their average costs. Privatizing-in-place does not substantially reduce infrastructure and excess capacity; it merely moves some of it to the private sector. Private sector manufacturing and repair facilities also have extensive excess capacity. The privatization-in-place of the Sacramento and San Antonio depots will not reduce the large amount of excess capacity in the Air Force depot system and the private sector or their associated costs, unless additional facilities are closed or other cost-reduction means are successfully implemented. The Air Force’s planning has not progressed far enough to compare precisely the cost of privatizing depot workloads in place with the cost of transferring the work to other underused depots. However, because privatization-in-place will have little effect on excess capacity at the remaining depots, it is unlikely that any savings would offset the cost of maintaining excess depot capacity. The Navy’s experience closing naval aviation depots and consolidating workloads at remaining Navy facilities provides useful insights regarding the benefits of this closure option. According to Navy officials, consolidating workloads from three closing naval aviation depots and quickly moving most of this workload to the three remaining depots was projected to reduce excess capacity and decrease the overhead rates at remaining naval aviation depots.
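The overhead-spreading effect described above can be illustrated with a simple model. The dollar and workload figures below are illustrative assumptions chosen for the example, not data from this report; only the logic (fixed overhead divided over more direct labor hours lowers the hourly rate) tracks the report's argument.

```python
# Illustrative model of how consolidating workload at an underused depot
# lowers its hourly rate by spreading fixed overhead over more direct
# labor hours. All figures are assumed for illustration only.

def hourly_rate(fixed_overhead, variable_cost_per_hour, direct_labor_hours):
    """Rate = variable cost per hour plus fixed overhead spread over hours."""
    return variable_cost_per_hour + fixed_overhead / direct_labor_hours

FIXED_OVERHEAD = 75_000_000   # assumed annual fixed overhead (midpoint of $50M-$100M range)
VARIABLE_COST = 40.0          # assumed variable cost per direct labor hour
BASE_HOURS = 2_500_000        # assumed pre-consolidation workload (hours)
ADDED_HOURS = 1_500_000       # assumed workload transferred in from a closing depot

before = hourly_rate(FIXED_OVERHEAD, VARIABLE_COST, BASE_HOURS)
after = hourly_rate(FIXED_OVERHEAD, VARIABLE_COST, BASE_HOURS + ADDED_HOURS)

print(f"Rate before consolidation: ${before:.2f}/hour")   # $70.00/hour
print(f"Rate after consolidation:  ${after:.2f}/hour")    # $58.75/hour
# Savings realized on the pre-existing workload alone:
print(f"Annual savings on existing work: ${(before - after) * BASE_HOURS:,.0f}")
```

Under these assumed figures the receiving depot's rate falls by more than $11 per hour, which is the same mechanism behind the rate reductions the report projects at the remaining Air Force depots.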
Capacity utilization was projected to increase 35 percent as a result of this consolidation. The utilization rate for each depot varies depending on the specific workload transfers and other variables, such as the rate of decline of their pre-consolidation workload base. Overall, the economy and efficiency improvements were projected to decrease the overhead rates by 18 percent between fiscal years 1994 and 1997. Based on a 10-million-hour workload program, the consolidation could save an additional $100 million annually. Navy officials stated that because of the continuous decline in depot maintenance workloads and other factors, anticipated savings cannot be measured precisely. However, they noted that closing naval aviation depots and consolidating workloads in the Navy’s remaining three depots has clearly improved efficiency and lowered the cost of the Navy depot maintenance program. Our analysis of BRAC Commission estimates indicates that the closure of the depots at the San Antonio and Sacramento logistics centers was expected to save about $70 million annually. Based on our analysis of Air Force data, the actual savings could be as much as $206 million annually, if the closing depots’ work is transferred to the remaining military depots. The Commission’s estimate assumed that about 15 percent of the maintenance personnel at the closing centers could be eliminated. The Commission’s evaluation did not attempt to measure economy and efficiency improvements that would result from workload consolidation. However, our analysis indicates that transferring about 8.2 million hours of work from the closing Air Force depots to the three remaining depots would (1) reduce these three depots’ excess capacity from about 46 percent to about 8 percent, (2) lower the hourly rates by an average of $6.00 at receiving locations, and (3) save as much as $182 million annually as a result of economies of scale and other efficiencies.
This estimate was based on a workload redistribution plan that would relocate only 78 percent of the available hours to Air Force depots. This reallocation plan was developed by the Joint Depot Cross Service Group during the BRAC 1995 process. If our analysis had included a plan for redistributing all 10.5 million available hours of work, then our projected annual recurring savings would have been higher. Similarly, the Army estimates that the Commission-mandated transfer of about 1.2 million hours of ground communications workload from the Sacramento depot to the Tobyhanna Army Depot will save an additional $24 million. According to financial management officials at the three remaining centers, it will cost about $475 million to absorb all of the 10.5 million direct labor hours of the Sacramento and San Antonio depot work currently available for reallocation. Comparing this cost estimate to our $206 million projected annual savings indicates that net savings would occur within 2-1/2 years of the consolidation. Transition costs for moving only 78 percent of that workload (the portion reflected in our savings estimate) would be less than $475 million; therefore, net savings would occur even sooner, and savings would be greater still if all 10.5 million direct labor hours available for redistribution to Air Force depots were moved. Moreover, $318 million of the projected $475 million is associated with the release or movement of depot maintenance personnel, and the costs are about the same for either option. DOD will incur these costs regardless of whether the workload is moved to other military depots or privatized-in-place. It could recoup the remaining $157 million in less than 1 year if the workloads were consolidated at other Air Force depots. Additionally, over $110 million of the remaining $157 million cost is required to support the movement of the C-5 aircraft workload.
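The payback arithmetic in this comparison can be checked directly from the figures cited above; the sketch below is a minimal check that simply divides one-time transition costs by projected annual savings.

```python
# Payback-period check using the cost and savings figures cited above.

def payback_years(one_time_cost, annual_savings):
    """Years of annual savings needed to recoup a one-time transition cost."""
    return one_time_cost / annual_savings

ANNUAL_SAVINGS = 206_000_000   # projected annual savings from consolidation

# Full $475M transition cost vs. $206M annual savings:
# net savings within about 2-1/2 years, as the report states.
print(round(payback_years(475_000_000, ANNUAL_SAVINGS), 1))   # 2.3

# $318M of the $475M (personnel costs) is incurred under either option,
# leaving $157M attributable to consolidation: recouped in under 1 year.
print(round(payback_years(157_000_000, ANNUAL_SAVINGS), 2))   # 0.76
```

The same division reproduces both figures in the text: the 2-1/2-year payback for the full transition cost and the less-than-1-year recoupment of the $157 million remainder.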
A decision to privatize the C-5 workload in place or at a contractor facility would reduce transition costs to between $50 million and $100 million, including the additional costs estimated to be required for the privatization. Thus, the cost of moving all but the C-5 is estimated to have a payback period of less than 8 months. Finally, the potential $200 million annual savings that could result from the consolidation is in addition to the BRAC Commission’s $268-million savings estimate for eliminating base support operations and non-depot maintenance personnel at the McClellan and Kelly Air Force Bases. On the other hand, if the remaining depots do not receive additional workload, they are likely to continue to operate with significant excess capacity and become more inefficient and expensive as workloads dwindle due to downsizing and privatization initiatives. If a depot is not closed, excess capacity and costs could still be reduced by downsizing-in-place and implementing various economy and efficiency initiatives. However, the amount of the reduction could be minimal without further workload consolidations and depot closures. Additionally, the remaining Air Force depots are located on large multi-functional bases that support other missions and optimum savings could not be achieved unless the entire base is closed. Subsequent to our analysis, the Air Force Materiel Command analyzed potential savings from workload consolidation, including how increasing the efficiency of underused military depots would lower fixed overhead rates. This analysis showed that annual savings of $367 million can be achieved through consolidation of workloads in remaining DOD depots. Further, an additional $322 million could also be saved by relocating workload to depots that already have lower hourly rates. Based on the Air Force analysis, payback of the projected $476 million workload relocation cost would occur in less than 1 year. 
Excess capacity in the private sector is particularly acute for fixed-winged aircraft; communications, electronics, and avionics equipment; and engines. For example, in March 1996, we reported that private sector contractors indicate they have enough excess capacity to accomplish all of the projected depot workloads for six military engines: TF39, TF33, T63, F108/CFM56, 501K, and LM2500. These contractors also said they have enough capacity to perform 75 percent of the military’s T56 engine workload and 73 percent of its T700 engine workload. “inhibits the realization of cost savings intended from base closures and the performance goal improvements that privatization is intended to achieve. Privatization-in-place, therefore, does not solve the excess capacity problem within either the public or the private sector of the defense industrial base.” According to industry representatives, this approach to downsizing will not achieve the intended objectives and is likely to be the most costly option of all. In August 1996, the Air Force announced its most recent strategy for allocating the depot workloads at Sacramento and San Antonio, but details are still evolving. The strategy indicates the workloads will be competed and that one of the remaining public depots will be allowed to compete with the private sector for each of the three large workload packages that are being developed. However, this strategy may limit public and private activities’ ability to compete and favor privatizing the workloads in place. On August 16, 1996, the Air Force announced that it was revising its strategy for allocating the workloads at Sacramento and San Antonio. The Air Force’s plans initially focused on privatizing five prototype workloads—three at Sacramento and two at San Antonio. The BRAC Commission report specified that the Defense Depot Maintenance Council should determine where depot maintenance workloads from the closing Air Force depots should be moved. 
The Council approved the Air Force’s plan for the five prototype workloads on February 1, 1996. Table 2 shows the proposed prototype program, including the estimated annual value of the workloads and the number of workers involved. The prototype workloads involved about 11 percent of the San Antonio depot’s maintenance personnel and about 27 percent of the Sacramento depot’s personnel. Requests for proposals were to be issued during the third quarter of fiscal year 1996 for the software and C-5 paint/depaint workloads and during the fourth quarter of fiscal year 1996 for the hydraulics, electronic accessories, and fuel accessories. Contract awards were projected for the first quarter of fiscal year 1997 for the hydraulics, software, C-5 paint/depaint, and fuel accessories workloads, and the second quarter of fiscal year 1997 for electronic accessories. “[W]hile privatization is desirable and will deliver more for the defense dollar, the more restrictive Privatization-in-Place is counterintuitive. The Department of Defense is expecting the private sector, which already carries significant excess capacity due to defense downsizing, to be willing to take on workload to be performed at a closing base. Success is only possible if offerings are structured as viable business opportunities with ongoing potential and if the opportunities are strongly supported by the communities involved.” Implementation of the prototype concept was put on hold in May 1996 as the Air Force considered various options. Shortly thereafter, Air Force planners began to focus on a concept that would involve several large consolidated work packages, essentially one at Sacramento and two at San Antonio (one for the C-5 aircraft and one for engines). The Defense Depot Maintenance Council approved the revised acquisition strategy in August 1996. However, the 10 U.S.C.
2466 provision that limits private sector depot maintenance performance to no more than 40 percent—the 60/40 rule—constrains the Air Force’s ability to privatize depot maintenance workloads. DOD has requested that Congress repeal this provision and other statutes affecting the allocation of depot maintenance workloads between the public and private sectors. Air Force planners project that about $600 million of the two depots’ $1.65 billion workload will be available to transfer to the private sector. If the 60/40 provision is not repealed in the future, the remaining $1.05 billion workload will be transferred to other military depots. This will substantially increase the use of these depots and reduce their labor hour rates for all workloads. If the 60/40 provision is repealed, DOD will need to eliminate substantial excess capacity at the military depots to reduce the cost impact of further privatization. In addition to the 60/40 provision, Air Force officials stated that they intend to comply with the 10 U.S.C. 2469 provision requiring public/private competitions before transferring depot-level workloads valued at not less than $3 million to the private sector. Under the revised strategy, one request for proposals will be issued for all Sacramento workloads that are proposed to be privatized-in-place, including A-10 and KC-135 aircraft, hydraulics, instruments and electronic accessories, and software. Air Force planners estimated the value of the Sacramento workload to be about $220 million annually, with a projected 2,200 workers involved. According to Air Force officials, the Sacramento work package can be privatized without breaching the 60/40 provision. The Air Force is following a three-phased approach for competing the Sacramento workload. 
The first phase began on November 8, 1996, with the issuance of a request for proposals for a study to analyze the Sacramento depot workload, explore approaches for process improvement, and make recommendations for the maintenance contract solicitation. The Air Force anticipates awarding study contracts to one or two offerors for about $750,000 in January 1997. The second phase, the contract study, is scheduled to run until September 1997, although the Air Force plans to issue a request for proposals for the maintenance contract in July 1997. The final phase, the award of the maintenance contract, is expected in January 1998. A two-pronged acquisition strategy has been proposed for San Antonio. One request for proposals is for the C-5 aircraft business area, which has an annual value of $155 million and involves about 1,200 workers. The Air Force issued a draft request for proposals for the C-5 workload in November 1996. The strategy for San Antonio’s engine business area is uncertain, largely because of the 60/40 provision. According to Air Force officials, since the 60/40 provision has not been repealed, only a portion of the $700 million San Antonio engine workload can be privatized along with the other proposed packages. Air Force planners estimated that about $240 million of San Antonio’s engine workload could be privatized without breaching the 60/40 threshold. This limitation would likely allow privatization of only one, or portions of all three, of San Antonio’s three large engine workloads. Because of the 60/40 provision, preliminary Air Force plans provide for some workloads to be transferred to the three remaining Air Force depots. These transfers would include electrical components, automatic test equipment hardware, F-15 workload and gas turbine engines, and engine workloads over the 60/40 limitation. San Antonio and Sacramento local manufacturing workloads would also be transferred to one of the three remaining Air Force depots in 2001. 
Personnel associated with the weapon system and item management functions at the closing centers are also scheduled to be transferred in 2001 to one of the three remaining air logistics centers or to one of the Air Force product centers that manage the acquisition of Air Force systems and equipment. Further, the Air Force microelectronics facility located at the Sacramento depot will transfer to the Defense Logistics Agency. This approximately 140-person operation will continue to function in its current location, with the Defense Logistics Agency likely to assume ownership of the depot plant equipment and lease the building from the local reuse authority. The Air Force’s revised strategy of competing large consolidated workloads may limit competition from military depots. According to Air Force officials, the solicitation requires that all the work be performed at a single location, does not allow military depots to jointly compete for the work, and allows only one depot to compete for each of the three large workload packages. The competing Air Force depot will not be allowed to subcontract work to a private contractor or to propose maintaining any of the workloads in place. Since no single military depot can currently perform all the work for two of the consolidated packages, the ability of the military depots to compete may be limited. The single location requirement also prohibits contractors from moving individual workloads to multiple underused private sector facilities. Also, the plan to privatize the Defense Logistics Agency supply depots at the Sacramento and San Antonio locations provides an incentive to perform the work at the same location. According to 10 U.S.C. 2464, DOD activities must maintain a core logistics capability, including personnel, equipment, and facilities sufficient to provide the technical competence and resources necessary for effective and timely response to a mobilization or other national defense emergency.
DOD facilities are to retain core capability unless the Secretary waives DOD performance as not being required for national defense. Air Force data developed and certified during the 1995 BRAC process indicated that about 77 percent of Sacramento’s and 70 percent of San Antonio’s projected fiscal year 1996 depot maintenance workload represented core capability. As a result, the Air Force cannot fully implement privatization-in-place plans for Sacramento and San Antonio without executing a waiver and reporting to Congress. In April 1996, DOD provided Congress a depot maintenance policy report that included a new process for evaluating core. The report said that the military services would conduct a risk assessment before privatizing mission-essential workloads, which previously would have been identified as core. As we reported on May 21, 1996, DOD’s policy report described a model for making these assessments, but did not provide criteria for evaluating private sector capabilities, establishing risk thresholds, or making best value determinations. Additionally, we noted that (1) such criteria are critical to both implementing the model and determining whether mission-essential workloads previously determined to be core capability and performed in military depots can be outsourced at acceptable levels of risk and (2) until this guidance and criteria are established and implemented, the core requirements that will result from the new policy cannot be predicted with any precision. Also, DOD has not yet developed a standardized process for assessing core capability, including a documented risk assessment that evaluates private sector repair capability. As a result, each military service was independently planning and developing its own risk assessment process. While the Air Force is developing a process for reassessing core requirements, evaluations of the Sacramento and San Antonio depot maintenance workloads proposed for privatization have not yet been completed.
It is uncertain to what extent the Air Force will determine that mission-essential workloads previously defined as core should be privatized. However, the Air Force has issued draft requests for proposals involving workloads previously identified as core without obtaining waivers or redefining the workloads as noncore. To reduce the closures’ effect on the Sacramento and San Antonio communities, DOD plans to delay closing McClellan Air Force Base and parts of Kelly Air Force Base until 2001, thereby retaining 8,700 jobs at McClellan and 16,000 jobs at Kelly. This delay will eliminate much of the $973.8 million savings estimated by the BRAC Commission to result from reduced personnel and operating costs beginning in 1997. These savings were to offset one-time closure costs of $822.6 million. As shown in table 3, the BRAC Commission projected a net savings of $151.2 million during the 6-year implementation period after projected implementation costs of $822.6 million are deducted from the projected total savings. The savings were to be achieved by reducing personnel requirements and operating costs. However, these savings were expected to be partially offset by one-time implementation costs for such things as the transfer of personnel and equipment to new sources of repair. The cost impact of the decision to delay the closures depends on how long they are delayed. For example, a 1-year delay would reduce the BRAC Commission’s projected savings by about $90 million, whereas a 4-year delay would reduce savings by about $796 million. The BRAC Commission expected $845.6 million, or 86.8 percent, of the $973.8 million implementation period savings to be achieved through personnel reductions. It expected that closing the Sacramento and San Antonio centers would eliminate 6,316 military and civilian positions. The personnel reductions were to start in fiscal year 1997 and be completed by the end of fiscal year 2000.
The financial benefit of eliminating positions as early as possible becomes readily apparent when the impact is tracked into subsequent years. For example, the BRAC Commission projected that 1,378 positions would be eliminated in fiscal year 1997, which would save $31.8 million during the first year and $63.5 million every year thereafter. Eliminating these positions was expected to save $285.8 million during the implementation period. Table 4 shows how delays in eliminating these positions will affect the estimated savings. Table 5 shows the impact that implementation delays will have on the Commission’s net savings estimates, assuming that all costs and savings remain the same and are simply delayed. The BRAC Commission’s recommendation to transfer common-use ground communication-electronics workload from the Sacramento depot—about 1.2 million direct labor hours of work—to the Tobyhanna Army Depot would increase Tobyhanna’s capacity utilization from 49 percent to 65 percent, reduce Tobyhanna’s hourly labor rate by $6 (from $64 to $58), and save about $24 million annually. This workload includes repairing and overhauling such items as radar, radio communications, electronic warfare, navigational aids, electro-optic and night-vision devices, satellite sensors, and cryptographic security equipment. The Air Force, with the approval of the Defense Depot Maintenance Council, is delaying the transfer of this workload to Tobyhanna until 2001 to support personnel retention goals in the Sacramento area. As we recently reported, this delay will decrease savings and result in interim personnel reductions at Tobyhanna. Because its workload has been declining, Tobyhanna has already voluntarily separated about 250 of its personnel during 1996.
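The compounding value of eliminating positions early, described above, can be sketched with the Commission's own figures: $31.8 million in the first (partial) year of savings and $63.5 million each year thereafter. The 5-year horizon used below is an inference from the arithmetic (31.8 + 4 × 63.5 = 285.8), since the report does not state it explicitly.

```python
# Cumulative savings from the fiscal year 1997 position eliminations cited
# above: $31.8M in the first year, then $63.5M in each subsequent year.
# The 5-year horizon is inferred from the $285.8M implementation-period total.

FIRST_YEAR = 31.8   # $ millions, fiscal year 1997
FULL_YEAR = 63.5    # $ millions, each year thereafter

def cumulative_savings(years):
    """Total savings ($M) over `years` fiscal years, starting in FY 1997."""
    return FIRST_YEAR + FULL_YEAR * max(0, years - 1)

print(round(cumulative_savings(5), 1))   # 285.8 -- matches the Commission's total
# Each year the eliminations are delayed forfeits roughly a full year's savings:
print(round(cumulative_savings(5) - cumulative_savings(4), 1))   # 63.5
```

This is the mechanism behind tables 4 and 5: every year of delay pushes the full-year savings stream out, so a multiyear delay forfeits most of the implementation-period savings.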
Army officials said that about 800 personnel may be involuntarily separated in fiscal year 1997 or 1998 if no additional workloads are transferred to Tobyhanna. This reduction would include the loss of personnel having critical skills and competencies needed to work on the ground communications workload. As we previously reported, various statutory restrictions may affect how much of the depot-level workloads can be transferred to the private sector—through privatization-in-place or otherwise. They include 10 U.S.C. 2464, 10 U.S.C. 2466, and 10 U.S.C. 2469. While each of these statutes has some effect on the allocation of DOD’s depot-level workload, 10 U.S.C. 2469 constitutes the primary impediment to privatization in the absence of a public-private competition. The competition requirements of 10 U.S.C. 2469 apply broadly to any change to depot-level workloads valued at not less than $3 million that are currently performed at DOD installations, which include Kelly and McClellan. They require DOD to give other public depots the opportunity to compete for the closing depots’ workloads. The statute does not provide any exemptions from its competition requirements and, unlike most of the other laws governing depot maintenance, does not contain a waiver provision. Further, there is nothing in the Defense Base Closure and Realignment Act of 1990—the authority for the BRAC recommendations—that, in our view, would permit the implementation of a recommendation involving privatization outside of the competition requirements of 10 U.S.C. 2469. The determination of whether any single conversion to private sector performance conforms to the requirements of 10 U.S.C. 2469 depends upon the facts applicable to the particular conversion. While DOD has stated that it will structure these conversions to comply with existing statutory restrictions, details of its privatization plans for Kelly and McClellan are still evolving.
Sufficient information regarding the detailed procedures for conducting the competitions for the Sacramento and San Antonio workloads is not available for us to assess whether the planned conversions will comply with the requirements of existing law. Further, the planned privatizations at Sacramento and San Antonio are now the subject of litigation. In March 1996, the American Federation of Government Employees filed a lawsuit challenging these privatization initiatives, contending that they violate the public-private competition requirements of 10 U.S.C. 2469 and other depot maintenance statutes. We recommend that the Secretary of Defense require the Secretary of the Air Force to take the following actions:

- Develop the required capability in military depots to sustain core depot repair and maintenance capability for Air Force systems, and conduct and adequately document a risk assessment for mission-essential workloads being considered for privatization at the San Antonio and Sacramento depots.

- Before privatizing any Sacramento or San Antonio workload, complete a cost analysis that considers the savings potential of consolidating the San Antonio and Sacramento depot maintenance workloads at other DOD depots, including savings that can be achieved for existing workloads by reducing overhead rates through more efficient capacity utilization at underused military depots that could receive this workload.

- Use competitive procedures, where applicable, for determining the most cost-effective source of repair for workloads at the closing Air Force depots.

- Reconsider plans to delay (1) the transfer of the ground communications and electronics workload from the Sacramento depot to Tobyhanna and (2) other transfers of workload to the public or private sector, where the delays are reducing the savings the BRAC Commission estimated would be achieved from closure and consolidation.

DOD provided oral comments on our draft report.
It concurred with each of our recommendations and made several technical comments that have been incorporated where appropriate. DOD also noted several actions it is taking to respond to our recommendations. Specifically:

- DOD and the Secretary of the Air Force will develop the required capability in military depots to sustain core depot repair and maintenance capabilities for Air Force systems. Furthermore, the Air Force is in the process of conducting and documenting a risk assessment of the capability for those workloads being considered for public-private competition at the San Antonio and Sacramento depots.

- DOD agreed to consider potential savings from consolidating the San Antonio and Sacramento depot maintenance workloads at other DOD depots as a part of its planned evaluation of the public-private competitions for these workloads.

- DOD plans to use public-private competitions for determining the most cost-effective source of repair for depot workloads at Sacramento and San Antonio.

- DOD and the Air Force are working closely to finalize plans for transferring the ground communications and electronics workload from the Sacramento depot to Tobyhanna.

DOD’s response did not address its position regarding delays in transferring other Sacramento and San Antonio workloads not expected to be included in the public-private competitions. Air Force officials stated that they are considering various options for these workloads that include both private and public sector sources.
We obtained information from and interviewed officials at the Office of the Secretary of Defense; Air Force Headquarters, Washington, D.C.; Air Force Materiel Command Headquarters, Wright-Patterson Air Force Base, Ohio; the Sacramento Air Logistics Center, McClellan Air Force Base, California; the San Antonio Air Logistics Center, Kelly Air Force Base, Texas; the Warner Robins Air Logistics Center, Robins Air Force Base, Georgia; the Ogden Air Logistics Center, Hill Air Force Base, Utah; the Oklahoma City Air Logistics Center, Tinker Air Force Base, Oklahoma; the Aerospace Guidance and Metrology Center and Defense Contract Management Office, Newark Air Force Base, Ohio; the Joint Depot Maintenance Analysis Group, Gentile Station, Dayton, Ohio; the Army Industrial Operations Command, Rock Island, Illinois; the Naval Air Systems Command, Washington, D.C.; and the Naval Aviation Depot Operations Center, Naval Air Station, Patuxent River, Maryland. We also discussed privatization-in-place issues with Sacramento and San Antonio community leaders and the Defense Contract Audit Agency. This work was part of a broad-based review of depot maintenance requirements, capability, and workload distribution issues. A list of related products is provided at the end of this report. To evaluate DOD’s rationale for (1) delaying the closure of McClellan Air Force Base and the realignment of Kelly Air Force Base and (2) privatizing the Sacramento and San Antonio air logistics centers’ depot maintenance workloads, we reviewed various documents, including the Secretary of Defense’s July 13, 1995, letter to the President and a July 13, 1995, White House press release. We then discussed this rationale with DOD, Air Force Materiel Command, and air logistics center officials. Because Air Force officials have not yet determined how or when privatization-in-place will be implemented at the two closing depots, neither we nor the Air Force can develop precise cost and savings estimates.
As a result, we (1) reviewed the BRAC Commission’s cost and savings estimates and (2) estimated the “economy of scale” savings that could be achieved by using the closing depots’ workloads to reduce excess capacity in the remaining depots. To estimate the potential savings from transferring the closing depots’ workloads to the remaining depots, we allocated 8.2 million hours of work, or about 78 percent of the projected fiscal year 1999 workload, to the three remaining centers. We used a scheme developed for BRAC 1995 by the Joint Cross Service Group for Depot Maintenance, but modified it slightly. Based on input from Air Force Materiel Command and center officials, we assumed that the C-5 workload would be transferred to the Warner Robins Air Logistics Center rather than the Oklahoma City Air Logistics Center. We provided each center with a breakout of the transferring workload it would receive by commodity group. We then asked center personnel to estimate how additional workloads would affect their hourly rates by analyzing fixed- and variable-cost categories, excluding material, which we assumed would not change. The three centers used the approach and assumptions developed by executive business planners from all five centers for the downsize-in-place Air Force proposal offered during the 1995 BRAC round as an alternative to closing depots. We discussed the methodology with workload and privatization officials at the Air Force Materiel Command. They agreed that our approach was sound for assessing the impact of additional workload on a depot’s rate structure. We also provided the closing centers with an opportunity to comment on our methodology. San Antonio center officials agreed with the general approach, but commented that increases in variable costs were subjective. Sacramento center officials chose not to comment.
To determine whether Air Force plans for privatizing the closing depots are consistent with (1) laws relating to the allocation of depot maintenance workloads to the private sector and (2) the BRAC Commission’s recommendations, we identified the applicable requirements and determined their impact on DOD’s plans to privatize depot-level maintenance workloads. We conducted our review between October 1995 and October 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of Defense; the Secretary of the Air Force; the Director, Office of Management and Budget; and interested congressional committees. Copies will be made available to others upon request. If you have any questions, please contact me at (202) 512-8412. Major contributors to this report are listed in appendix I. Army Depot Maintenance: Privatization Without Further Downsizing Increases Costly Excess Capacity (GAO/NSIAD-96-201, Sept. 18, 1996). Navy Depot Maintenance: Cost and Savings Issues Related to Privatizing-in-Place the Louisville, Kentucky Depot (GAO/NSIAD-96-202, Sept. 18, 1996). Defense Depot Maintenance: Commission on Roles and Mission’s Privatization Assumptions Are Questionable (GAO/NSIAD-96-161, July 15, 1996). Defense Depot Maintenance: DOD’s Policy Report Leaves Future Role of Depot System Uncertain (GAO/NSIAD-96-165, May 21, 1996). Defense Depot Maintenance: More Comprehensive and Consistent Workload Data Needed for Decisionmakers (GAO/NSIAD-96-166, May 21, 1996). Defense Depot Maintenance: Privatization and the Debate Over the Public-Private Mix (GAO/T-NSIAD-96-146, Apr. 16, 1996) and (GAO/T-NSIAD-96-148, Apr. 17, 1996). Military Bases: Closure and Realignment Savings Are Significant, but Not Easily Quantified (GAO/NSIAD-96-67, Apr. 8, 1996). Depot Maintenance: Opportunities to Privatize Repair of Military Engines (GAO/NSIAD-96-33, Mar. 5, 1996). 
Closing Maintenance Depots: Savings, Personnel, and Workload Redistribution Issues (GAO/NSIAD-96-29, Mar. 4, 1996). Navy Maintenance: Assessment of the Public-Private Competition Program for Aviation Maintenance (GAO/NSIAD-96-30, Jan. 22, 1996). Depot Maintenance: The Navy’s Decision to Stop F/A-18 Repairs at Ogden Air Logistics Center (GAO/NSIAD-96-31, Dec. 15, 1995). Military Bases: Case Studies on Selected Bases Closed in 1988 and 1991 (GAO/NSIAD-95-139, Aug. 15, 1995). Military Base Closure: Analysis of DOD’s Process and Recommendations for 1995 (GAO/T-NSIAD-95-132, Apr. 17, 1995). Military Bases: Analysis of DOD’s 1995 Process and Recommendations for Closure and Realignment (GAO/NSIAD-95-133, Apr. 14, 1995). Aerospace Guidance and Metrology Center: Cost Growth and Other Factors Affect Closure and Privatization (GAO/NSIAD-95-60, Dec. 9, 1994). Navy Maintenance: Assessment of the Public and Private Shipyard Competition Program (GAO/NSIAD-94-184, May 25, 1994). Depot Maintenance: Issues in Allocating Workload Between the Public and Private Sectors (GAO/T-NSIAD-94-161, Apr. 12, 1994). Depot Maintenance (GAO/NSIAD-93-292R, Sept. 30, 1993). Depot Maintenance: Issues in Management and Restructuring to Support a Downsized Military (GAO/T-NSIAD-93-13, May 6, 1993). Air Logistics Center Indicators (GAO/NSIAD-93-146R, Feb. 25, 1993). Defense Force Management: Challenges Facing DOD As it Continues to Downsize its Civilian Workforce (GAO/NSIAD-93-123, Feb. 12, 1993). Navy Maintenance: Public/Private Competition for F-14 Aircraft Maintenance (GAO/NSIAD-92-143, May 20, 1992). Military Bases: Information on Air Logistics Centers (GAO/NSIAD-90-287FS, Sept. 10, 1990). The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. 
Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the Air Force's plans to privatize-in-place depot maintenance workloads at the Sacramento and San Antonio air logistics centers, focusing on: (1) the impact on excess depot capacity and operating costs at the remaining defense depots; (2) the cost-effectiveness of planned privatization initiatives, including the impact of delaying base closures until the year 2001; and (3) compliance with statutory requirements. GAO found that: (1) privatizing-in-place rather than closing and transferring the depot maintenance workloads at the Sacramento and San Antonio air logistics centers will leave a costly excess capacity situation at remaining Air Force depots that a workload consolidation would have mitigated; (2) although the Air Force's privatization initiative for the Sacramento and San Antonio depots has not progressed far enough for GAO to estimate precisely costs and savings, consolidating depot maintenance workloads at remaining underused depots could result in a net savings in 2 years or less; (3) GAO's work shows that transferring the depot maintenance workloads to other depots could yield additional economy and efficiency savings of over $200 million annually in addition to the $268 million annual savings the Base Realignment and Closure (BRAC) Commission estimated could be achieved by eliminating the McClellan and Kelly infrastructures and downsizing nonmaintenance personnel; (4) if the workload consolidation does not occur, the remaining Air Force depots are likely to become more inefficient and more costly, unless other workloads are added, costly excess capacity is eliminated, or other efficiency and economy initiatives are successfully implemented; (5) plans to delay many closure-related actions until 2001 will substantially reduce future savings envisioned by the BRAC Commission and could result in a net loss of $644.4 million between 1997 and 2001 for the Air Force and $24 million for the Army; and (6) the 
Department of Defense (DOD) stated that it will structure the San Antonio and Sacramento privatizations to comply with existing statutory restrictions, but DOD's privatization plans are still evolving and sufficient information is not available for GAO to assess whether the conversion plans will comply with existing law.
CMS, an agency within HHS, is responsible for much of the federal government’s multi-billion-dollar payments for health care, primarily through the Medicare and Medicaid programs. Medicare—the nation’s largest health insurance program—covers about 40 million elderly and disabled beneficiaries. Medicaid is a state-administered health insurance program, jointly funded by the federal and state governments, that covers eligible low-income individuals including children and their parents, and aged, blind, and disabled individuals. Each state administers its own program and determines—under broad federal guidelines—eligibility for, coverage of, and reimbursement for, specific services and items. Most Medicare beneficiaries purchase part B insurance, which helps pay for certain physician, outpatient hospital, laboratory, and other services; medical supplies and durable medical equipment (such as oxygen, wheelchairs, hospital beds, and walkers); and certain outpatient drugs. Medicare part B pays for most medical equipment and supplies using a series of fee schedules. Medicare pays 80 percent, and the beneficiary pays the balance, of either the actual charge submitted by the supplier or the fee schedule amount, whichever is less. Generally, Medicare has a separate fee schedule for each state for most categories of items, and there are upper and lower limits on the allowable amounts that can be paid in different states to reduce variation in what Medicare pays for similar items in different parts of the country. The fee schedules specify a Medicare-allowable payment amount for each of about 1,900 groups of products. Each product group is identified by a Healthcare Common Procedure Coding System (HCPCS) Level II code, and all products grouped under a code are intended to be items that are alike and serve a similar health care function. For example, one code (E1130) describes a standard wheelchair with fixed arms. 
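The part B payment rule described above amounts to a lesser-of calculation with an 80/20 split: the allowed amount is the lower of the supplier's submitted charge and the fee schedule amount, Medicare pays 80 percent of it, and the beneficiary pays the balance. A minimal sketch of that rule (the dollar figures are hypothetical, not actual fee schedule amounts):

```python
def part_b_payment(submitted_charge, fee_schedule_amount):
    """Allowed amount is the lesser of the supplier's charge and the fee
    schedule amount; Medicare pays 80 percent, the beneficiary 20 percent."""
    allowed = min(submitted_charge, fee_schedule_amount)
    return {"medicare": round(0.80 * allowed, 2),
            "beneficiary": round(0.20 * allowed, 2)}

# Charge below the fee schedule: the charge itself is the allowed amount.
low = part_b_payment(90.00, 100.00)    # medicare 72.00, beneficiary 18.00
# Charge above the fee schedule: the allowed amount is capped at the fee.
high = part_b_payment(150.00, 100.00)  # medicare 80.00, beneficiary 20.00
```

Because the fee schedule caps the allowed amount, the fee level set for each state, within the national floor and ceiling, drives what Medicare and the beneficiary actually pay.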
Many different brands can be billed under this code, so long as they fit the basic description. Medicare part B also covers roughly 450 outpatient drugs—generally those that cannot be self-administered and are related to physicians’ services, such as cancer chemotherapy, or are provided in conjunction with covered durable medical equipment, such as inhalation drugs used with a nebulizer. In addition, Medicare part B covers selected immunizations and certain outpatient drugs that can be self-administered, such as blood clotting factors and some oral drugs used in association with cancer treatment and immunosuppressive therapy. To administer Medicare part B fee-for-service claims, CMS contracts with insurance companies, referred to as carriers, which review and pay claims that have been submitted by physicians and other outpatient providers and suppliers. To ensure appropriate payment, carriers conduct claims reviews that determine, for example, whether the services claimed are covered by Medicare, are reasonable and necessary, and have been billed with the proper codes. Medicare’s size and complexity make it extremely challenging to develop payment methods that prudently reimburse providers while promoting beneficiary access to items and services. As Medicare’s steward, CMS cannot passively accept what providers want to charge the program. However, because of its size, Medicare profoundly influences health care markets. Medicare is often the dominant payer for services and products, and in such cases, it cannot rely on market prices to determine appropriate payment amounts because Medicare’s share of payments distorts the market. In addition, Medicare has had difficulty relying on competition to determine prices. Because of constraints on excluding any qualified provider from participating in the program, Medicare traditionally includes all such providers who want to participate. 
Finding ways of encouraging competition without excluding some providers—a normal leverage that purchasers use to make competition work—has been problematic. As a result, Medicare has had to administratively set payment amounts for thousands of services and items, trying to do so in ways that encourage efficient delivery, while ensuring beneficiary access to them. Adding to the complexity of setting payment amounts is Medicare’s status as a highly visible public program with certain obligations that may not be consistent with efficient business practices. For example, CMS is constrained from acting swiftly to reprice services and supplies even when prevailing market rates suggest that payments should be modified. When making substantive changes, Medicare’s enabling legislation generally requires public input. This minimizes the potential for actions to have unintended consequences. However, seeking and responding to public input from various provider and supplier groups can be a time-consuming process that can sometimes thwart efficient program management. Prior to 1987, Medicare payments for medical equipment and supplies were based on supplier charges, subject to some limitations. As part of their responsibilities to administer Medicare claims, individual Medicare carriers raised or lowered payments to suppliers in their local areas to align them with market prices. When carriers sought to adjust payments on this basis, they employed a process that involved gathering relevant pricing data from local area markets, determining new payment levels on the basis of the price information obtained, and notifying area suppliers of the changes. Although HCFA monitored carriers’ performance in carrying out these steps, it did not evaluate the appropriateness of the new payment levels established. 
In 1987, the Congress and HCFA began the process of moving the Medicare program from paying on the basis of individual providers’ charges for medical equipment and supplies and covered outpatient drugs, to developing payment methods intended to pay more prudently through use of program-determined amounts. Specifically, the Congress introduced fee schedules for medical equipment and supplies in 1987. Statewide fees were determined on the basis of average supplier charges on Medicare claims allowed in each state in 1986 and 1987, and were updated for inflation in some years. However, the agency lacked mechanisms to otherwise adjust fees to reflect marketplace changes. As a result, disparities between fee schedule amounts and market prices developed over time, and Medicare significantly overpaid for some medical equipment and supplies. In recent years, we and the HHS OIG reported on instances where Medicare payments for certain medical equipment and supplies and outpatient drugs were excessive compared with retail and other prices. One notable example of excessive Medicare payments is included in our 1995 report on surgical dressings. We estimated that Medicare could have saved almost $20 million in 1995 if it had paid the lowest wholesale prices available in a national catalog for 44 types of surgical dressings. Although Medicare’s fee schedule for surgical dressings was based on medians of retail prices found in supply catalogs when the schedule was set, Medicare’s statute did not permit HCFA to lower the fee schedule when retail prices for dressings decreased. Another instance of excessive Medicare payment was for home oxygen equipment and supplies provided to patients with pulmonary insufficiency. Medicare fee schedule allowances for home oxygen were significantly higher than the rates paid for almost identical services by the Department of Veterans Affairs (VA), which in fiscal year 1995 paid for home oxygen benefits for over 23,000 patients. 
In 1997, we estimated that Medicare could have saved over $500 million in fiscal year 1996 if it had paid rates for home oxygen comparable to those paid by VA. Medicare’s payments for outpatient drugs have been similarly excessive, although the methodology used to determine payment amounts is somewhat different and attempts to tie Medicare’s payments to market prices. In 1989, the Congress required that physician services be paid based on fee schedules beginning in 1992. The fee schedules developed by HCFA to comply with this requirement provided for all outpatient drugs furnished to Medicare beneficiaries not paid on a cost or prospective payment basis to be paid based on the lower of the estimated acquisition cost or the national average wholesale price (AWP). Manufacturers report AWPs to organizations that publish them in drug price compendia, which are typically updated annually, and Medicare carriers base providers’ payments on these published AWPs. In concept, such a payment method has the potential to be market-based and self-adjusting. The reality is, however, that AWP is neither an average nor a price that wholesalers charge. Because the term AWP is not defined in law or regulation, there are no requirements or conventions that AWP reflect the price of any actual sale of drugs by a manufacturer. Given the latitude manufacturers have in setting AWPs, Medicare’s payments are often not related to market prices that physicians and suppliers actually pay for the products. A June 1997 House Budget Committee report accompanying the bill that became the BBA, in explaining the reason for specifying a 5-percent reduction from AWP, cited a report by the HHS OIG regarding Medicare payments for outpatient drugs. Among the OIG findings were that Medicare payments ranged from 20 percent to nearly 1,000 percent of certain oncology drugs’ commercially available prices. 
Our recent work found that Medicare payments in 2001 for part B-covered outpatient drugs remained significantly higher than prices widely available to physicians and pharmacy suppliers. For example, most physician-administered drugs had widely available discounts ranging from 13 to 34 percent below AWP. Two other physician-administered drugs had discounts of 65 and 86 percent. Pharmacy suppliers—the predominant billers for 10 of the high-expenditure and high-volume drugs we analyzed—also purchased drugs at prices considerably lower than Medicare payments. For example, two inhalation drugs accounting for most of Medicare payments to pharmacy suppliers had widely available discounts averaging 78 percent and 85 percent from AWP. Despite such dramatic illustrations of disparities between Medicare payments and prices widely available to others acquiring medical equipment and supplies and covered outpatient drugs, Medicare has not had the tools to respond quickly in such instances. Carriers used to adjust payment amounts as part of their responsibility to appropriately pay Medicare claims, but in 1987, the Congress effectively prohibited use of this process to lower Medicare payment rates until 1991. In 1988, the Congress required use of a more formal “inherent reasonableness” process that could be accomplished only by HCFA, not by the carriers. In other reports, we have described this process as slow and cumbersome and have noted that it is not available for some items, such as surgical supplies. Since 1991, when HCFA was first permitted to use the inherent reasonableness process to adjust payments for medical equipment and supplies, it successfully did so only once—for blood glucose monitors—and in that instance took almost 3 years to adjust the maximum allowable Medicare payment from $185.79 to $58.71. 
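The disparities described above reduce to a comparison between Medicare's allowed amount, which is tied to AWP less the BBA's 5-percent reduction, and the price a physician or supplier actually pays to acquire the drug. A sketch of that comparison, using a hypothetical drug with a $100 AWP and the 78-percent widely available discount cited above:

```python
def awp_gap(awp, acquisition_price, awp_reduction=0.05):
    """Compare Medicare's allowed amount (AWP less the statutory reduction)
    with the supplier's actual acquisition price."""
    allowed = awp * (1 - awp_reduction)
    discount_from_awp = (awp - acquisition_price) / awp
    overpayment = allowed - acquisition_price
    return allowed, discount_from_awp, overpayment

# Hypothetical drug: AWP of $100, widely available to suppliers at $22
# (a 78-percent discount from AWP). Medicare allows 95 percent of AWP,
# so the allowed amount exceeds the acquisition price by $73.
allowed, discount, overpayment = awp_gap(100.00, 22.00)
```

Because manufacturers set AWP without any requirement that it reflect an actual sale price, nothing in this formula pulls the allowed amount toward the acquisition price.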
In 1997, in response to concerns about HCFA’s difficulties in adjusting payment rates determined to be excessive, the Congress included a provision in the BBA that gave HCFA authority to use a streamlined inherent reasonableness process to adjust payments for medical equipment and supplies and covered outpatient drugs by up to 15 percent a year. Subsequent legislation required that a final regulation taking into account public comments be published before the agency could use any inherent reasonableness authority. Because the agency has not issued the final regulation, it cannot adjust Medicare’s fee schedules to respond to market price information. The BBA also provided HCFA with opportunities to test an alternative to setting rates administratively that could be more responsive to market prices. This alternative is competitive bidding—a process allowing suppliers to compete for the right to supply their products on the basis of established criteria, such as quality and price. The BBA gave HCFA authority to use a streamlined inherent reasonableness process for part B services (excluding physician’s services). Under this authority, HCFA can adjust payments by up to 15 percent per year using a streamlined process, or can use its original process with formal notice and comment to make larger adjustments. In January 1998, the agency published an “interim final rule with comment period” for the streamlined inherent reasonableness process that became effective 60 days after it was published. This was a departure from the usual practice of first responding to public comments before issuing a final regulation. Under the interim final rule, HCFA delegated authority to use the streamlined process to the Medicare carriers that process claims for medical equipment and supplies, with final action on payment adjustments to be approved by the agency. 
The carriers attempted to lower maximum payment rates for eight groups of products, gathering information on retail prices through surveys conducted in at least 16 states. In September 1998, the carriers notified suppliers of proposed adjustments for eight groups of products and solicited comments. Industry groups representing various medical equipment and supply manufacturers and suppliers expressed serious concerns about how the inherent reasonableness process was implemented and whether the surveys were conducted properly. The Congress requested that we review the appropriateness of implementing the streamlined inherent reasonableness authority through an interim final rule and the soundness of the carriers’ surveys. Pending the results of our review, HCFA suspended the carrier-proposed payment reductions in March 1999. In November 1999, the Congress passed legislation prohibiting HCFA or the carriers from using any inherent reasonableness authority until we issued our report and the agency issued a final rule taking into account our findings and public comment. In our July 2000 report, we concluded that, while the carriers could have conducted their surveys more rigorously, the surveys and other evidence sufficiently justified the carriers’ proposed payment reductions for five of eight product groups. In our report, we recommended that HCFA clarify criteria for using its inherent reasonableness authority, strengthen agency or carrier survey methodology in the future, collect additional data on prices for the other three product groups before adjusting their payment amounts, and monitor beneficiary access after any payment changes. Although our report is almost 2 years old, CMS has not issued a final regulation that would allow it to use either its streamlined or original inherent reasonableness processes to adjust Medicare payment amounts for part B supplier-billed services. Thus, the agency lacks a tool to adjust its fee schedules, short of statutory changes. 
In order to experiment with other ways of setting Medicare’s payments for medical equipment and supplies and outpatient drugs, the BBA provided authority for HCFA to conduct demonstration projects using competitive bidding and to include home oxygen in at least one of the demonstrations. Evidence from two competitive bidding projects suggests that, for most of the items selected, competition might provide a tool that facilitates setting more appropriate payment rates and result in program savings. In its first competitive bidding demonstration, conducted in Polk County, Florida, HCFA set rates for oxygen, hospital beds, surgical dressings, enteral nutrition and supplies, and urological supplies through competitive bidding. HCFA reported that the new rates set by this competitive process in the Florida demonstration saved Medicare an average of 17 percent on the cost of these medical equipment and supply items without compromising beneficiary access to these items. In a second demonstration in San Antonio, Texas, the agency included oxygen; hospital beds; manual wheelchairs; noncustomized orthotic devices, including “off-the-shelf” items such as braces and splints; and albuterol sulfate and other nebulizer drugs. Preliminary CMS information on the San Antonio competitive bidding demonstration identified an average savings of 20 percent, without any negative effects on beneficiary access. Whether attempting to adjust payments administratively or through competitive bidding, CMS can only be effective if it has a defensible process for doing so and accurate information upon which to base action. Any change to Medicare’s payments, particularly a reduction in fees for medical equipment and supplies or covered outpatient drugs, should be accompanied by an ongoing assessment of whether the new payments adequately support Medicare beneficiaries’ access to such items and services and properly reimburse providers and suppliers. 
Such monitoring needs to examine current experience so that prompt fee adjustments can be made if access problems are found. Efforts to lower excessive payment rates through the inherent reasonableness process illustrate the difficulties CMS has in making even minor adjustments, as the agency’s actions can have wide ramifications for providers, suppliers, and beneficiaries. When HCFA tried to use its streamlined inherent reasonableness authority in 1998 to reduce payment rates for various medical equipment and supply items and outpatient drugs, it attempted to take action before responding to public comment, thereby leaving the effort open to criticism. In addition, we concluded that the carriers’ survey methodology was not rigorous enough to provide a basis to adjust fees nationally for all of the products under review. What the agency lacked was sufficient information on market prices. Such information, along with current local, as well as national, data on beneficiaries’ use of services and program expenditures, is key to setting rates administratively. Because HCFA did not have reliable acquisition cost information, its carriers engaged in a very labor-intensive information-gathering effort. One major problem CMS has when going to the marketplace to collect information is that it cannot determine the specific products Medicare is paying for when carriers process claims for medical equipment and supplies. Carriers pay claims on the basis of billing codes indicating that the supplied items belong to a particular product group. These groups can cover a broad range of product types, quality, and market prices. As a result, products that differ widely in properties, use, performance, and price are billed under the same code and the program pays the same amount. 
For example, we reported in 1998 that catheters belonging to a single product category varied in type and price, from about $1 to $18, with Medicare’s maximum fee payments ranging across states from $9.95 to $11.70. However, HCFA had no information on which catheters were being provided to beneficiaries. To address the problem of insufficient specificity, we recommended in the 1998 report that suppliers be required to include universal product numbers (UPN) as well as current billing codes on claims. UPNs and associated bar codes are increasingly used to identify specific medical equipment and supplies, similar to the way universal product codes are used in supermarkets. Manufacturers can use bar codes for each product to identify characteristics such as the manufacturer, product type, model, size, and unit of packaging. Using UPNs—or some other mechanism—incorporated into claim forms to bring more specificity to what is provided to beneficiaries could help CMS better determine appropriate payments. Under provisions in the Health Insurance Portability and Accountability Act of 1996 (HIPAA), HHS has adopted standards for coding medical services, procedures, and equipment and supplies. These provisions were aimed at simplifying data reporting and claims processing requirements across all public and private payers. Under the standards, HCPCS Level II was designated as the code set for medical equipment and supplies. Its limitation in specificity argues for evaluating whether the current code set can be adjusted to better distinguish between various products currently grouped within a single HCPCS Level II code. Lack of specificity has been a similar problem for the codes used to define inpatient hospital procedures. The HIPAA standard code set for reporting hospital inpatient procedures is the International Classification of Disease, 9th Edition, Clinical Modification, Volume 3 (ICD-9-CM Vol. 3). 
The inadequacy of this code set is widely recognized, as it lacks both the specificity to accurately identify many key aspects of medical procedures and the capacity to expand in order to appropriately incorporate codes in response to new technology. In fact, HHS recognized that in adopting the ICD-9-CM Vol. 3 as a HIPAA standard, the agency would need to replace it, given the code set’s limitations. As a consequence, CMS plans to implement a new code set, the International Classification of Disease, 10th Edition, Procedure Coding System (ICD-10-PCS), which would provide much greater specificity. Our work on payments for covered outpatient drugs, which identified strategies used by other payers to obtain prices closer to acquisition costs, underscores the value of accurate information for determining appropriate payments. For example, the VA uses the leverage of federal purchasers to secure verifiable information on actual market transactions by private purchasers—specifically, the prices that drug manufacturers charge their “most-favored” private customers. To enable the VA to determine the most-favored-customer price, by statute, manufacturers who wish to sell their products to the federal agencies involved are required to provide information on price discounts and rebates offered to domestic customers and the terms and conditions involved, such as length of contract periods and ordering and delivery practices. The manufacturers provide this information and agree to offer the VA and other government purchasers drugs at these prices, subject to VA audit of their records, in order to have state Medicaid programs cover their drugs. This type of information could be helpful in setting payment amounts for certain Medicare drugs. It is already available to CMS, but for use only in the Medicaid—not the Medicare—program. 
With congressional approval, CMS could use the information provided to Medicaid to determine appropriate prices for Medicare that would be based on actual prices being paid in the market. One key step would be to determine the formula to use to calculate payments based on the price data. Most likely, Medicare would not set payments to match the prices paid by most favored customers but would need to pay closer to average market prices to ensure access for all beneficiaries and adequate payments to providers. Results from the competitive bidding demonstrations suggest that competition can also serve as a tool to obtain more appropriate prices for medical equipment and supplies and outpatient drugs. By competing a small number of products and limiting the geographic area of competition, CMS took steps to manage the process, which included monitoring of beneficiary access and product quality. In its fiscal year 2003 budget, the Administration proposed expanding competitive bidding for medical equipment and supplies nationally, which it estimates could save $240 million in fiscal year 2003 and $5 billion over 10 years. The Administration’s expansion proposal to translate these limited demonstrations into a competition involving a larger number of products nationally would be a substantial undertaking and may not be practical or appropriate for all products. CMS would require new authority to begin to use competitive bidding outside of a demonstration. A key element to the new authority would be the extent to which and the basis whereby providers could be excluded from Medicare. While Medicare normally allows any qualified provider to participate in the program, competitive bidding may be most effective only by limiting the number of providers or suppliers who could provide items or services. For example, in the Polk County demonstration, only 16 out of the 30 bidders were selected to participate. 
Limiting the number of participating suppliers obviously has an effect on both beneficiaries and suppliers. While provider participation is not an entitlement, the effects of exclusion—in terms of numbers of providers and the volume of services affected—need to be identified and assessed. Similarly, for some products, who the provider is may be of little consequence for the beneficiary, but for others, maintaining greater beneficiary choice and direct access to the provider could be important. Whether payment rates are set or adjusted through competitive bidding or administrative fee-setting, monitoring to ensure that beneficiaries continue to have access to the items or services is a critical component of such efforts. For example, when the Congress reduced Medicare home oxygen payment rates by 25 percent effective January 1, 1998, and an additional 5 percent effective January 1, 1999, it wanted assurance that beneficiaries could continue to receive satisfactory service. To evaluate the impact of the home oxygen payment reduction on access and quality, the BBA required studies conducted by us and HHS. Neither study found any significant access problems with the payment reduction. In addition, home oxygen was included in both competitive bidding demonstrations, and through those demonstrations, prices were reduced further. HCFA estimated that Medicare’s home oxygen payments were reduced by 16 percent in the Polk County demonstration, without beneficiary access problems. Such monitoring is important, not just when required by statute but as part of an ongoing effort to ensure the Medicare program is effectively serving its beneficiaries. Unfortunately, such studies to review the effects of payment reductions on access are the exception. As we have reported before, CMS has not been able to generate data that are timely, accurate, and useful on payment and service trends essential to effective program monitoring. 
One of the principal lessons to be drawn from the many BBA payment reforms is that newly implemented policies need a thorough assessment of their effects. Policy changes, particularly those that constrain payment, almost inevitably spark calls for revisions. Considerations of such revisions need to be based on sufficient information so that, at one extreme, policies are not unduly affected by external pressures and premature conclusions as to their impact, and at the other extreme, policies do not remain static when change is clearly warranted. CMS has not been well-positioned to collect and analyze data regarding beneficiaries’ use of services—information that is essential to managing the program effectively. This year’s 5.4 percent reduction of physicians’ fees from what was paid in 2001 raised concerns about beneficiaries’ access. While prior information available on physicians’ willingness to see Medicare beneficiaries did not indicate access problems, this information is somewhat dated. Informed decisions about appropriate payment rates and rate changes cannot be made unless policymakers have detailed and recent data on beneficiaries’ access to needed services. Mr. Chairman, this concludes my prepared remarks. I will be happy to answer any questions you or the Subcommittee Members may have. For further information regarding this testimony, please contact me at (312) 220-7600. Sheila Avruch, Hannah Fein, Sandra Gove, Joy Kraybill, and Craig Winslow made contributions to this statement.
Medicare has paid higher than market rates for various medical equipment and supplies and often considerably higher than provider acquisition costs for Medicare-covered outpatient drugs. Congress has enacted a series of legislative changes affecting payment methods and payment adjustment authority for medical equipment and supplies and outpatient drugs since the late 1980s. However, progress in setting appropriate rates has been mixed, owing, in part, to various constraints faced by the agency responsible for administering Medicare--the Centers for Medicare and Medicaid Services (CMS). Because of the program's size, scope, and role as a public payer, Medicare has limited options to set and adjust payments for medical equipment, supplies and outpatient drugs. Medicare's method of paying for medical equipment and supplies is through fee schedules that remain tied to suppliers' historical charges to Medicare rather than market prices. Medicare's payment approaches lack flexibility to keep pace with market changes, and, as a result, Medicare often pays higher prices than other public payers. Previous efforts to lower Medicare's overly generous payments suggest several lessons. First, payment changes are most effectively implemented when the process used to set or adjust a rate is defensible. Second, the information on Medicare claims for medical equipment and supplies is not specific enough to enable CMS to determine which products Medicare is actually paying for. Also, for the foreseeable future, CMS will have to continue to rely on fee schedules based on historical charges in setting payment rates for medical equipment and supply items.
Thorough and comprehensive planning is crucial to the success of any large, long-term project, especially one with the cost, complexity, and high stakes of the decennial census. Indeed, the Bureau’s past experience has shown that the lack of proper planning can increase the costs and risks of downstream operations. Past experience has also underscored the importance of strong oversight of the census to (1) inform congressional decision making on budgetary and operational matters, (2) raise Congress’s confidence that the Bureau has chosen an optimum design and will manage operations and control costs effectively, and (3) help ensure that the progress the Bureau has made thus far in refining, planning, and testing census-taking activities continues as the Bureau shifts into the operational phases of the decennial. Given the escalating cost of the census in an era of serious national fiscal challenges, oversight will be particularly important. Bureau officials estimate the total life-cycle cost of the 2010 Census will be around $11.3 billion, which would make it the most expensive census in our country’s history, even after adjusting for inflation. Although some cost growth can be expected, in part, because the number of housing units—and hence the Bureau’s workload—has grown, the cost escalation has far exceeded the housing unit increase. The Bureau estimates that the number of housing units for the 2010 Census will increase by 10 percent over 2000 Census levels; meanwhile, the average cost per housing unit for 2010 is expected to increase by approximately 29 percent from 2000 levels (from $56 to $72), nearly five and a half times the $13 it cost to count each household in 1970 (see fig. 1). 
A key reason for the increasing cost of the census is that, because of various societal trends such as concerns over personal privacy, more non-English speakers, and more people residing in makeshift and other nontraditional living arrangements, the Bureau is finding it increasingly difficult to locate people and get them to participate in the census. As a result, the Bureau needs to spend more money simply to achieve the accuracy of earlier enumerations. This can be seen, for example, in the rising cost of securing public participation in the census. During the 1990 Census, the Bureau spent an average of $0.88 per housing unit (in 2000 dollars) to market the census and was able to rely on a pro-bono advertising campaign. The response rate was 65 percent. For the 2000 Census, recognizing that extra effort would be needed to motivate participation, the Bureau used a paid advertising campaign developed by a consortium of private-sector advertising agencies. It cost an average of $3.19 per housing unit (in 2000 dollars) and achieved a response rate of 64 percent. As the Bureau plans for 2010, maintaining cost-effectiveness will be one of the greatest challenges confronting the agency. The Bureau’s preparations for the 2010 Census appear to be further along than at a similar point during the planning cycle for the 2000 Census. For example, the fundamental design of the 2010 Census has the potential to contain costs and improve coverage and accuracy, and the Bureau’s planning process for 2010 is generally more thorough than was the case for the 2000 Census. At the same time, the 2004 test and, to date, the 2006 test have identified areas where improvements are needed. Uncovering trouble spots is an important objective of any test, so it is not surprising that problems were found; indeed, finding them should be expected and commended. Moreover, the Bureau has taken steps to resolve the issues that have surfaced. 
Remaining activities in the 2006 test, as well as the 2008 Dress Rehearsal, will help determine the effectiveness of the Bureau’s efforts. The Bureau developed a design for the 2010 Census early in the decade, and Congress has been supportive of the Bureau’s approach. The situation 10 years ago was vastly different. In testimony before Congress in late 1995, we expressed concern that Congress and the Bureau had not agreed on the fundamental design and budget of the census, and that the longer this situation continued, the more the opportunity for a well-planned census would be lost and the greater the risk that hundreds of millions of dollars would be spent inefficiently. Key features of the design of the 2010 Census include the following: (1) enhancing procedures for building its address list, known as the Master Address File, and its associated geographic information system, called the Topologically Integrated Geographic Encoding and Referencing (TIGER)® database; (2) replacing the census long-form questionnaire with the American Community Survey (ACS); and (3) conducting a short-form-only decennial census supported by early research and testing. Also noteworthy is the fact that for the 2010 Census, the Bureau plans to make the most extensive use of contractors in its history, turning to the private sector to supply a number of different mission-critical functions, including the Bureau’s nationwide data processing activities and improvements to the Master Address File and TIGER. The Bureau estimates that of the $11.3 billion total cost of the census, around $1.9 billion (approximately 17 percent) will be spent via its seven largest contracts, which include information technology systems, advertising, and the leasing of local census offices. The Bureau is relying more heavily on contractors because it recognizes it needs to look outside the agency to obtain the expertise and services essential for a successful enumeration. 
That said, the Bureau’s contracting efforts during the 2000 Census did not always go smoothly, and it will be important for Bureau management to focus on its procurement activities to help ensure the 2010 contractors fulfill the Bureau’s expectations. Our companion testimony at today’s hearing provides greater detail on two of the Bureau’s information technology contracts. In concept, the Bureau’s approach has the potential to achieve its principal goals for the 2010 Census, which include (1) increasing the relevance and timeliness of data, (2) reducing operational risk, (3) increasing coverage and accuracy, and (4) containing costs. However, some aspects of the design, including the use of technology that has never been employed for the decennial, as well as the heavy reliance on contractors, introduce new risk. This is not inappropriate, as the need to secure a complete count and to address problems with past censuses calls for bold new initiatives that entail risk. What will be important is how effectively the Bureau manages those risks. Another sign of progress can be found in the thoroughness of the Bureau’s planning process, where the Bureau has taken several positive steps to correct problems it encountered when planning past censuses. For example, early in the decade, senior Bureau staff considered various goals for the 2010 Census and articulated a design to achieve those goals. Moreover, staff with operational experience in the census participated in the 2010 design process. According to Bureau officials, this was a departure from the 2000 planning effort, when Bureau staff with little operational experience played key roles in the design process, which resulted in impractical reform ideas that could not be implemented. 
At the same time, the Bureau’s planning process could benefit from an overall business or project plan that (1) includes milestones for completing key activities; (2) itemizes the estimated cost of each component; (3) articulates a clear system of coordination among project components; and (4) translates key goals into measurable, operational terms to provide meaningful guidance for planning and measuring progress. Some, but not all, of this information is available in various documents, but one would need to piece it together. Noting the importance of this information to inform congressional decision-making and oversight of the census, as well as to improve the Bureau’s planning process, we recommended in our January 2004 report that the Bureau combine this information into a single, comprehensive document. The Bureau disagreed with the recommendation, although it said it would develop such a plan nonetheless and provide it to GAO, Congress, and other stakeholders. The Bureau has not yet issued this document. A complete and accurate address list is the cornerstone of a successful census because it identifies all households that are to receive a census questionnaire and serves as the control mechanism for following up with households that fail to respond. Although the Bureau went to great lengths to build a complete and accurate Master Address File for the 2000 Census, of the 116 million housing units contained in the database, the Bureau estimates it incorrectly included 2.3 million housing units and missed another 2.7 million housing units. In light of these and other problems, the Bureau concluded that enhancements to the Master Address File and TIGER were necessary to make census data more complete and accurate. 
In the preliminary results of our ongoing work on enhancements to the Master Address File and TIGER, we found that the Bureau has developed procedures to help resolve each of the broad categories of problems experienced in 2000, including addresses that were duplicated, missed, deleted, and incorrectly located on a map (a problem known as geocoding error). The Bureau has several ongoing evaluations that should provide valuable information on the effectiveness of these procedures. The Bureau is also taking steps to improve the accuracy of the TIGER maps, which, among other benefits, should help prevent geocoding errors. In June 2002, the Bureau awarded an 8-year contract, in excess of $200 million, intended to, among other tasks, correct in TIGER the location of every street, boundary, and other map feature so that coordinates are aligned with their true geographic locations. According to the Bureau, the contractor completed this work for 250 counties in 2003, 602 counties in 2004, and 623 counties in 2005. Furthermore, the contractor plans to deliver the remaining 1,758 county maps between 2006 and 2008. However, based on this time line, it appears that several hundred county TIGER maps will not be updated in time for the Local Update of Census Addresses (LUCA) program, through which the Bureau gives local and tribal government officials the opportunity to review and suggest corrections to the address lists and maps for their jurisdictions. LUCA is to begin in July 2007; according to the current schedule, the Bureau will still have 368 counties left to update in 2008. These counties will not have the most current maps to review but will instead be given the most recent maps the Bureau has available. According to the Bureau, some of the maps have been updated for the American Community Survey, but others have not been updated since the 2000 Census, which could affect the quality of a local government’s review. 
The Bureau is aware of the overlapping schedules, but told us that it needs to start LUCA in 2007 in order to complete the operation in time for address canvassing. LUCA is an example of how the Bureau partners with external entities, tapping into their knowledge of local populations and housing conditions in order to secure a more complete count. In 1994, Congress required the Bureau to develop a local address review program to give local and tribal governments greater input into the Bureau’s address list development process. When the Bureau conducted LUCA for the 2000 Census, the results were mixed. In our 1999 congressional testimony, we noted that many local governments said they were satisfied with specific aspects of the materials and assistance the Bureau provided to them. At the same time, LUCA may have stretched the resources of local governments, and participation in the program could have been better. The census schedule will also be a challenge for an operation called address canvassing, in which census workers are to walk every street in the country, verifying addresses and updating maps as necessary. The Bureau has allotted 6 weeks to verify the nation’s inventory of 116 million housing units. This translates into a completion rate of over 2.75 million housing units every day. The challenge in maintaining this schedule can be seen in the fact that for the 2000 Census, it took the Bureau 18 weeks just to canvass “city-style” address areas, which are localities where the U.S. Postal Service uses house-number and street-name addresses for most mail delivery. Of particular concern is the past unreliability of the hand-held mobile computing devices (MCD) the Bureau plans to use for its address canvassing and nonresponse follow-up operations (see fig. 2). For address canvassing, the MCDs are to be loaded with address information and maps; for nonresponse follow-up, they will be used in lieu of paper questionnaires and maps to collect household information. 
The MCDs are also equipped with global positioning system (GPS) receivers, a satellite-based navigational system to help enumerators locate street addresses and to collect coordinates for each structure in their assignment area. Bureau officials expect the MCDs will help improve the cost-effectiveness of the census by allowing it to eliminate millions of paper questionnaires and maps, improve the quality of address data, and update enumerators’ nonresponse follow-up workload on a daily basis. The move from paper to digital was a very positive step. At the same time, rigorous testing is essential to assess the devices’ durability and functionality and to ensure that enumerators are able to use them. The MCDs were first evaluated for nonresponse follow-up as part of the 2004 Census Test, and for address canvassing in 2005 as part of the 2006 Census Test. The Bureau will use MCDs next month for nonresponse follow-up in the 2006 test. In both our prior and ongoing work, we found the test results have been mixed. On the one hand, the census workers we observed had little difficulty using the MCDs. For example, address canvassers we interviewed said the electronic maps were accurate and that they were able to find their assignment areas with relative ease. On the other hand, the reliability of the MCDs proved troublesome during the 2004 test and, to date, the 2006 test. For example, in 2004, the MCDs experienced transmission problems, memory overloads, and difficulties with a mapping feature--all of which added inefficiencies to the nonresponse follow-up operation. The Bureau is using MCDs made by a different manufacturer for the 2006 test, which resolved some of these problems, but other difficulties emerged during address canvassing. For example, the device was slow to pull up and exit address registers, accept the data entered by the canvassers, and link map locations to addresses for multi-unit structures. Furthermore, the MCDs would sometimes lock up, requiring canvassers to reboot them. 
Canvassers also found it difficult to transmit an address and map location that needed to be deleted from the master list. The Bureau was unable to fix this problem, so canvassers had to return to the local census office, where technicians dealt with the problem. The reliability of the GPS was also problematic. Some workers had problems receiving a signal, and when a signal was available, it was sometimes slow to locate assignment areas and correct map locations. According to the Bureau, these problems reduced the productivity of the canvassers, and the Bureau stopped the operation 10 days after it was scheduled to finish. Even with the extension, however, the Bureau was unable to complete the job, leaving census blocks in both Austin and the Cheyenne Indian Reservation unverified. According to the Bureau, the problems were caused by unstable software and insufficient memory. The Bureau had delayed the start of address canvassing for a month at both test sites to troubleshoot the MCDs. However, it was unable to fix all the problems and decided to move forward with the test. The MCDs will be evaluated again next month as part of the 2006 Census Test, and we will be on-site to assess the extent to which the Bureau has fixed the MCD problems. However, even if the MCDs prove to be more reliable, questions remain for the future. The Bureau has acknowledged that the MCD’s performance is an issue, but believes it will be addressed as part of its contract for the Field Data Collection Automation (FDCA) program, which is aimed at automating the Bureau’s field data collection efforts and is scheduled to be awarded later this month (the MCDs used for the 2006 test are off-the-shelf purchases that were customized by the Bureau). As a result, the 2008 Dress Rehearsal will be the first time the entire system—including the contractor’s MCD—will be tested under conditions that are as close as possible to the actual census. 
If new problems emerge, little time will be left to develop and test any refinements. Our field observations also suggest that the training of census workers could be improved to help ensure they follow proper procedures. Failure to do so could affect the reliability of census data. During the 2004 test, for example, we observed enumerators who did not read the coverage and race/ethnicity question exactly as worded, and did not properly use flashcards the Bureau had developed that were designed to help respondents answer specific questions. During the address canvassing operation for the 2006 test, we observed workers who were not properly verifying addresses, or were unsure of what to do when they happened upon dwellings such as duplex housing units. In our past work, we recommended that the Bureau take a more strategic approach to training, and that local census offices include in their instruction special modules covering the unique living arrangements that might be prevalent in that particular jurisdiction. The Bureau acknowledged that the shortcomings we identified require improvement, and indicated that for the 2006 test, it will enhance training to reinforce the procedural requirements. The Bureau also intends to incorporate additional training to prepare enumerators to handle realistic situations encountered in their work. As part of our field work for the 2006 test, we will review the improvements the Bureau made to its training procedures. If the operational challenges of conducting a census were not daunting enough, the Bureau faces the additional challenge of a possible brain drain. In our June 2005 report, we noted that the Bureau has projected that 45 percent of its workforce will be eligible to retire by 2010. The Bureau has long benefited from its core group of managers and experienced staff who developed their expertise over several census cycles; their institutional knowledge is critical to keeping the census on track. 
Indeed, according to Bureau officials, many experienced employees retired or left the agency after the 1990 Census, which affected planning efforts for the 2000 Census. Leading organizations go beyond simply backfilling vacancies, and instead focus on strengthening both current and future organizational capacity. In this regard, the Bureau acknowledges that re-engineering the 2010 Census requires new skills in project, contract, and financial management; advanced programming and technology; and other areas. To help address this important human capital issue, the Bureau has implemented various succession planning and management efforts to better position the agency to meet its future skill requirements. Still, we found that the Bureau could take additional steps to enhance its succession planning and management efforts and recommended that the Bureau (1) strengthen the monitoring of its mission-critical workforce, (2) seek appropriate opportunities to coordinate and share core succession training and development programs with other agencies, and (3) evaluate core succession training and development programs to gauge the extent to which they contribute to enhancing organizational capacity. The Bureau agreed with our recommendations and indicated it was taking steps to implement them. On August 29, 2005, Hurricane Katrina devastated the coastal communities of Louisiana, Mississippi, Texas, and Alabama. A few weeks later, Hurricane Rita plowed through the border area of Texas and Louisiana. Damage was widespread. In the wake of Katrina, for example, the Red Cross estimated that nearly 525,000 people were displaced. Their homes were declared uninhabitable, and streets, bridges, and other landmarks were destroyed. Approximately 90,000 square miles were affected overall and, as shown in figure 3, entire communities were obliterated. 
The destruction and chaos caused by the storms underscore the nation’s vulnerability to all types of hazards and highlight how important it is for government agencies to consider disaster preparedness and continuity of operations as part of their planning. We have had a preliminary discussion with the Bureau on this topic and will continue to assess the Bureau’s contingency planning as part of our oversight of the 2010 Census. Moreover, it will be important for the Bureau to assess the impact the storms might have on its census-taking activities, as well as whether the affected areas have any special needs for data. Securing a complete count, a difficult task under normal circumstances, could face additional hurdles along the Gulf Coast, in large part because the baseline the Bureau will be working with—streets, housing stock, and the population itself—will be in flux for some time to come. According to the Bureau, different parts of the agency work on hurricane-related issues at different times, but no formal body has been created to deal with the hurricanes’ impact on the 2010 Census. The Bureau anticipates that by 2008, as it is preparing to conduct address canvassing, people will have decided whether or not to return. At that time, the Bureau believes it will be in a better position to identify vacant, occupied, and new construction for 2010. Although Census Day is still several years away, preliminary activities, such as operations for building the Master Address File, are to occur sooner. Consequently, a key question is whether the Bureau’s existing operations are adequate for capturing the migration that has taken place along the Gulf Coast, the various types of dwellings in which people live, and the changes to roads and other geographic features that have occurred, or whether the Bureau needs to develop enhanced or additional procedures to account for them. 
For example, new housing and street construction could require more frequent updates of the Bureau’s address file and maps, while local governments’ participation in LUCA might be affected because of the loss of key personnel, information systems, or records needed to verify the Bureau’s address lists and maps. It will also be important for the Bureau to work with Congress and state and local governments to determine whether the hurricane-affected areas have any special data needs to track the economic and social well-being of the region and benchmark the recovery process. Although the decennial census would not be the instrument to collect this information, it might be feasible to do so through one of the Bureau’s other survey programs. To date, the Bureau plans to do a special tabulation of its American Community Survey (ACS) data for the areas affected by Katrina that will provide information on the population that remained in the region. However, because of several methodological issues, it will not be an “official” ACS data product. The Bureau is also trying to use data from administrative records to update its population estimates of the area. Building on these efforts, some key considerations for the future include the following:
1. How have the hurricanes affected the counties and parishes in the Gulf Coast region, and what are the implications, if any, for the Bureau’s future operations?
2. Which external and internal stakeholders, including federal, state, and local government agencies, as well as nonprofit organizations and specific areas of expertise, need to be included in the Bureau’s decision-making process?
3. To what extent does the Bureau have a plan (including objectives, tasks, milestones, etc.) for assessing and acting on any new requirements imposed by the hurricanes?
4. Do the hurricane-affected areas have any special data requirements, and if so, how should they be addressed and which stakeholders need to be involved? 
In summary, over the last few years, the Bureau has put forth a tremendous effort to help ensure the success of the 2010 Census. The Bureau is moving forward along a number of fronts, and has been responsive to the recommendations we made in our past work aimed at improving its planning process, address file, MCDs, training, human capital, and other census-taking activities. Still, some aspects of the census are proving to be problematic, and a number of operational questions need to be resolved. To be sure, challenges are to be expected in an endeavor as vast and complex as the decennial census. Moreover, shortcomings with prior censuses call for the Bureau to consider bold initiatives for 2010 that entail some risk. Thus, in looking toward the future, as the planning and testing phases of the 2010 Census begin to wind down, it will be important for Congress to monitor the Bureau’s progress in (1) identifying and diagnosing problems, (2) devising cost-effective solutions, and (3) integrating refinements and fixes in time to be evaluated during the Dress Rehearsal in 2008. Indeed, while the ramp-up to 2010 is making progress, past experience has shown that Congress has every reason to remain vigilant. As we have done throughout the past several decades, we look forward to supporting the subcommittee in its decision-making and oversight efforts. Mr. Chairman, Mr. Clay, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the subcommittee might have. For further information regarding this testimony, please contact Brenda S. Farrell at (202) 512-3604, or by e-mail at [email protected]. Individuals making contributions to this testimony included Betty Clark, Robert Goldenkoff, Carlos E. Hazera, Shirley Hwang, Andrea Levine, Anne McDonough-Hughes, Lisa Pearson, Michael Volpe, and Timothy Wexler. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Rigorous planning is key to a successful census as it helps ensure greater effectiveness and efficiency. The U.S. Census Bureau (Bureau) estimates the 2010 Census will cost around $11.3 billion, which would make it the most expensive census in our country's history, even after adjusting for inflation. GAO was asked to testify on (1) the Bureau's progress in preparing for the 2010 Census, (2) the challenges that Hurricanes Katrina and Rita might pose for the Bureau's future activities, and, (3) more broadly, the importance of planning for a range of events that could severely disrupt the census. The Bureau's preparations for the 2010 Census are making progress along several fronts. Of particular note is (1) the re-engineered design of the census, which holds promise for controlling costs and maintaining accuracy; (2) the Bureau's early planning process which was more rigorous than for the 2000 Census; and (3) the Bureau's greater willingness to outsource key census-taking operations that would be difficult for it to carry out on its own. At the same time, it will be important for the Bureau to resolve issues that pose a risk to a successful census. For example, the Bureau plans to use hand-held mobile computing devices (MCD) to develop the census address list and collect data from millions of households that do not respond to the initial census questionnaire. The MCDs are an important step forward because they are designed to replace many of the paper questionnaires and maps that were used in past censuses, and are a key element of the Bureau's Field Data Collection Automation program. The Bureau has never before used the devices in a decennial. In tests held in 2004 and 2006 to date, census workers found the MCDs easy to use, but sometimes unreliable, which reduced efficiency. Some workers also deviated from prescribed procedures which points to the need for better training. 
The Bureau has taken steps to address these issues and future tests will help determine the effectiveness of the Bureau's actions. The Bureau also faces a possible brain drain, as 45 percent of its workforce will be eligible to retire by 2010. Although the Bureau has taken preventative measures, it could improve those efforts by, among other actions, strengthening the monitoring of its mission-critical workforce. Hurricanes Katrina and Rita highlight the importance of contingency planning and examining whether the Bureau's existing operations are adequate for capturing the demographic and physical changes that have occurred along the Gulf Coast. Overall, as the Bureau's preparations for 2010 continue, it will be important for Congress to monitor the Bureau's progress in (1) identifying and diagnosing problems, (2) devising solutions, and (3) integrating refinements in time to be evaluated during the Census Dress Rehearsal scheduled for 2008.
A primary goal of the U.S. national drug control strategy is to reduce the amount of cocaine entering the United States. In November 1993, the executive branch issued a counternarcotics policy for cocaine in the Western Hemisphere. The strategy called for, among other things, a controlled shift in emphasis from the transit zone to the source countries. The transit zone is the 2-million square mile area between the U.S. and South American borders and covers the Caribbean Sea, the Gulf of Mexico, Central America, Mexico, and the Eastern Pacific. For the purposes of this report, the Caribbean portion of the transit zone consists of the leeward islands, the windward islands, the Bahamas, Jamaica, the Dominican Republic, Haiti, Puerto Rico, and the U.S. Virgin Islands. In April 1994, the executive branch issued the National Interdiction Command and Control Plan to strengthen interagency coordination. The plan called for creating several joint interagency task forces made up of representatives from federal agencies, including the Department of Defense (DOD), the U.S. Customs Service, and the U.S. Coast Guard. Within the transit zone, the Department of State manages and coordinates U.S. government efforts while DOD supports U.S. law enforcement agencies by tracking suspected drug-trafficking activities and provides training to host nations. The U.S. Customs Service and the U.S. Coast Guard also provide aircraft and ships to assist in tracking and interdicting drug-trafficking activities. The various U.S. activities are expected to be coordinated through the Joint Interagency Task Force East (JIATF-East), located in Key West, Florida. JIATF-East was to be supported by personnel from various agencies such as the Department of State, the Drug Enforcement Administration (DEA), and the Federal Bureau of Investigation. 
According to the State Department’s 1996 International Narcotics Control Strategy Report, about 780 metric tons of cocaine is produced each year in South America. U.S. officials believe that about 30 percent of the cocaine shipped into the United States comes through the Caribbean into Puerto Rico and other U.S. entry points. The remaining 70 percent is shipped through Mexico. While trend data on cocaine shipments through the Eastern Caribbean and Puerto Rico are based on inexact information, U.S. officials believe that the level of activity may be increasing. Figure 1 shows the drug-trafficking routes in the Eastern Caribbean. Puerto Rico is the major entry point for cocaine moving through the Eastern Caribbean. U.S. drug officials believe that after 1993 traffickers moved some of their activities from the Bahamas to Puerto Rico because U.S. interdiction efforts in the Bahamas had increased the risk to traffickers. Puerto Rico has become the primary transshipment point into the southeastern United States. An August 1995 U.S. interagency report stated that Puerto Rico and the U.S. Virgin Islands accounted for 26 percent of the documented attempts to smuggle cocaine into the continental United States during 1994. U.S. officials stated that cocaine-related activity in Puerto Rico and the U.S. Virgin Islands has increased. U.S. Customs Service cocaine seizures increased from 5,507 kilograms in fiscal year 1993 to 8,700 kilograms in fiscal year 1995. Reports are mixed on whether drug-trafficking activities are increasing throughout other islands in the Eastern Caribbean and into the southern United States. A June 1995 local law enforcement report on air-smuggling activities concluded that drug-trafficking activities from the Caribbean into southern Florida had increased significantly. 
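The production estimate, route shares, and seizure figures above imply some simple arithmetic that can be sanity-checked. The sketch below is purely illustrative; the tonnage and percentages are the report's estimates, not precise measurements, and the variable names are mine:

```python
# Illustrative arithmetic based on the report's estimates (all figures approximate).
production_mt = 780      # estimated annual South American cocaine production, metric tons
caribbean_share = 0.30   # share believed to move through the Caribbean
mexico_share = 0.70      # share believed to move through Mexico

caribbean_mt = production_mt * caribbean_share  # implied Caribbean flow
mexico_mt = production_mt * mexico_share        # implied Mexican flow

# U.S. Customs Service cocaine seizures, fiscal years 1993 and 1995 (kilograms)
seizures_fy93 = 5_507
seizures_fy95 = 8_700
pct_increase = (seizures_fy95 - seizures_fy93) / seizures_fy93 * 100

print(round(caribbean_mt), round(mexico_mt), round(pct_increase))  # 234 546 58
```

On these estimates, roughly 234 metric tons would move through the Caribbean and 546 through Mexico each year, and the cited seizure figures represent about a 58-percent increase over the 2-year period.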
The report also stated that drug-trafficking activities in southern Florida are resulting in a return to the patterns of the 1970s and early 1980s when drug detection and interdiction efforts in the Caribbean were minimal. In contrast, DOD officials stated that they did not have any data indicating that there was air-smuggling activity into Florida from the Caribbean area. U.S. Interdiction Coordinator (USIC) staff also noted that they were unaware of significant increases of air smuggling into southern Florida. However, U.S. law enforcement officials stated that various intelligence sources confirm that cocaine-related air activities are increasing in southern Florida. According to the State Department, total drug seizures in the Bahamas represented only a small percentage of cocaine trafficking in the transit zone. DEA officials stated that recently traffickers increased their activities throughout the area, but they could not accurately assess the extent of this increase. U.S. Customs reported that, since the destruction of the base at Gun Cay during Hurricane Hugo and the diminution of maritime enforcement since that time, there have been fewer drug interdiction missions in the Bahamas. Neither Customs nor the Royal Bahamian Defense Forces are able to deny access to the favorite off-load and stash sites used by drug traffickers. In March 1996, the State Department reported that, although cocaine seizures during 1995 remained below the levels of the late 1980s, there were indications of increased cocaine-trafficking activities. Hispaniola refers to Haiti and the Dominican Republic. Hispaniola serves as a convenient staging area for air and maritime drug-related activities because its long, unpatrolled coastline and numerous airstrips facilitate staging and refueling operations. A May 1995 U.S. report stated that cocaine transshipment through Haiti had reemerged since the lifting of the United Nations embargo in October 1994. The U.S. 
Embassy reported that drug trafficking may also be increasing in the Dominican Republic. The leeward islands extend from the U.S. Virgin Islands to Dominica and include the British Virgin Islands, Anguilla, Antigua and Barbuda, St. Martin, St. Kitts-Nevis, Montserrat, Guadeloupe, and Dominica. The islands are hubs for commercial air and sea traffic. Their proximity to Puerto Rico makes them vulnerable to drug trafficking. Most of the drugs shipped through the islands are destined for further transit through Puerto Rico to the United States. The windward islands extend from Dominica to Grenada and include Martinique, St. Lucia, St. Vincent and the Grenadines, Grenada, Barbados, and Trinidad and Tobago. The islands are used for drug transit and storage. For example, Trinidad and Tobago is only 7 miles from Venezuela and is a natural staging site for drugs smuggled from South America to other Caribbean islands. However, significant increases in drug-trafficking activity have recently been observed. In February 1994, the U.S. Embassy in Barbados reported that law enforcement officials throughout the islands had reported an escalation in air drops and other trafficking activities, which were leading to increases in crime. Drug enforcement officials told us that drug traffickers are increasingly relying on noncommercial and commercial maritime vessels (such as go-fast boats, sailing and fishing vessels, and containerized cargo ships) to transport drugs. DOD records show that the number of known drug-trafficking aircraft events in the transit zone declined by about 65 percent from 1992 to 1995 and that known maritime events increased by about 40 percent from 1993 to 1995. “Known events,” according to DOD officials, represent clear, firm information about a drug shipment, confirmed delivery, aborted mission, or apprehension. “Results” are apprehensions, seizures, or jettisons. 
Table 1 lists the number of air and maritime events and results for 1992-95 and shows that maritime drug activity accounted for more events and results than drug shipments via air. According to DOD, drug smuggling by commercial vessels is the primary maritime method for shipping drugs in the transit zone. U.S. Customs and DEA officials believe that smugglers have concealed large shipments of cocaine in legitimate containers aboard commercial sea vessels. In some cases, crew members have concealed smaller shipments in parasite containers attached to the hull of the mother vessel. DOD and U.S. Coast Guard officials stated that the large number of ships and complexity of smuggling via commercial vessels severely restricts interdiction at sea. These cargo ships are not routinely inspected because they contain perishable goods that, if inspected, could spoil. U.S. officials stated that the large number of noncommercial vessels traveling in the transit zone makes it difficult to detect or intercept many drug-trafficking activities. Vessels routinely transporting cocaine between the Bahamas and Florida can blend in with legitimate traffic. DOD believes that the number of noncommercial vessels is difficult to quantify. While air events and results have decreased significantly since 1992, smugglers continue to use general aviation aircraft to move cocaine to transshipment and staging areas in the Caribbean or Mexico. The decline in recorded air events could be due to a combination of factors, including a reduced capability by U.S. agencies to detect air activities, increased sophistication by cocaine smugglers, and traffickers’ preference for maritime smuggling methods. Drug traffickers are using sophisticated communications technology and global positioning systems to avoid detection when airdropping cocaine to boats in the transit zone. U.S. 
officials stated that the traffickers use cellular phones and global positioning systems to determine drop coordinates prior to departure. The traffickers relay the coordinates to the boats that will pick up the airdrop. According to U.S. officials, the global positioning systems are available commercially and are accurate to within 10 meters of a target. Because of these systems, traffickers do not have to openly communicate as frequently as they did in the past. According to DOD and U.S. law enforcement officials, the increasing use of these technologies makes it more difficult to gather the information needed to track and interdict cocaine shipments through the Caribbean because traffickers can detect whether they are being followed. According to State Department and U.S. law enforcement officials, most Caribbean host nations are cooperating in fighting drug trafficking. However, most Caribbean nations lack resources and law enforcement capabilities and have some corruption problems that hamper their efforts to combat drug trafficking. The Department of State’s March 1996 International Narcotics Control Strategy Report provides a detailed discussion on the Caribbean countries. With few exceptions, the report concluded that cooperation with U.S. authorities was excellent in 1995. For example, Barbados was recognized for its excellent cooperation with U.S. law enforcement, strong enforcement, tough courts, and public mobilization that resulted in a drop in crime and an increase in drug arrests. However, the report noted that the governments of many Caribbean countries were unable to finance their law enforcement operations at a level commensurate with the trafficker threat. The report noted the following: The Government of the Commonwealth of the Bahamas strives to fulfill the goals and objectives of U.S.-Bahamian bilateral counternarcotics accords. A key objective of U.S. 
counternarcotics assistance is to strengthen the Bahamas’ counternarcotics institutions so they can assume a greater share of the financial burden of combating traffickers. However, even with stronger counternarcotics institutions, the Bahamas will probably remain dependent on U.S. assistance because of the Bahamas’ small population, geography, and limited resources. The Government of the Dominican Republic has fully cooperated with U.S. agencies. However, it lacks effective enforcement mechanisms and the political will to expose and eliminate the corruption that threatens the country’s fragile democratic institutions. The Government of Haiti has shown the political will to cooperate, but its lack of institutional experience undermines its effectiveness. Haiti lacks a national police counternarcotics unit and coast guard, a maritime law enforcement agreement, money laundering legislation, and a national counternarcotics plan. Cooperation between the Government of Jamaica and U.S. law enforcement is considered to be at its highest level in 5 years. However, Jamaica has not completed its counterdrug legislation or fully implemented it. Although the government passed an asset forfeiture act in 1994, it has still not prosecuted an asset forfeiture case. The Government of Antigua and Barbuda does not have an effective drug and money laundering enforcement policy. The Government of Dominica has severe resource constraints but has fully cooperated with U.S. law enforcement agencies. In 1995, JIATF-East personnel developed their own assessment of various Eastern Caribbean nations’ maritime law enforcement capabilities. The assessment was based on a subjective judgment of JIATF-East officials regarding the relationships they experienced in operations with host nations. The assessment concluded that, while several countries had relatively good law enforcement capability, others had only fair to poor law enforcement capabilities. 
In table 2, we show JIATF-East’s inventory of the Eastern Caribbean nations’ interdiction assets. The table shows that few assets are available to Caribbean nations for counternarcotics purposes. State Department officials stated that many national forces do not always cooperate with one another because of insufficient political will, an inability to coordinate, and insufficient available resources. A February 1995 law enforcement agency report stated that cooperation between local law enforcement agencies in Trinidad and Tobago has not been good. In August 1995, a U.S. law enforcement agency reported that there was an underlying problem of mistrust between the Dutch government and local law enforcement agencies in the Antilles and Aruba. U.S. officials stated that Caribbean nations will always have limited capabilities because they have small populations and limited funds available for counternarcotics. As a result, U.S. officials are trying to improve interdiction capabilities by signing agreements that allow U.S. personnel to conduct antidrug sea and air operations within the territorial waters and airspace of these nations. U.S. agencies are also providing limited supplies and training to the police forces and the judicial institutions. By the end of 1992, the United States had entered into bilateral agreements with the Bahamas, Turks and Caicos, and Belize regarding shipboarding, shipriding, and pursuit and entry into territorial waters. Since March 1995, the State Department has concluded a series of maritime counternarcotics agreements with the Dominican Republic, St. Kitts and Nevis, Antigua and Barbuda, Dominica, St. Lucia, Grenada, Trinidad and Tobago, and St. Vincent and the Grenadines. As of March 1996, other maritime counternarcotics agreements were pending with Barbados, Jamaica, Honduras, Haiti, Colombia, Ecuador, and the Dominican Republic. 
Many of these agreements are limited to maritime matters, and most of the agreements do not authorize overflight and ordering aircraft to land. Currently only the Trinidad and Tobago agreement allows overflight of territorial airspace for counternarcotics operations along with order-to-land authority. The Bahamas agreement contained overflight authority. Eastern Caribbean nations have granted overflight authority on an ad hoc basis in support of combined operations. New efforts are underway to address overflight and air issues. Even though the United States has reached agreement with some Caribbean countries, it does not have one with Cuba that would allow forces to either track or interdict drug-trafficking activity that may occur within Cuban territorial waters or airspace. U.S. Customs reported that, in both maritime and aviation smuggling, it has noted the use of the waters and airspace adjacent to Cuba as transfer or airdrop locations. However, DOD data indicate that, for the period between fiscal years 1991 and 1995, only 13 of 947 known air events involved flights over Cuban airspace. Various U.S. officials told us that, despite changes in governments, corruption is still widespread throughout the Caribbean. Drug traffickers’ influence in the region is evident. Payoffs are a common form of corruption, particularly in countries with poorly paid public servants. Law enforcement and State Department reports support these statements. A February 1995 law enforcement agency report on one island indicated that corruption may be occurring at high levels of the government. This report stated that there were indications that the leader of a political party was linked to the illegal drug trade. Furthermore, the report also stated that there were numerous allegations regarding corruption in the country’s customs operation at the airport. A March 1995 U.S. 
law enforcement report stated that in the Bahamas drug law enforcement efforts have been plagued by corruption. The report further stated that, faced with promises of instant wealth, police officers assigned to these islands often succumb to the bribes offered by traffickers. Corruption, according to the report, can also be found in Nassau in just about every police division. Efforts by honest authorities are often thwarted by corrupt officials. Other 1995 U.S. agency reports stated the following: Although there were no officially confirmed cases of corruption in St. Lucia, a recent undercover operation indicated the appearance of impropriety by high-ranking law enforcement officials. On one island, there were continuing rumors and allegations regarding the corruption of high-ranking government officials (including officials in the police department). Also, the current administration and opposition party were both perceived to be involved in illegal activities. In Antigua and Barbuda, some individuals with close ties to the current regime are involved in narcotics trafficking. In 1994, authorities reported processing 148 cases involving 152 defendants. Convicted traffickers could pay a heavy fine instead of going to jail. During February and March 1995, the State Department reported the following: In St. Kitts, violence involving politics and drugs plagued the island in 1994, threatening the stability of its minority government. In 1994, a private pleasure craft with the former St. Kitts ambassador to the United Nations, his wife, and family aboard disappeared and was presumed to be lost at sea. The former ambassador had been publicly accused of money laundering and drug charges. Both Colombian and local traffickers have attempted to exploit a tense domestic environment. In Trinidad and Tobago, unsubstantiated rumors of corruption have implicated ministers, politicians, and judicial and law enforcement personnel at every level. 
Despite these rumors, authorities have not initiated any investigations. In addition, alleged police drug payoffs identified by a 1993 Scotland Yard team have not been pursued. The report noted that the team could not fully develop cases because of limited police cooperation. According to the report, structures to deal with corruption issues are either not in place or not functioning. From fiscal year 1992 to fiscal year 1995, the budgets for most federal activities in the transit zone declined significantly. A presidential directive issued in November 1993 called for a gradual shift in emphasis from the transit zone to the source countries. As indicated in table 3, the shift in U.S. resources away from the transit zone actually began as early as fiscal year 1993. DOD and most other federal agencies saw a large portion of their transit zone resources reduced in fiscal year 1994. As indicated in table 4, the anticipated shift of U.S. funding to efforts in the source countries never materialized, and counternarcotics funding in the source countries declined from fiscal year 1993 levels in fiscal years 1994 and 1995. Although the actual amount of funds dedicated to source country efforts decreased, source country funding as a percentage of the total increased. Various agencies stressed that decisions to reduce the funding devoted to drug interdiction were often beyond their control. For example, DOD noted that a resource shift from the transit zone to source countries did not occur because its overall drug budget was reduced in fiscal year 1994 by $300 million, $200 million of which was taken from transit zone operations. Also, the U.S. Coast Guard noted that during the early 1990s, it was responding to increasing migrant activity in the Caribbean, which culminated in two mass exoduses of migrants from Haiti and Cuba. 
During this period, assets were reallocated from counterdrug missions to respond to this high-priority, national and international humanitarian crisis. The U.S. Customs Service also reported that budget reductions affected its ability to fulfill its missions. The Customs Marine Law Enforcement Program lost 51 percent of its budget, 54 percent of its personnel, and 50 percent of its vessels in fiscal year 1995. According to the U.S. Customs Service, these reductions significantly affected its ability to fulfill its traditional maritime role. Between December 1994 and November 1995, DOD deactivated three Bahamian Aerostat radars, two Caribbean Basin Radar Network sites, two mobile tactical radars, and two remote high-frequency Link 11 transmitters/receivers. As indicated in figure 2, the loss of these radars significantly reduced the coverage area. Between 1994 and 1995, DOD activated two Relocatable Over the Horizon Radar systems. Although this radar system provides a larger coverage footprint than microwave radars, it is less likely to detect an air event and is less accurate in vectoring interceptors. U.S. law enforcement officials have reported that lost radar capabilities have hampered their operations in and around the Bahamas. A March 1995 report concluded that the loss of radar coverage had hampered operations to detect suspect aircraft flying to the Bahamas. Another report noted that the loss of aerostat balloons and ground base radars left the Bahamas virtually free of detection and monitoring assets. DEA officials stated that the reduction in law enforcement resources and direct asset support (e.g., aircraft) has affected its operations. As indicated in table 5, the number of ship days devoted to drug interdiction went from 4,448 ship days in the peak fiscal year 1993 to 2,668 ship days in fiscal year 1994 and 2,845 ship days in fiscal year 1995. 
The reductions involved almost all classes of ships. As table 5 shows, in fiscal years 1993 and 1994, the number of ship days for frigates significantly declined from fiscal year 1992 levels. During the same period, the Navy began to deploy other classes of vessels, such as Ocean-Going Radar Picket Ships. This change resulted in reduced capability. These radar picket ships are outfitted with air search radar and are deployed for aerial detection and monitoring. They are not employed for surface law enforcement and, due to their low speed, are not well suited for a surface mission. In addition to reduced radar coverage and reduced maritime deployments, the number of Airborne Warning and Control System sorties also declined between fiscal years 1993 and 1995. For example, DOD reported that flight hours flown in the transit zone declined by 52 percent, from 38,100 hours in fiscal year 1992 to 18,155 hours in fiscal year 1995. DOD officials stated that Airborne Warning and Control System aircraft were flown to the maximum extent possible based on crew availability, operational tempo, and reduced asset availability due to other world hot spots. Cocaine seizures in the entire transit zone have declined from 1991-92 levels. As shown in figure 3, cocaine seizures dropped significantly from 70,336 kilograms in fiscal year 1992 to 37,181 kilograms in fiscal year 1995. Air seizures accounted for the greatest amount of decline, from 40,253 kilograms in fiscal year 1992 to only 14,564 kilograms in fiscal year 1995. Maritime seizures increased as a proportion of total seizures, accounting for about 61 percent in fiscal year 1995 compared to about 43 percent in fiscal year 1992. The decline in recorded cocaine seizures is likely due to a combination of factors, including reduced capability by U.S. agencies to detect air and maritime activities and cocaine traffickers’ increased smuggling sophistication. 
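The percentage figures in this passage follow directly from the underlying totals. The short check below is illustrative only (variable names are mine; the underlying hour and kilogram totals are the report's figures):

```python
# Verify the cited percentages against the reported totals.

# Transit zone flight hours, FY1992 vs. FY1995
flight_hours_fy92, flight_hours_fy95 = 38_100, 18_155
decline_pct = (flight_hours_fy92 - flight_hours_fy95) / flight_hours_fy92 * 100
print(round(decline_pct))  # 52 -- matches the reported 52-percent decline

# Cocaine seizures (kilograms): total and air components, FY1992 and FY1995
total_fy92, air_fy92 = 70_336, 40_253
total_fy95, air_fy95 = 37_181, 14_564
maritime_share_fy92 = (total_fy92 - air_fy92) / total_fy92 * 100
maritime_share_fy95 = (total_fy95 - air_fy95) / total_fy95 * 100
print(round(maritime_share_fy92))  # 43 -- maritime share of seizures, FY1992
print(round(maritime_share_fy95))  # 61 -- maritime share of seizures, FY1995
```

The maritime shares are obtained by subtracting air seizures from the totals, consistent with the 43- and 61-percent figures cited above.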
In 1995, the ONDCP contracted with Evidence-Based Research to conduct a study to (1) develop a baseline inventory for fiscal year 1994 of interdiction and law enforcement operations and resources in the transit zone and (2) consider the impact on disruption success rates with a $200-million and a $500-million increase in resources. The study had a number of recognized limitations, including a low level of confidence in its predictions and a limited scope. For example, the scope of the study did not analyze the potential benefits of investing resources in the source countries. It reported that in fiscal year 1994, drug smugglers were not disrupted in 69 percent of the attempts to bring drugs into the United States. With a $200-million and a $500-million increase in spending, the study estimated within a 10- to 20-percent confidence level that the smugglers’ success rate would decline to 58 percent and 53 percent, respectively. If funding were increased, the study suggested the following order of priority: Increase intelligence, which, because of its relatively low cost, has the greatest leverage and is critical for responding to the maritime threat. Improve disruption capability because, without it, law enforcement would be unable to respond to the targets identified by increased and improved intelligence. Increase detection and monitoring to fill geographic gaps and ensure an ability to link intelligence and disruption capability. The study noted that the federal policy challenge is not only to determine the benefits from direct investment in the transit zone but also to consider whether the investment of a similar level of resources elsewhere in the drug strategy might produce even more benefits. U.S. officials stated that they generally agreed that if additional funds were provided for the transit zone that they should follow the priorities contained in the contractor’s report. 
However, they pointed out that the study’s low confidence level made the conclusions about stopping drug activities highly questionable. DEA officials stated that the conclusions of the study were questionable because no one knows the actual amount of cocaine that is flowing through the transit zone into the United States. These data would be needed to address the study’s conclusions about the potential success of increased interdiction efforts. The executive branch has not developed a plan to implement the U.S. antidrug strategy in the Caribbean. DOD, the Department of State, and law enforcement agencies have various agreements to implement the national drug strategy in the Caribbean region. However, counternarcotics officials expressed concern over the lack of overall responsibility for implementing the current cocaine strategy in the Caribbean. Although agencies had developed individual operational plans, they cited the lack of a coordinated regional action plan as the foremost impediment to accomplishing the goals of the national strategy. Furthermore, they believed that implementing a coordinated regional plan, if one were developed, would be difficult unless someone with real authority was in charge. DOD officials responsible for implementing the detection and monitoring program stated that, because no authority existed that would require participating agencies to commit resources to drug interdiction efforts, it was difficult for them to develop effective plans. Participating agencies indicated that they often had to juggle competing priorities at a time when they were downsizing. Various U.S. officials noted that there is a need for leadership and commitment by ONDCP to ensure that agencies are carrying out their missions to achieve U.S. counternarcotics objectives in the Caribbean. These officials stated that neither ONDCP nor the USIC had authority to direct other participating agencies in meeting agreed-to resource commitments and operational plans. 
DOD officials stated that if the U.S. government were serious about eradicating drugs in the United States, ONDCP needed to become more authoritative and directive. Because participating agencies have not adequately staffed JIATF-East, it has not achieved the interagency culture initially hoped for at its creation. In April 1994, ONDCP and the participating agencies approved the National Interdiction Command and Control Plan. This plan provided for establishing three geographically oriented counterdrug Joint Interagency Task Forces and a Domestic Air Interdiction Coordination Center. The task forces were to be headed and staffed by DOD, the U.S. Customs Service, and the U.S. Coast Guard. A major premise of the plan was that the full-time personnel assigned to the task forces would become stakeholders in its operations. It was anticipated that this would ensure close planning and operational coordination; the availability of federal assets; and a seamless handoff of suspected air, sea, or land targets. Other agencies that either had an interest in or were affected by the operations were to provide liaison personnel. Unfortunately, participating agencies have not provided the required staffing to the task force and, thus, JIATF-East has been dominated by DOD personnel and has not achieved the intended interagency makeup. The U.S. Customs Service has provided only 8 of 22 authorized staff. U.S. Customs stated that it could not provide additional staff due to agency downsizing. Furthermore, JIATF-East officials experienced problems with the personnel assigned by the U.S. Customs Service. For example, some U.S. Customs personnel lacked the proper security clearances and could not be trained as operators in the classified watch environments. Also, U.S. 
Customs personnel sent to fill the high-level positions of Vice Director and Deputy Director for Plans had not obtained the security clearances required for these positions and could not participate in planning for and using DOD classified assets. The Department of State has not filled a position to meet the JIATF-East requirement due to downsizing. Although the Federal Bureau of Investigation periodically assigned an intelligence analyst on temporary duty, it has not assigned a full-time person because of personnel constraints. Federal Bureau of Investigation officials stated that they have developed a plan to assign two agents—an intelligence analyst and a Supervisory Special Agent. These officials stated that the plan has not been approved. Although DEA had a Supervisory Special Agent serving as a liaison officer, it disagreed with JIATF-East over integrating a DEA person into the task force's operational activities and did not fill an intelligence analyst position because of resource constraints. Agency responses to staffing the USIC have also been inadequate. ONDCP agreed to an interagency staffing level of 11 positions for USIC, including 5 positions to be filled from the U.S. Coast Guard and 1 each from the Joint Chiefs of Staff, the Office of the Secretary of Defense, the Central Intelligence Agency, the U.S. Customs Service, the Department of State, and DEA. As of March 1996, 2 of the 11 positions had not been filled. The Department of State and the Office of the Secretary of Defense have not filled these positions. Although progress has been made in improving intelligence sharing in the last 2 years, it remains a contentious issue among various collectors and users of intelligence data. In June 1994, DOD, along with other federal agencies, assessed counterdrug support programs in the transit zone. 
A major conclusion of the review was that, although accurate intelligence was essential to efficient transit zone operations, transit zone intelligence functions were hampered by (1) legal and agency-imposed limitations on access to law enforcement intelligence, (2) limited predictive analysis, and (3) problems of host nation corruption. Available intelligence information was not considered timely or specific enough regarding locations to support successful operations. DOD concluded that better coordination of intelligence and targeting information among users would improve resource use and recommended a concerted effort to alleviate the effects or reduce the scope of constraints on interagency information sharing. According to DOD officials, the requirements for collecting, retaining, and sharing counterdrug information and intelligence with other federal agencies are contained in a myriad of executive orders, individual agency regulations, and agreements between agencies. Also, the sharing of counterdrug information and intelligence with U.S. allies is governed by many of the same executive orders, regulations, and agreements, as well as by existing bilateral agreements. JIATF-East officials told us that they had found limited understanding of the regulations and much misinformation about intelligence sharing within the counternarcotics community. DOD officials stated that ONDCP had issued the Interdiction Intelligence Support Plan in March 1995 to ensure that JIATF-South, the Domestic Air Interdiction Coordination Center, the Intelligence Analysis Center, and the U.S. Customs Service National Aviation Center were provided access to the tactical information necessary to perform their missions. DOD officials believe the current regulations allow sharing and dissemination of significant information beyond that currently being provided. 
They also believe that restrictions on information sharing are most likely the result of institutional practices and can be rectified by implementing existing procedures, not by creating additional ones. U.S. law enforcement officials believed that the sharing of their intelligence and information with other agencies was consistent with the legal limitations on the availability of the information and with existing regulations. DEA officials noted that there are legal limits on the intelligence they can provide to other federal agencies when it is developed from grand jury information, wiretaps, and court sealing orders. They also noted that some intelligence is not released in order to protect sources and the integrity of ongoing investigations. DEA officials also stated that the El Paso Intelligence Center provides JIATF-East with the information necessary to track suspect aircraft and vessels until the respective U.S. and foreign authorities can take appropriate law enforcement action. While the debate over whether DOD is receiving required intelligence continues, ONDCP has been involved in developing an intelligence infrastructure to implement the plan and improve intelligence sharing. The agencies have installed a system that allows them to pass information from one database to another on an interagency network. Moreover, according to JIATF-East officials, Caribbean host nations are also concerned about the significant lack of counterdrug information flowing back to their counterdrug forces. The officials said that continued host nation cooperation in counterdrug programs may depend on improvements in intelligence and information sharing with host nation forces. However, given the widespread corruption within the region, it may be difficult to strike a balance that satisfies all parties. 
JIATF-East records covering known counterdrug events occurring between October 1, 1994, and November 30, 1995, showed that in 87 of 92 cases other federal agencies had fully cooperated with it and provided the required operational assistance. JIATF-East officials stated that they were pleased with the cooperation and contributions from U.S. Customs air resources. We noted five occasions on which the U.S. Customs Service did not support JIATF-East requests to track and pursue suspected drug smugglers. In these cases, JIATF-East officials stated that U.S. Customs always had valid reasons, such as asset limitations, the geometry of the intercept problem, or the timeliness of the notification. We recommend that the Director of ONDCP develop a regional action plan focused on the Caribbean part of the transit zone to fully implement the U.S. cocaine policy for the Western Hemisphere. At a minimum, the plan should determine the resources and staffing needed and delineate a comprehensive strategy to improve host nation capabilities and commitment to counternarcotics interdiction. ONDCP, USIC, and DEA provided written comments on a draft of this report (see apps. I through III); the Departments of State and Defense and the U.S. Customs Service provided oral comments. ONDCP, the Departments of State and Defense, and the U.S. Customs Service generally agreed with the report’s major conclusions and recommendations. ONDCP stated that many of the recommendations were sound and that it was in the process of implementing some of them. ONDCP said it will carefully examine all of the recommendations in preparing the 1996 National Drug Control Strategy. Several agencies, including USIC, provided additional information or suggested language to clarify the facts presented in this report. We have incorporated these comments into the report. DEA raised concerns regarding intelligence sharing in the Caribbean. 
DEA believed that every effort was being made to share intelligence within the counternarcotics intelligence community. However, as we noted in our report, JIATF-East and DOD still voiced concerns about intelligence sharing. Several agency comments addressed the impact that budget reductions and downsizing have had on the agencies’ ability to support transit zone operations. For example, State Department officials noted that the Congress had substantially reduced the State Department’s budget below the levels requested by the President for international law enforcement programs. Also, ONDCP stated that successive cuts in the interdiction budgets over the past several years have dramatically reduced the resources available for interdiction efforts from the transit zone to the source countries. We interviewed officials and reviewed pertinent documents in Washington, D.C., at ONDCP, the Departments of State and Defense, DEA, the U.S. Coast Guard, and the U.S. Customs Service. We also interviewed officials and reviewed documents at the Office of the U.S. Interdiction Coordinator, located within the headquarters of the U.S. Coast Guard. In addition, we interviewed officials and reviewed pertinent documents at the U.S. Atlantic Command in Norfolk, Virginia; JIATF-East in Key West, Florida; and the offices of DEA, the U.S. Customs Service, and the U.S. Coast Guard in Miami, Florida. We also interviewed officials from the U.S. Customs Air Wing in Puerto Rico and DEA’s office in Nassau, Bahamas. We conducted our review between November 1995 and March 1996 in accordance with generally accepted government auditing standards. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the Director of ONDCP, the Secretaries of the Departments of Defense and State, the Commissioner of the U.S. 
Customs Service, the Commandant of the U.S. Coast Guard, the U.S. Interdiction Coordinator, the Administrator of DEA, the Director of the Federal Bureau of Investigation, and interested congressional committees. We will make copies of this report available to others upon request. Please contact me on (202) 512-4128 if you or your staff have any questions concerning this report. Major contributors to this report were Louis Zanardi, Ronald Hughes, and Robert Jaxel. The following is GAO’s comment on the Office of National Drug Control Policy’s (ONDCP) letter dated April 3, 1996. 1. ONDCP has the statutory authority to certify the drug budgets of every federal agency and department. However, ONDCP has limited authority to direct agencies to meet agreed-to resource commitments and operational plans. The following are GAO’s comments on the United States Interdiction Coordinator’s letter dated March 15, 1996. 1. We have made appropriate technical changes to the report. 2. The Joint Interagency Task Force East’s (JIATF-East) assessment of the countries’ political will was deleted from the report after further discussions with the State Department and JIATF-East officials. However, the State Department concurred with our conclusion that U.S. antidrug activities are impeded by some countries’ lack of political will, corruption, and limited local law enforcement capabilities. These conclusions are supported by the State Department’s March 1996 International Narcotics Control Strategy Report. The following are GAO’s comments on the Drug Enforcement Administration’s (DEA) letter dated April 2, 1996. 1. The JIATF-East assessment of the countries’ political will was deleted from the draft report after further discussions with the Department of State and JIATF-East. 2. The report includes information on the El Paso Intelligence Center’s role in providing information to JIATF-East. 3. The Department of State concurred with our conclusion that U.S. 
counterdrug activities are impeded by a lack of political will, corruption, and limited local law enforcement capabilities in some countries. Furthermore, the report is consistent with the State Department’s March 1996 International Narcotics Control Strategy Report. 4. The report clarifies JIATF-East’s subjective views. 5. The statement is supported not only by statements made by JIATF-East officials but also by various law enforcement agency reports. 6. We have clarified the report and eliminated the apparent contradiction. 7. Although drug seizures were relatively high before 1994, they still represented a relatively low percentage of the drugs transiting the area. 8. This information is taken from Department of State reports and law enforcement reports. Moreover, the concern of those reporting unsubstantiated rumors and allegations is not only whether the allegations are true but, more importantly, that the country often is not investigating them. While most U.S. officials agreed that corruption was a problem, the evidence that it occurred was admittedly weak, and we took care to properly characterize it as such. 9. Our prior reports dealt with existing conditions in the 1989 to 1991 time frame. We are not suggesting that the increased funding cited in the ONDCP study will lead to any greater interdiction success. 10. We have added DEA statements concerning its belief that JIATF-East is receiving all the actionable intelligence it requires and the limitations DEA faces in providing intelligence. Nevertheless, there is clearly a disagreement between JIATF-East and DEA over whether JIATF-East is receiving all of the necessary intelligence, notwithstanding the progress that has been made, including the stationing of two JIATF-East personnel at the El Paso Intelligence Center. DOD believes that a better understanding of current regulations would further improve the sharing of intelligence information. 
Pursuant to a congressional request, GAO reviewed drug trafficking activities in the Caribbean, focusing on: (1) the nature of these activities in the transit zone, particularly in the Eastern Caribbean; (2) host nation impediments to an effective regional control strategy; (3) U.S. agencies' capabilities to interdict drug trafficking throughout the Caribbean transit zone; and (4) federal agency planning, coordination, and implementation of U.S. interdiction efforts. GAO found that: (1) U.S. officials believe that drug trafficking through the Caribbean is increasing; (2) drug traffickers have shifted their drug transportation operations from primarily air to commercial and marine transportation and are using advanced technologies to counter U.S. interdiction efforts; (3) the U.S.-Caribbean strategy attempts to strengthen host nations' political will and capabilities to support U.S. objectives, but most host nations lack the resources to conduct antidrug operations; (4) the United States has entered into agreements with several governments that give it authority to operate in their territorial waters and airspace; (5) widespread political and police corruption in the Caribbean hampers U.S. interdiction efforts; (6) budget cuts have reduced the Department of Defense's and law enforcement agencies' interdiction capabilities in the transit zone and the expected increases in funds for source country activities have not materialized; (7) lost radar and ship capabilities had the greatest impact on air and surface interdictions; (8) cocaine seizures declined by almost one-half from fiscal year (FY) 1991 to FY 1995; (9) the Administration has not developed a regional, antidrug implementation plan, adequately staffed interagency organizations with key personnel, or fully resolved intelligence-sharing issues; and (10) the two organizations with coordination responsibilities lack the authority to command the use of any agency's resources.
Generally, HCFA considers transportation costs to be part of physicians’ practice expense for a service under Medicare’s physician fee schedule. For example, physicians do not receive separate transportation payments when they visit Medicare beneficiaries in nursing homes. However, HCFA does not follow this policy for the transportation of equipment used to perform diagnostic tests. HCFA established specific guidance for carriers to follow regarding portable x-ray and EKG services. Because HCFA did not issue specific instructions for other diagnostic tests, such as ultrasound, each Medicare carrier developed its own policies. Section 1861(s)(3) of the Social Security Act provides the basis for the coverage of diagnostic x-rays furnished in a Medicare beneficiary’s residence. HCFA believes that, because of the increased costs associated with transporting x-ray equipment to the beneficiary, the Congress intended for HCFA to pay an additional amount for the transportation service furnished by an approved portable x-ray supplier. Thus, HCFA has established specific procedure codes to pay for the transportation of x-ray equipment. HCFA added EKG services allowed in homes to the established list of approved services that suppliers may provide and established a code to pay for the transportation of EKG equipment. Many Medicare carriers limited payment of transportation costs for EKG services to portable x-ray suppliers. However, others had allowed it for other types of providers, such as independent physiological laboratories (IPL). HCFA never established a national policy for transportation costs related to ultrasound services. Each carrier developed its own policy. The medical directors for each of the carriers decided whether to reimburse for transportation costs separately. In 15 states, carriers had a policy to reimburse separately for transportation costs associated with ultrasound services. 
Beginning January 1, 1996, carriers could allow transportation payments for only the following services: (1) x-ray and standard EKG services furnished by an approved portable x-ray supplier and (2) standard EKG services furnished by an IPL under special conditions. For all other types of diagnostic tests payable under the physician fee schedule, travel expenses were considered “bundled” into the procedure payment. For example, carriers could no longer make separate transportation payments associated with ultrasound services. After further review, HCFA again revised its policy. HCFA concluded that the statute authorized carriers to make separate transportation payments only for portable x-ray services. Therefore, HCFA published a final regulation providing that, effective January 1, 1997, carriers would no longer make separate transportation payments associated with EKG services. The enactment of the Balanced Budget Act (BBA) in August 1997 caused additional changes in Medicare’s transportation payment policy. First, BBA temporarily restored separate payments for transporting EKG equipment, but not ultrasound equipment, during 1998. The law requires the Secretary of Health and Human Services to make a recommendation by July 1, 1998, to the Committees on Commerce and Ways and Means of the House of Representatives and the Committee on Finance of the Senate on whether there should be a separate Medicare transportation fee for portable EKGs starting in 1999. Second, BBA phases in a prospective payment system for skilled nursing care that will pay an all-inclusive per diem rate for covered services. Beneficiaries needing skilled care after being discharged from the hospital are covered under Part A for 100 days of care during a benefit period. Part A coverage includes room and board, skilled nursing and rehabilitative services, and other services and supplies. 
Thus, the per diem rate paid to nursing facilities would include all services during the period the beneficiary is receiving posthospital extended care. For example, services such as EKGs and ultrasound will no longer be paid for separately but will be included in the per diem rate. The prospective payment provision begins July 1, 1998. Third, BBA establishes an ambulance service fee schedule beginning in 2000. This provision is designed to help contain Medicare spending on ambulance services. Medicare paid for more than 14 million EKG and 5 million ultrasound services in 1995 at a cost to the Medicare program of about $597 million and $976 million, respectively. Most EKG and ultrasound services were performed in physicians’ offices or hospitals. In 1995, about 2 percent of the EKG and less than 1 percent of the ultrasound services were provided in beneficiaries’ homes or nursing homes, costing the Medicare program about $12 million for the EKGs and $8 million for the ultrasound services. Of these services, about 88 percent of the EKG and 82 percent of the ultrasound services were done in a nursing home. These services were usually provided by portable x-ray suppliers and IPLs. Table 1 compares these services in these settings. Because HCFA regulations allowed EKG transportation payments to be made only to portable x-ray suppliers and certain IPLs for EKG services done in a beneficiary’s residence, it is not surprising that these providers accounted for 83 percent of all Medicare EKG services performed in nursing homes. Likewise, these two types of providers accounted for a high proportion of the Medicare ultrasound services provided in nursing homes. General practitioners, cardiologists, and internists also provided EKG and ultrasound services. In 1995, 1,317 providers were doing EKGs and 337 were doing ultrasound services in nursing homes. Of the total EKG providers, 676 were portable x-ray suppliers and 75 were IPLs. 
Of the total ultrasound providers, 51 were portable x-ray suppliers and 83 were IPLs, and combined they accounted for more than half of the ultrasound services done in nursing homes. About one-fifth of the states accounted for a disproportionately high concentration of EKG and ultrasound services in 1995, compared with these states’ nursing home populations. In addition, it appears that these services were generally provided by a few large providers. Thus, this change in transportation policy will have a larger effect on Medicare spending in some geographic areas. Eleven states accounted for nearly three-fourths of the 255,000 EKGs done in nursing homes. This appears to be disproportionately high when compared with the nursing home population in the 11 states. Figure 1 shows the use rates in each state per 100 Medicare nursing home residents. Furthermore, a handful of providers in each of these states accounted for most of the services. For example, in New York 7 percent of the providers accounted for 77 percent of the services. (See table 2.) Similarly, the data show that 10 states accounted for more than 84 percent of the ultrasound services done in nursing homes in 1995. The use rate in these 10 states appears to be somewhat higher than in the 40 other states. Figure 2 shows the ultrasound use rates in each state. Less than half of the portable x-ray suppliers and IPLs did most of the ultrasound services for which separate transportation payments were made, and only a handful of them did more than half of these services. Data show that 54 portable x-ray suppliers and IPLs did 89 percent of these services. Further, 11 of these 54 providers accounted for 52 percent of the transportation claims. Similar to what we found in the EKG data, there were a few high-volume providers in the 10 states, as shown in table 3. 
About 19 percent of the EKGs and 21 percent of the ultrasound tests done in nursing homes in 1995 would be unaffected by any change in the transportation payment policy because BBA eliminates separate payments for services provided to beneficiaries in skilled facilities while their stay is covered under posthospital extended care. An additional 37 percent of the portable EKGs and 68 percent of the ultrasound tests were done without the providers’ receiving additional payments for transporting the equipment. Consequently, 56 percent of the EKG services and 89 percent of the ultrasound tests provided to beneficiaries in their place of residence would be unaffected by the elimination of separate transportation payments. There is some uncertainty, however, as to whether (and to what extent) providers will cut back on services for which they previously received a transportation payment. Nonetheless, it is reasonable to assume that at least some of these services would also continue under a revised payment policy. If providers reduced services in nursing homes, some residents would be inconvenienced by having to travel to obtain these tests. In some instances, the nursing home may need to provide transportation or staff to accompany a resident to a test site. Consequently, nursing homes could be affected as well. In the future, all services provided to Medicare beneficiaries in skilled facilities who are under posthospital extended care will be included under a per diem prospective payment rate. Nursing facilities will receive a per diem rate for routine services such as room and board and all other services such as EKGs and ultrasound. Based on the 1995 data, 19 percent (48,000) of the EKG services and 21 percent (6,520) of the ultrasound services will be incorporated under the prospective rates. In 1995, only portable x-ray suppliers and certain IPLs received separate transportation payments. 
Therefore, any EKG services done in nursing homes by other medical providers such as general practitioners, internists, and cardiologists did not include separate transportation payments. Data for 1995 show that 55,580 of the EKG services done in nursing homes did not include a separate transportation payment. (See table 4.) When an EKG or ultrasound service is done in conjunction with an x-ray, the provider receives a transportation fee for the x-ray service but not the EKG or ultrasound. The 1995 data covering EKG services with separate transportation payments show that 38,820 of the beneficiaries who received an EKG service also had an x-ray service done during the same visit. Thus, any provider doing an EKG and an x-ray service would continue to receive a separate transportation payment for the x-ray service. Before HCFA issued regulations in December 1995, Medicare providers in less than a third of the states were paid for transporting ultrasound equipment to beneficiaries’ residences. Each carrier had its own policy regarding reimbursement for ultrasound equipment transportation costs. Carrier representatives responsible for Medicare Part B program payments in only 14 states and part of another told us that they had a policy to make transportation payments when billed for ultrasound services. See figure 3. Because carriers responsible for fewer than one-third of the states allowed separate transportation payments, most ultrasound services performed in nursing homes were done without such payment. Only 3,220 (15 percent) of the 23,600 ultrasound services done in nursing homes in 1995 had claims for separate transportation payments. The remainder, approximately 20,380, were done without a separate transportation payment. (See table 4.) 
Even in states where carriers had a policy to make separate transportation payments, there were many instances in which providers performed ultrasound services in nursing homes but did not receive a separate transportation payment. For example, in Maryland and Pennsylvania, where carriers had policies to make separate transportation payments, 79 and 55 percent, respectively, of the ultrasound services done in nursing homes by providers did not involve separate transportation payments. The average frequency of ultrasound tests per nursing home resident varied among states but did not vary systematically with carriers’ transportation payment policies. That is, there is no indication from the 1995 data that nursing home residents systematically received fewer services in states that did not make separate transportation payments compared with residents in states that did. For example, Michigan and New York—states where separate transportation payments were generally not made—had high ultrasound use rates, while Massachusetts—where separate transportation payments were made—had a low rate. Advocacy groups gave conflicting opinions about the possible effects HCFA’s changed policy would have on Medicare beneficiaries. Generally, officials representing medical groups believed that EKG and ultrasound services would continue to be available and thus did not foresee an adverse effect on the availability of care for patients. In contrast, representatives from nursing home and EKG provider associations expressed concern about potential decreases in quality of care, especially for frail elderly beneficiaries who would be most affected by being transported away from their homes. In addition, officials at several nursing homes we visited said that sending beneficiaries out also imposes additional costs and burdens on the nursing home because these beneficiaries often have to be accompanied by a nursing home representative. 
We cannot predict whether the revised payment policy will decrease or increase Medicare spending because we do not know the extent to which providers will continue to supply portable EKG and ultrasound services without separate transportation payments. Because of these uncertainties, we developed a range estimate of potential savings and costs associated with the revised payment policy. In 1995, if the prospective payment system for skilled nursing care and the policy of not making transportation payments had been in effect, Medicare outlays would have been lower by as much as $11 million on EKGs and $400,600 on ultrasound services. However, these savings would have materialized only to the extent that homebound beneficiaries and nursing home residents did not travel outside in Medicare-paid ambulances to receive these tests. We cannot predict the likelihood that savings will be realized because they depend upon the future actions of portable equipment providers and nursing home operators. Providers of portable equipment may continue to provide EKG and ultrasound services even if they no longer receive the separate transportation payments. Many mobile providers have established private business relationships with the nursing homes they serve and may be eager to maintain those relationships. In addition, many also provide other services to nursing homes, such as x-ray services. Therefore, they would be likely to continue EKG services to some degree. Prospective payment may change the way nursing facilities provide services. Some nursing homes may purchase the equipment to provide diagnostic tests in house. Representatives from two of the seven nursing homes we visited told us that they were considering purchasing EKG equipment and having nursing home staff perform the tests. The representatives noted that this would be feasible because EKG equipment is relatively inexpensive and staff need only limited training to perform the tests (no certification is needed). 
They also noted that residents needing EKGs would receive quicker service if the equipment were always on the premises. Because nursing homes may have additional transportation or staff costs for each test, the revised payment policy may produce Medicare savings by reducing the use of EKG and ultrasound services. During our review of case files at selected nursing homes, we observed a number of instances in which beneficiaries entering the nursing home were receiving EKG tests, although there were no indications that these beneficiaries were experiencing any problems to warrant such tests. In many of these situations, nursing home officials said that the tests provided baseline information. To the extent that eliminating the transportation payment would reduce inappropriate screening tests billed to Medicare, it would produce savings. Eliminating separate transportation payments could increase Medicare spending if beneficiaries travel to hospitals or physicians’ offices to be tested. Some very sick or frail beneficiaries would need to travel by ambulance. We found that the costs for the service itself are about the same whether the service is delivered in a hospital, a physician’s office, or a nursing home. However, the cost of transporting a beneficiary by ambulance is substantially greater than the amount paid to mobile providers for transporting equipment to a beneficiary’s residence. We estimate that the potential annual net costs to Medicare from eliminating transportation payments could be as much as $9.7 million for EKGs and $125,000 for ultrasound tests. These estimates, based on 1995 data, represent an upper limit that would be reached only if equipment providers stopped providing all services for which they previously received a transportation payment and the beneficiaries were transported by ambulance to receive the services. 
Our net cost estimates are based on (1) the number of beneficiaries who would be likely to need transporting by ambulance to receive EKG and ultrasound services, (2) the cost of ambulance transportation, and (3) the costs of EKGs and ultrasound tests in other settings. We estimate that about half of the beneficiaries who received an EKG and more than one-third of the beneficiaries who received an ultrasound service in 1995 would likely have been transported by ambulance had the equipment not been brought to them. Our estimates are based on our review of beneficiary case files from several nursing homes in two states. (See appendix I for more detail.) The transportation payments by Medicare for ambulance services are significantly greater than the transportation payments made to providers of portable EKG and ultrasound equipment. In 1995, the average ambulance transportation payment for beneficiaries in skilled nursing facilities who were transported for an EKG test ranged from $164 (for an average trip in North Carolina) to $471 (for an average trip in Connecticut). For the same period, the average payment made for transporting EKG equipment to a nursing home ranged from about $26 (in Illinois) to $145 (in Hawaii, Maine, Massachusetts, New Hampshire, and Rhode Island). The cost for EKG or ultrasound services is about the same in every setting. Anywhere other than a hospital outpatient setting, the Medicare payment for the service is determined by the physician fee schedule. In a hospital outpatient setting, Medicare payments for services such as EKGs and ultrasound tests are limited to the lesser of reasonable costs, customary charges, or a “blended amount” that relates a percentage of the hospital’s costs to a percentage of the prevailing charges that would apply if the services had been performed in a physician’s office. Our analysis of 1995 hospital cost reports does not suggest that Medicare would pay more for the services if they were performed at a hospital. 
While millions of EKG and ultrasound tests are provided yearly to Medicare beneficiaries, only a small percentage of these tests are performed in a beneficiary’s home or nursing home. Many of the EKGs and most of the ultrasound tests performed in those settings would be unaffected by the elimination of separate transportation payments. We cannot predict how providers of portable EKG and ultrasound equipment will react over the long term to the elimination of transportation payments or what actions nursing homes might take to obtain these services if mobile providers stopped delivering them. Also, we cannot predict what actions skilled nursing facilities may take as a result of the prospective payment system that will be implemented. Consequently, our estimate of the effect of a revised payment policy ranges from a savings of $11 million to a cost of $9.7 million for EKG tests and from a savings of $400,000 to a cost of $125,000 for ultrasound tests. Because providers’ reactions are uncertain, HCFA would have to eliminate transportation payments to reliably gauge the revised policy’s effect on Medicare spending. By carefully monitoring the revised policy over a sufficient period of time, HCFA could determine whether the revised payment policy caused a net decrease in Medicare spending or a net increase. In the absence of such hard data, however, we cannot recommend a specific course of action regarding the retention or elimination of separate Medicare transportation payments for portable EKG and ultrasound tests. HCFA officials stated that our methodology was appropriate and that they generally agreed with the results of our review. Furthermore, they agreed that precisely estimating the potential cost of the revised payment policy is difficult.
However, HCFA officials believe that the upper limit of our potential Medicare spending estimate is based on very conservative assumptions and that this amount of additional Medicare spending is unlikely to occur if separate transportation payments are eliminated. We agree that our approach was conservative so as not to understate the potential for additional Medicare spending. However, as we state in the report, if providers continue to supply these services for business reasons, then Medicare might save money or incur additional costs below our estimated upper limit because fewer beneficiaries would need to be transported by ambulance for the services. This would also be true, especially in the case of EKGs, if nursing homes purchase the necessary equipment and keep it on site. HCFA officials were also concerned about what appears to be a disproportionate share of EKG and ultrasound services furnished by a few providers in selected states. HCFA officials thought this pattern might indicate potential abuse. We did not attempt to determine appropriate use rates for these services and thus cannot conclude whether the rates are too high or too low in some areas. Our purpose in showing the concentration of EKG and ultrasound services was to provide some perspective on the beneficiaries likely to be most affected by HCFA’s changed payment policy. We incorporated other HCFA comments in the final report where appropriate. As agreed with your office, unless you publicly announce the contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. We will then send copies to the Secretary of the Department of Health and Human Services, the Administrator of HCFA, interested congressional committees, and other interested parties. We will also make copies available to others on request. Please call James Cosgrove, Assistant Director, at (202) 512-7029 if you or your staff have any questions about this report.
Other major contributors include Cam Zola and Bob DeRoy. To obtain information on electrocardiogram (EKG) and ultrasound tests done in 1995, we extracted pertinent use data from a national database consisting of all Medicare Part B claims from a 5-percent sample of beneficiaries. We used valid 1995 EKG and ultrasound procedure codes for the diagnostic procedure itself. We eliminated all codes that represented only a physician’s interpretation or report and codes for procedures that were delivered in settings other than nursing homes. We used 1995 data because it was the last year in which both EKG and ultrasound transportation costs could have been reimbursed under Medicare. In addition, we obtained data on outpatient costs for radiological and other diagnostic tests for all hospitals reporting such data to the Health Care Financing Administration (HCFA) in 1995. Because paying transportation costs relating to ultrasound services was a “local” decision, we contacted all the Medicare Part B carriers to determine the reimbursement practices in effect in every state in 1995. We visited 12 judgmentally chosen nursing homes in Florida and Pennsylvania and randomly selected 176 cases of beneficiaries who had an EKG or ultrasound test done in the home during 1995. We discussed the reasons for the test and the general condition of the beneficiary at the time of the test with an appropriate nursing home official, usually a nurse. We asked the nurses to provide us with their opinion as to how each beneficiary would have been transported if he or she had to travel away from the home for the test. These beneficiaries may better reflect the need for ambulance services by most nursing home beneficiaries. From our sample, we determined that about 50 percent of the beneficiaries who received an EKG test and 40 percent of the beneficiaries who received an ultrasound test would most likely have been transported by ambulance if the tests had been done outside the nursing home. 
Most of the beneficiaries who the nurses believed would have needed an ambulance were totally bedridden. The concern generating the order for the test had been either that an episode developed late at night or that a condition was serious enough to border on a call to 911. Beneficiaries whom the nurses believed could be transported by means other than an ambulance were usually ambulatory and their medical situations generally involved a scheduled service done 1 or 2 days after the order or a baseline test requested upon entering the home. We discussed HCFA’s policy with HCFA officials, representatives of organizations representing portable x-ray suppliers, independent physiological laboratory providers, and several individual providers of EKG and ultrasound services. Also, we sought the opinions of several medical associations, including the American College of Cardiology, the American College of Physicians, and the American College of Radiology. In addition, we solicited comments from 11 health care associations. 
In estimating the potential net cost to Medicare from eliminating transportation payments, we did the following: (1) identified, from the sample 5-percent national claims data file, the Medicare beneficiary population that received an EKG or ultrasound service from a provider that was paid a transportation fee for delivering the service; (2) reduced this count by the beneficiaries who also had an x-ray service (since the provider would continue to get transportation fees for the x-ray), the beneficiaries who had the service delivered by a provider who could not be paid transportation expenses, and the beneficiaries receiving the services while covered under posthospital extended care; (3) estimated the percentage of beneficiaries who would have been transported by ambulance (using our observations from case files in two states); (4) developed an average ambulance fee paid in each state (using data on the skilled nursing home beneficiaries who went by ambulance in 1995 to an outpatient facility for a diagnostic test); and (5) determined the transportation fee paid to mobile providers in each state.

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone.
A recorded menu will provide information on how to obtain these lists.
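The five estimation steps described above reduce to straightforward arithmetic: the ambulance costs Medicare would newly incur minus the transportation fees it would stop paying. A minimal sketch, in which every count, share, and fee is a hypothetical placeholder rather than a figure from this report:

```python
# Hedged sketch of the net-cost estimation described above. Every count,
# rate, and fee below is a hypothetical placeholder, not data from the
# report.

def net_cost_of_eliminating_transport_pay(
        affected_beneficiaries,   # steps 1-2: claims count after exclusions
        ambulance_share,          # step 3: fraction who would need an ambulance
        avg_ambulance_fee,        # step 4: average ambulance fee in the state
        avg_transport_fee):       # step 5: fee formerly paid to mobile providers
    """Positive result = added Medicare cost; negative = net savings."""
    added_ambulance_cost = (affected_beneficiaries * ambulance_share
                            * avg_ambulance_fee)
    saved_transport_fees = affected_beneficiaries * avg_transport_fee
    return added_ambulance_cost - saved_transport_fees

# If mobile providers keep supplying services, no one needs an ambulance
# and Medicare simply saves the transportation fees:
print(net_cost_of_eliminating_transport_pay(10_000, 0.0, 300.0, 80.0))
# -> -800000.0 (a savings)

# If on-site service stops and half the beneficiaries go by ambulance:
print(net_cost_of_eliminating_transport_pay(10_000, 0.5, 300.0, 80.0))
# -> 700000.0 (an added cost)
```

This is why the report's estimate spans a range from savings to added costs: the result hinges entirely on the unknown share of services that providers would stop delivering.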
Pursuant to a congressional request, GAO reviewed how the Health Care Financing Administration's (HCFA) revised payment policies would affect Medicare beneficiaries and program costs, focusing on the: (1) Medicare recipients, places of service, and providers who might be affected most; (2) number of services that would be affected by the changed policy; and (3) effect on Medicare's program costs. GAO noted that: (1) only a fraction of the electrocardiogram (EKG) and ultrasound tests paid for by Medicare are performed outside of physicians' offices or hospital settings and, thus, are potentially affected by the payment policy changes; (2) in 1995, Medicare paid approximately $597 million for 14 million EKGs and about $976 million for 5 million ultrasound tests in various settings; (3) only 290,000 of the EKGs and only 37,000 of the ultrasound tests were done in locations such as nursing homes or beneficiaries' residences where the provider needed to transport the diagnostic equipment; (4) nearly 90 percent of the services that required transporting equipment were provided to residents of nursing homes; (5) they were usually provided by portable x-ray and ultrasound providers; (6) some states appear to have a higher concentration of these services, with a small number of providers accounting for a large portion of each state's total portable EKG and ultrasound services; (7) many EKGs and ultrasound services provided in nursing homes would be unaffected if transportation payments were eliminated; (8) given the experience of 1995, about 56 percent of the EKGs and 89 percent of the ultrasound services provided in nursing homes would be unaffected by transportation payment changes and presumably would continue to be provided in those settings; (9) in July 1998, nursing homes will receive an inclusive per diem payment for all services provided to beneficiaries receiving Medicare-covered skilled nursing care; (10) a decision to eliminate or retain separate transportation 
payments for other beneficiaries will not affect the per diem payment; (11) another reason is that many nursing home EKGs and most ultrasound services in 1995 were performed by providers who did not receive a transportation payment; (12) the effect of eliminating transportation payments on the remaining 44 percent of the EKG and 11 percent of the ultrasound services is unknown because it depends on how providers respond; (13) because relatively few services would be affected, eliminating transportation payments would likely have a nominal effect on Medicare spending; (14) Medicare could save $11 million if mobile providers continue to supply services; (15) however, if mobile providers stopped bringing portable EKG equipment to beneficiaries, then some people would travel in Medicare-paid ambulances to obtain these tests; (16) eliminating transportation payments for ultrasound services would have a smaller effect; and (17) GAO estimates the effect on Medicare spending might range from $400,000 in savings to $125,000 in increased costs.
As shown in figure 1, of the four insular areas addressed in this report, three are located in the Pacific—American Samoa, CNMI, and Guam—and one is located in the Caribbean—the USVI. Each of these insular areas has its own unique culture and historical relationship with the United States. See appendices V, VI, VII, and VIII for detailed descriptions of the history and development of the judicial systems of American Samoa, CNMI, Guam, and USVI, respectively. American Samoa, the only U.S. insular area in the southern hemisphere, is located about 2,600 miles southwest of Hawaii. American Samoa consists of five volcanic islands and two coral atolls, covering a land area of 76 square miles, slightly larger than Washington, D.C. The capital of American Samoa, Pago Pago, is located on the main island of Tutuila, which is mostly rugged terrain with relatively little level land. Agricultural production is limited by the scarcity of arable land, and tourism is impaired by the island’s remote location and lack of tourist-rated facilities. Two tuna canneries constitute the main sources of private sector employment. Most of the economic activity and government operations on Tutuila take place in the Pago Pago Bay area. According to American Samoa Department of Commerce data, in 2005 the population of American Samoa was about 65,500. Unlike residents born in CNMI, Guam, and USVI, residents born in American Samoa are nationals of the United States, but may become naturalized U.S. citizens. Like residents of the other insular areas, residents of American Samoa have many of the rights of citizens of the 50 states, but cannot vote in U.S. national elections and do not have voting representation in the final approval of legislation by the full Congress. The Delegate from American Samoa has all congressional privileges, including a vote in committee, but does not have a vote in the full Congress.
Further, according to Census Bureau data for 2000, the median household income in American Samoa was $18,200, less than half of the U.S. median household income of almost $41,000. American Samoa does not have an organic act that formally establishes the relationship between American Samoa and the United States. Two deeds of cession were initially completed between Samoan chiefs, or matai, and the United States in 1900 and 1904 and ratified by the federal government in 1929. In these deeds, the United States pledged to promote peace and welfare, to establish a good and sound government, and to preserve the rights and property of the people. The U.S. Navy was initially responsible for federal governance of the territory. Then, in 1951, federal governance was transferred to the Secretary of the Interior, an arrangement that continues today. The Secretary exercises broad powers with regard to American Samoa, including “all civil, judicial, and military powers” of government in American Samoa. American Samoa has had its own constitution since 1960, and since 1983, the local American Samoa constitution may only be amended by an act of Congress. The American Samoa Constitution provides for three separate branches of government—the executive, the legislative, and the judicial. Since 1977, a popularly elected Governor, serving 4-year terms, has headed the American Samoa executive branch. Nearly 40 American Samoa departments, offices, and other entities within the executive branch of the American Samoa government provide public safety, public works, education, health, commerce, and other services. The Governor has responsibility for appointing the Attorney General, Director of Public Safety, and other executive branch agency leaders. The legislature, or Fono, is composed of 18 senators and 20 representatives. Each of the senators is elected in accordance with Samoan custom by the city councils of the counties that the senator represents.
Each of the representatives is popularly elected from the representative districts. American Samoa exercises authority over its immigration system through its own locally adopted laws. In fiscal year 2007, a total of almost $105 million in federal funds was provided by a variety of federal agencies, including the Departments of the Interior, Education, Agriculture, Transportation, and Health and Human Services. Specifically, DOI provided $22.9 million that same year for American Samoa government operations, including the High Court of American Samoa. In addition to these federal funds, a portion of the funding for American Samoa government operations comes from local revenues.

American Samoa Judiciary

The American Samoa judiciary, as provided in the American Samoa Constitution and Samoan Code, consists of a High Court and a local district court under the administration and supervision of the Chief Justice. The High Court consists of four divisions—the trial division; the family, drug, and alcohol division; the land and titles division; and the appellate division. The trial division, which consists of the Chief Justice, the Associate Justice, and associate judges, is a court of general jurisdiction, empowered to hear, among other things, felony cases and civil cases in which the amount in controversy exceeds $5,000. The Chief Justice and the Associate Justice are appointed by the U.S. Secretary of the Interior and are required to be trained in the law. There are six associate judges, who are appointed by the Governor and are not required to have legal training. The associate judges are matai, or chiefs, and they preside over cases in the High Court, playing a more significant role in deciding issues of matai titles and land.
One local district court judge, who is appointed by the Governor and must also have legal training, hears matters such as misdemeanor criminal offenses and civil cases in which the matter in controversy does not exceed $5,000. The Chief and Associate Justices and the local district and associate judges hold office for life, subject to good behavior. The American Samoa judiciary has a public defender, probation officers, translators, and marshals. Since the 1970s, the Secretary of the Interior has appointed federal judges, usually from the Ninth Circuit, to serve temporarily as Acting Associate Justices in the appellate division of the High Court of American Samoa. American Samoan customs and traditions have an influence over the local legal system. The distinctive Samoan way of life, or fa’a Samoa, is deeply embedded in traditional American Samoa history and culture. Fa’a Samoa is organized around the concept of extended family groups—people related by blood, marriage, or adoption—or aiga. Family members acknowledge allegiance to the island leader hierarchy composed of family leaders, or matai (chiefs). Matai are responsible for the welfare of their respective aiga and play a central role in protecting and allocating family lands. About 90 percent of land in American Samoa is communally owned and controlled by matai, and there are limits in American Samoa law regarding the transfer of property. The concept of fa’a Samoa extends to the governance structures in American Samoa and, thus, most high-ranking government officials, including judges, are matai. Further, Samoan law allows for a custom of ifoga, or ceremonial apology, whereby if a member of one family commits an offense against a member of another family, the family of the offender proceeds to the headquarters of the family of the offended person and asks for forgiveness.
After appropriate confession of guilt and ceremonial contrition by the offending family, the family offended against can forgive the offense. If the offender is convicted in court, the court may reduce the sentence of the offender if it finds that an ifoga was performed. The issue of establishing a federal court in American Samoa is not new. This issue has arisen within the larger question of defining the political status of American Samoa and its relationship with the United States. For example, in the 1930s, Congress considered legislation that would provide an avenue of appeal from the High Court of American Samoa to the U.S. District Court of Hawaii, during its deliberation of an organic act for American Samoa. However, this initiative was not enacted by Congress. Further, since 1969, there have been three American Samoa commissions convened to study the future political status of American Samoa. These commissions have studied, among other things, the necessity of an organic act. The most recent commission’s report, published in January 2007, did not recommend any changes in American Samoa’s political status as an unorganized and unincorporated territory of the United States, with the intent that American Samoa could continue to be a part of the United States and also have the freedom to preserve Samoan culture. In addition, in the mid-1990s DOJ proposed legislative options for changing the judicial structure of American Samoa, including establishing a federal court within the territory. These proposals were developed in response to growing concerns involving white-collar crime in American Samoa, which were detailed in a December 1994 DOJ crime assessment report. However, while the House Committee on Resources held hearings on the 1994 DOJ report in August 1995, and judicial committees studied various legislative options, the Congress did not take any actions on the proposals. 
Then, in February 2006, the Delegate from American Samoa introduced legislation in the U.S. Congress to establish a federal court in American Samoa and later that month, the American Samoa Fono held a joint legislative public hearing to solicit public comments on the bill. No congressional actions were taken on the bill and the Delegate from American Samoa withdrew the legislation after he and others requested this report. The federal courts in the insular areas of CNMI, Guam, and USVI were established under Article IV of the Constitution, whereas U.S. district courts elsewhere in the United States were established under Article III of the Constitution. Article IV courts are similar to Article III courts, but differ in terms of specific jurisdiction and tenure of the judges. As shown in table 1, Article IV courts generally exercise the same jurisdiction as Article III courts and may also exercise jurisdiction over local matters. Article IV judges are appointed by the President, with the advice and consent of the Senate, serve terms of 10 years, and can be removed by the President for cause. Article III judges are appointed by the President, with the advice and consent of the Senate, and serve with Article III protections of life tenure for good behavior and immunity from reductions in salary. Article IV judges hear both federal and bankruptcy cases, whereas Article III courts generally have a separate unit to hear bankruptcy cases. An Article III judge can be designated by the Chief Judge of the Circuit Court of Appeals or the Chief Justice of the United States to sit on an Article IV court. However, an Article IV judge can be designated to sit only as a magistrate judge on an Article III court. The federal courts in CNMI, Guam, and USVI were established at different times, but developed in similar ways. 
The District Court for the Northern Mariana Islands was established in 1977 as specified in the 1975 agreement, or covenant, between the Northern Mariana Islands and the United States. The District Court of Guam was established when the federal government passed an Organic Act for Guam in 1950. The District Court of the Virgin Islands, as it currently exists, was established by an Organic Act in 1936. Each of these federal courts initially had jurisdiction over federal, as well as local, issues. Over time, however, the federal courts were divested of jurisdiction over local issues, with the exception of the District Court of the Virgin Islands, which maintains jurisdiction over cases involving local offenses that have the same underlying facts as federal offenses. Similarly, each of the federal courts had appellate jurisdiction over the local trial courts until the local government established a local appellate court. CNMI, Guam, and USVI have all established local Supreme Courts, so that the federal courts no longer have appellate jurisdiction over local cases. As such, the jurisdiction of each of the three federal courts currently resembles that of district courts of the United States, which include federal question jurisdiction, diversity jurisdiction, and the jurisdiction of a bankruptcy court. Decisions of the District Court for the Northern Mariana Islands and the District Court of Guam may be appealed to the U.S. Court of Appeals for the Ninth Circuit, and decisions of the District Court of the Virgin Islands may be appealed to the U.S. Court of Appeals for the Third Circuit. An Article IV judge—two Article IV judges in the case of the Virgin Islands—sits on each of the federal courts and is appointed by the President with the advice and consent of the Senate, for a term of 10 years, but may be removed by the President for cause. For the history and development of courts in the CNMI, Guam, and USVI, see appendixes VI, VII, and VIII, respectively. 
Unlike other insular areas, such as CNMI, Guam, and USVI, American Samoa does not have a federal court. As a result, federal law enforcement officials have pursued violations of federal criminal law arising in American Samoa in the U.S. district courts in Hawaii or the District of Columbia. In the absence of a federal court in American Samoa, federal law has granted the High Court of American Samoa federal jurisdiction in areas such as food safety and shipping, a grant that is quite narrow compared to the comprehensive federal jurisdiction granted to federal courts in other insular areas. With regard to its local judicial structure, American Samoa is different from other U.S. insular areas. The judicial system in American Samoa consists only of local courts that handle limited federal matters, whereas the judicial systems in CNMI, Guam, and USVI are composed of local courts and federal courts that operate independently from each other. Also, whereas the justices of the High Court in American Samoa are appointed by the Secretary of the Interior, the judges of the local courts in CNMI, Guam, and USVI are appointed by the Governors of each insular area. Further, although decisions of the appellate division of the High Court of American Samoa have been appealed to the Secretary of the Interior, federal law provides that, 15 years after the establishment of a local appellate court, decisions of the local appellate courts in CNMI, Guam, and USVI may be appealed to the U.S. Supreme Court. Because there is no federal court in American Samoa, matters of federal law arising in American Samoa have generally been adjudicated in either the District of Hawaii (Honolulu, Hawaii) or the District of Columbia (Washington, D.C.), as stated earlier.
With regard to criminal matters, although federal criminal law extends to American Samoa, questions surrounding the proper jurisdiction and venue of cases have posed complex legal issues when violations of federal law occurred solely in American Samoa. However, since a 2001 precedent-setting case involving human trafficking, DOJ prosecutors told us that some of the legal issues regarding jurisdiction and venue that had been unsettled in the past have been resolved. For example, federal law provides that the proper venue for a criminal case involving a federal crime committed outside of a judicial district is: (1) the district in which the defendant is arrested or first brought; or (2) if the defendant is not yet arrested or first brought to a district, in the judicial district of the defendant’s last known residence; or (3) if no such residence is known, in the U.S. District Court for the District of Columbia. Prior to this 2001 case, most cases arising in American Samoa were brought in the U.S. District Court for the District of Columbia. In this 2001 case, prosecutors used the “first brought” statute to establish venue in the District of Hawaii, since the defendant was arrested and “first brought” to Hawaii and then indicted in the District of Hawaii. Based on the facts and arguments presented, the Ninth Circuit upheld this application of the “first brought” statute. Following this case, most defendants who have been charged with committing federal offenses in American Samoa have been charged in one of two venues—the U.S. district courts in Hawaii or the District of Columbia, because there is no federal court in American Samoa. In 2006 and 2007, DOJ attorneys prosecuted defendants in the U.S. district courts in both Hawaii and the District of Columbia for civil rights violations and public corruption cases arising in American Samoa.
DOJ prosecutors told us that their approach is adjusted depending on the facts of each case, legal challenges presented, and prosecutorial resources available. With regard to certain federal civil matters, when both the plaintiff and the defendant reside in American Samoa, and the events giving rise to the civil action occurred in American Samoa, there may be no proper federal venue, meaning there may be no federal court that may hear the case. However, some civil cases have been brought against the Secretary of the Department of the Interior alleging that the Secretary’s administration of the government of American Samoa violated the U.S. Constitution. In such cases, the U.S. District Court for the District of Columbia has been the appropriate forum, given that DOI is headquartered in Washington, D.C. Bankruptcy relief is not available in American Samoa since federal law has not explicitly extended the U.S. Bankruptcy Code to American Samoa, and there is no federal court in American Samoa in which bankruptcy claims may be adjudicated. However, U.S. bankruptcy courts may exercise jurisdiction over petitions for relief filed by American Samoan entities under certain circumstances, such as if the entities reside or do business in a judicial district of the United States and the court finds that exercising jurisdiction would be in the best interest of the creditors and the debtors. As discussed above, because American Samoa does not have a federal court, federal officials have had to rely on U.S. district courts elsewhere to adjudicate matters of federal law arising in American Samoa. Despite the absence of a federal court in American Samoa, federal law provides that the local court—the High Court of American Samoa—has limited federal civil jurisdiction. In particular, federal law has explicitly granted the High Court of American Samoa federal jurisdiction for certain issues, such as food safety, protection of animals, conservation, and shipping issues, as shown in table 3.
Although the High Court does not keep data on the number of federal cases it handles, the Chief Justice of the High Court told us that, on occasion, these federal matters, particularly maritime cases, have taken a significant amount of the court's time. The Chief Justice noted that the piecemeal nature of the High Court's federal jurisdiction sometimes creates challenges. For example, although the High Court has jurisdiction to hear certain maritime cases, the High Court does not have the authority under certain federal statutes to enjoin federal court proceedings or to transfer a case to a federal court. Such a situation may lead to parallel litigation in the High Court and a federal court. As shown in table 4, the federal jurisdiction of the High Court of American Samoa is very limited as compared to the comprehensive federal jurisdiction of the federal courts located in CNMI, Guam, and USVI. In addition to the limits of federal jurisdiction, there are differences between the way federal matters are heard in the High Court and the way they are heard in the federal courts of the other insular areas. For example, whereas the Secretary of the Interior asserts authority to review High Court decisions under federal law, the U.S. Courts of Appeals have appellate review of decisions of the federal courts in CNMI, Guam, and USVI. Also, as stated earlier, whereas the Justices of the High Court are appointed by the Secretary of the Interior, the judges of the federal courts in CNMI, Guam, and USVI are appointed by the President, with the advice and consent of the U.S. Senate. While various proposals to change the current system of adjudicating matters of federal law in American Samoa have been periodically discussed and studied, controversy remains regarding whether any changes are necessary and, if so, what options should be pursued. In the mid-1990s, various proposals to change the current system were studied by judicial committees and federal officials.
Issues that were raised at that time, such as protecting American Samoan culture and traditions, resurfaced during our interviews with federal and American Samoa government officials and legal experts, as well as in the group discussions we held and the public comments we received. Reasons offered for changing the existing system focus primarily on the difficulties of adjudicating matters of federal law arising in American Samoa, along with the goal of providing American Samoans with more direct access to justice in their place of residence. Reasons offered against changing the current system of adjudicating matters of federal law focus largely on concerns about the impact of an increased federal presence on Samoan culture and traditions, as well as concerns regarding the impartiality of local juries. The issue of changing the system for adjudicating matters of federal law in American Samoa has been raised in the past in response to a government audit and subsequent reports, which cited problems dating back to the 1980s, including deteriorating financial conditions, poor financial management practices, and vulnerability to fraudulent activities in American Samoa. In March 1993, the newly elected Governor of American Samoa requested assistance from the Secretary of the Interior to help investigate white-collar crime in American Samoa in response to a projected $60 million deficit uncovered by a DOI Inspector General audit. As a result of this request, a team from DOJ spent 3 months assessing the problem of white-collar crime in American Samoa and completed its report in December 1994. The report concluded that white-collar crime—in particular, public corruption—was prevalent in American Samoa and provided details on the difficulties of enforcing federal law there. The report discussed three possible solutions: (1) establishing a district court in American Samoa, (2) providing the U.S.
District Court of Hawaii with jurisdiction over certain matters of federal law arising in American Samoa, or (3) providing the High Court of American Samoa with federal criminal jurisdiction. In August 1995, Congress held hearings on the 1994 DOJ report and possible alternatives to provide for the prosecution of federal crimes arising in American Samoa. At the hearing, some American Samoa government officials opposed suggestions for changing the judicial system in the territory, citing concern over an increased federal presence, the desire to retain self-determination regarding their judicial structure, and the need to protect and maintain the matai title and land tenure system in American Samoa. The American Samoa Attorney General at that time testified that his office and the Department of Public Safety had created a Joint Task Force on Public Corruption that investigated and prosecuted several white-collar offenses, including embezzlement, bribery, fraud, public corruption, forgery, and tax violations. For several months following the 1995 congressional hearings, different legislative options were studied by judicial committees within Congress and by federal officials. One bill was drafted that would have given the U.S. District Court of Hawaii limited jurisdiction over federal cases arising in American Samoa. The bill proposed that one or more magistrate judges could sit in American Samoa, but district judges of the U.S. District Court of Hawaii would presumably preside over trials in Hawaii. The bill was opposed by some federal judicial officials, who cited an unfair burden that would be placed on the District of Hawaii, as well as on defendants, witnesses, and juries, due in part to the logistical difficulties of transporting them between American Samoa and Hawaii.
By 1996, the proposed legislation was revised to establish an Article IV court in American Samoa, with a full complement of staff and with limited federal jurisdiction that would exclude cases that would put into issue the office or title of matai or land tenure. While DOJ sent the legislation to the President of the Senate and the Speaker of the House in October 1996, it was never introduced in the 104th Congress or in subsequent congressional sessions. While the mid-1990s legislative proposals were primarily concerned with white-collar crime in American Samoa, more recently, different types of criminal activity have emerged. FBI officials told us that, prior to 1999, allegations of criminal activity in American Samoa were investigated by agents based in the Washington, D.C., field office and that, due to the distance and costs involved, very few investigations were initiated. Around mid-1999, the FBI began to assign Hawaii-based agents to investigations in American Samoa in response to increasing reports of criminal activity. Then, due to a growing caseload and the results of a crime assessment, in December 2005 the FBI opened a resident agency in American Samoa. According to an FBI official, other than a National Park Service fish and wildlife investigator affiliated with the National Park of American Samoa, the FBI agents were the first federal law enforcement agents to be stationed in American Samoa. The FBI's increased activities over the past 8 years, and its establishment of a resident agency, have targeted a growing number of crimes in American Samoa, including public corruption of high-ranking government officials, fraud against the government, civil rights violations, and human trafficking. Among the most notable was U.S. v. Lee, which, as reported in 2007, was the largest human trafficking case ever prosecuted by DOJ. This 2001 case involved about 200 Chinese and Vietnamese victims who were held in a garment factory, and in 2003, Lee was convicted in the U.S.
District Court of Hawaii of involuntary servitude, conspiring to violate civil rights, extortion, and money laundering. Another federal case in 2006 resulted in guilty pleas from the prison warden and his associate for conspiring to deprive an inmate of his rights by assaulting him and causing him bodily injury. In December 2004, we found that American Samoa's failure to complete single audits, federal agencies' slow reactions to this failure, and instances of theft and fraud limited accountability for 12 key federal grants supporting essential services in American Samoa. We recommended, among other things, that the Secretary of the Interior coordinate with other federal agencies to designate the American Samoa government as a high-risk grantee until it completed all delinquent single audits. In June 2005, DOI designated the American Samoa government as a high-risk grantee. The American Samoa government subsequently completed all overdue audits and made efforts to comply with Single Audit Act requirements. Later, in December 2006, we reported that insular area governments, including American Samoa, face serious economic, fiscal, and financial accountability challenges and that their abilities to strengthen their economies were constrained by their lack of diversification in industries, scarce natural resources, small domestic markets, limited infrastructure, and shortages of skilled labor. Again, we cited the long-standing financial accountability problems in American Samoa, including the late submission of the reports required by the Single Audit Act, the inability to achieve unqualified ("clean") audit opinions on financial statements, and numerous material weaknesses in internal controls over financial reporting and compliance with laws and regulations governing federal grant awards.
We made several recommendations to the Secretary of the Interior, including increasing coordination activities with officials from other federal grant-making agencies on issues such as late single audit reports, high-risk designations, and deficiencies in financial management systems and practices. DOI agreed with our recommendations, but we have not yet assessed its progress toward implementing them. In addition to these GAO reviews, FBI and various inspector general agents have conducted a broad investigation into federal grant-related corruption in American Samoa, which yielded guilty pleas in October 2005 from four former American Samoa government officials: the Director of Procurement of American Samoa, the Director of the Department of Education of American Samoa, the Director of the Department of Health and Social Services for American Samoa, and the Director of the School Lunch Program for American Samoa. Additionally, recent audits and investigations by the Inspector General offices of the Departments of Homeland Security, Education, and the Interior indicate that the American Samoa government has inadequate controls and oversight over federal funds, that federal competitive bidding practices have been circumvented, and that American Samoan officials have abused federal funds for personal benefit. For example, in September 2007, officials from the U.S. Department of Education designated the American Samoa government as a high-risk grantee due to serious internal control issues raised in previous single audits and cited a number of underlying fiscal and management problems. Due to the department's concerns about the American Samoa government's ability to properly administer and provide services with its funds, the department imposed several special conditions, including restrictions on the drawdown of grant funds.
Also, the American Samoa legislature, or Fono, has been assisting federal agencies in their efforts to investigate public corruption and other crimes. Specifically, in early 2007, the Fono established a Senate Select Investigative Committee to review and investigate any unlawful, improper, wasteful, or fraudulent operations involving local and federal funds, or any other misconduct involving government operations, within all departments, boards, commissions, committees, and agencies of the American Samoa government. An official stated that the committee reviews and investigates complaints, holds senate hearings with relevant witnesses, and can refer cases to either the American Samoa Attorney General or the FBI for investigation and prosecution. As in the 1990s, and as repeated in the interviews we conducted and the e-mail comments we received, the reasons offered for changing the American Samoa judicial system principally stem from the challenges associated with adjudicating matters of federal law arising in American Samoa and the desire to provide American Samoans with greater access to justice. Federal law enforcement officials have identified a number of issues that limit their ability to pursue matters of federal law arising in American Samoa, including logistical challenges related to American Samoa's remote location. Proponents of changing the judicial system of American Samoa also cite reasons such as providing more direct access to justice, as in other insular areas; serving as a possible deterrent to crime; and providing a means to alleviate the shame, embarrassment, and costs associated with being taken away to be tried more than 2,000 miles from American Samoa. While the main areas of concern in the mid-1990s and in our discussions related to criminal matters arising in American Samoa, there were also concerns regarding civil matters, such as federal debt collection, although these were not addressed in much detail.
Without a federal court in American Samoa, the investigators and federal prosecutors whom we interviewed said they were limited in their ability to conduct investigations and prosecute cases due to logistical obstacles related to working in such a remote location. In addition to high travel costs and infrequent flights into and out of American Samoa, DOJ officials said they face difficulties in preparing witnesses effectively and in communicating with agents during a small window of time each day (due to the 7-hour time difference between Washington, D.C., and American Samoa). In some cases, prosecutors did not use search warrants or wiretaps to the extent that they would have if American Samoa were in closer proximity to Washington, D.C., or Honolulu, Hawaii. Federal prosecutors told us that far fewer witnesses have been called to testify in front of the grand jury, given the burden of high travel costs from American Samoa. Federal prosecutors also told us that they must rely on witness observations and summaries from federal agents stationed in American Samoa rather than meet key witnesses face to face before bringing charges or issuing subpoenas, as they would typically do. Further, according to DOJ officials, the cost of managing these cases has limited the number of cases they are able to pursue. Federal law enforcement agents told us that a federal court located in American Samoa could bring additional investigative and prosecutorial resources, so that they would be able to pursue more cases. Although some have suggested that judicial and prosecutorial resources from the judicial districts of CNMI and Guam be deployed to American Samoa, the high travel costs and logistical obstacles would be no less, given that there are no direct flights between American Samoa and Guam or between American Samoa and CNMI. See figure 2 showing the distances between American Samoa and CNMI, Guam, Hawaii, and Washington, D.C.
Another key reason offered for changing the system for adjudicating matters of federal law in American Samoa is that a federal court would provide residents with more direct access to justice and the ability to pursue cases in the federal court system. Currently, federal cases can be adjudicated only in very limited instances through the High Court; at a significant cost of time and money, by traveling to U.S. district courts in Hawaii or Washington, D.C.; or, in the case of some civil matters and bankruptcy, not at all. Proponents state that the establishment of a federal court would provide American Samoa parity with other insular areas, such as CNMI, Guam, and USVI, which have federal courts. Further, a legal expert said that a federal court in American Samoa would provide the community with an opportunity to see firsthand how parties can come together to resolve their differences with regard to federal matters. For example, some have asserted that if public corruption trials were held in American Samoa, they would act as a deterrent to others contemplating fraudulent behavior, increase accountability with regard to government spending, and provide satisfaction in witnessing wrongdoers brought to justice. In the February 2006 public hearing held by the Fono and in e-mail comments we received, some stated that they have felt shame and embarrassment when defendants are taken to distant courts; in our group discussions, it was also stated that American Samoa is perceived by others as unable to render justice to its own residents. Further, some officials of American Samoa have noted the significant costs that defendants' families must bear in traveling great distances to provide support during trials. This burden is exacerbated by the comparatively low family incomes in American Samoa, which, as stated earlier, are less than half of the U.S. median household income, according to 2000 Census Bureau data.
Finally, some people we met with stated that the current system of holding federal criminal trials outside of American Samoa subjects defendants to possible prejudices by jurors in other locations. They cited the relative unfamiliarity of judges and jurors in Washington, D.C., or Honolulu, Hawaii, with American Samoan cultural and political issues and suggested that American Samoans would receive a fairer trial in American Samoa than in these locations. This issue had also been discussed in the mid-1990s. For example, in his testimony during the August 1995 congressional hearings, the then-Governor of American Samoa stated that the people of American Samoa have the ability to deliver just verdicts based on the evidence presented. He noted that for almost 20 years prior, the trial division of the High Court had successfully conducted six-person jury trials, citing this as evidence that American Samoan customs and family loyalties had not prevented effective law enforcement. Views in support of changing the current system were also reflected in some comments made during the group discussions we held in American Samoa and in some of the e-mail responses we received. Some members of the public expressed discontent over the significant costs associated with American Samoan defendants and their families having to travel to Hawaii or Washington, D.C., for court matters, and they expressed the importance of having a jury of their peers decide their cases. Other members of the public and a local community group expressed their belief that a federal court in American Samoa may act as a deterrent to the abuse of federal funds and public corruption, and provide opportunities for American Samoans to pursue federal legal matters, such as bankruptcy. While there was no consensus opinion, certain members of the local bar association mentioned that having a federal court could be beneficial for economic development by attracting qualified attorneys and court staff to American Samoa.
Additionally, one member stated that a federal court might lighten the workload and reduce the backlog of the High Court by taking over its federal maritime and admiralty matters. One of the key reasons offered against changing the current judicial system is the concern that a federal court would impinge upon Samoan culture and traditions. The most frequently raised concerns were related issues: that the system of matai chiefs and the land tenure system could be jeopardized. In raising these issues, some cited the deeds of cession, which specify that the United States would preserve the rights and property of the Samoan people. Further, some law enforcement officials we met with also opposed a change to the current system for prosecuting federal cases arising in American Samoa because they were concerned that, given the close familial ties in American Samoa, it would be difficult to obtain convictions from local jurors. During the February 2006 Fono hearings, in e-mail comments we received, and in statements by American Samoa government officials we interviewed, concerns were voiced that the establishment of a federal court in American Samoa could jeopardize the matai and land tenure system of American Samoa. As noted above, matai hold positions of authority in the community; for example, only matai may serve as senators in the American Samoa legislature, and matai control the use and development of communal lands and allocate housing to their extended family members. The land tenure system of American Samoa is such that the majority of the land in American Samoa is communally owned, and the sale or exchange of communally owned land is prohibited without the consent of the Governor. Also prohibited is the sale or exchange of communally owned and individually owned property to people with less than one-half Samoan blood.
American Samoa government officials assert that the land tenure system fosters the strong familial and community ties that are the backbone of Samoan culture and that limits on the transfer of land are important to preserve the lands of American Samoa for Samoans and protect the Samoan culture. Currently, cases regarding matai titles and land issues, such as disputes over the rightful successor to a matai or land use or improvements, are heard by the land and titles division of the High Court of American Samoa. This division is composed of the Chief Justice and Associate Justice, as well as associate judges, who are appointed based on their knowledge of Samoan culture and tradition. Pursuant to the federalist structure of the U.S. judiciary, if a federal court were established in American Samoa most cases arising under local law, such as matai and land disputes, would likely continue to be heard by the local court. However, some American Samoa officials stated that they are concerned that if a federal court were established in American Samoa, federal judges, without the requisite knowledge of Samoan culture and tradition, would hear land and title cases. They stated that they would like to keep matai title and land tenure issues within the jurisdiction of the High Court. Another concern that was raised by government officials and residents of American Samoa is that the presence of a federal court in American Samoa may generate constitutional challenges to the matai and land tenure system. Though such challenges may be brought in existing venues, some voiced concerns that the establishment of a federal court in American Samoa may make such challenges less costly and perhaps more likely. To this day, our native land tenure system remains at the very core of our existence: our culture, our heritage and our way of life. 
Without our native land tenure system, our matai or chieftain system will fade over time—along with our language, our customs and our culture…we, as a people, have an overriding desire to keep the fabric of our society (i.e., our Samoan culture) intact. No other U.S. state or territory enjoys the total and complete preservation of its people's culture as American Samoa. I fear that the imposition of a federal court system in American Samoa may have a destructive impact on our culture. Some have raised concerns regarding the establishment of a federal jury system, given the potentially small pool of U.S. citizens in American Samoa and the extended family ties among American Samoans. Federal law provides that federal jurors must be U.S. citizens. As discussed earlier, American Samoans are U.S. nationals, not U.S. citizens, although they may apply to become U.S. citizens. Neither the U.S. Census Bureau nor the American Samoa Department of Commerce provides data on the number of U.S. citizens in American Samoa. Thus, the proportion of the American Samoa adult population who are U.S. citizens is unknown. If the number of U.S. citizens is fairly small, then the pool from which to select federal jurors would be fairly small without a statutory change. In addition, law enforcement officials have speculated that extended family ties in American Samoa may limit the government's ability to successfully prosecute cases. Specifically, they raised the issue of jury nullification—the rendering of a not guilty verdict even though the jury believes that the defendant committed the offense—as a potential problem that may occur if jury trials were held in American Samoa, due to the influence of familial ties or other societal pressures on jurors.
Federal law enforcement officials we met with added that some witnesses involved in testifying against others in previous federal criminal cases have relocated outside of American Samoa and have lost their jobs and housing as a result of their participation in cases. These officials stated that they believe that similar societal pressures will be imposed on jurors if trials were held in American Samoa. These officials concluded that the current system of federal criminal trials taking place away from American Samoa is the best way to get unbiased juries. Views expressing opposition to changing the current system were also reflected in some comments we received from the group discussions we held in American Samoa and from e-mail responses. Some members of the public expressed concerns over an increased federal presence in American Samoa and the potential legal challenges which could be brought regarding the land tenure system and matai title traditions. Further, some expressed concerns about non-Samoans filing discrimination lawsuits over their inability to own land. Some stated that the current system operates well and they did not see a need for change. Others expressed opposition to a federal court in American Samoa due to their concerns about impartial jurors. They stated that if a federal court were established in American Samoa, jurors may not be able to be impartial because of the close relations through family, culture, church, government, or business. Finally, others expressed concerns about the U.S. government pushing and imposing its will on American Samoa, and their belief that changes to the current system should come not from the federal government but from American Samoans themselves. 
Based on our review of legislative proposals considered during the mid-1990s, testimonies, and reports, and through discussions with legal experts and American Samoa and federal government officials, we identified three potential proposals, or scenarios, for establishing a federal court in American Samoa or expanding the federal jurisdiction of the High Court of American Samoa: (1) establishing an Article IV district court in American Samoa, (2) establishing a district court in American Samoa that would be a division of the District of Hawaii, or (3) expanding the federal jurisdiction of the High Court of American Samoa. These scenarios are similar to those discussed in the 1990s and are described in table 5. Each scenario would require a statutory change, and each presents unique operational issues that would need to be resolved prior to implementation, including determining (1) what jurisdiction would be granted to the court; (2) what type of courthouse facility and detention arrangements would be needed and to what standards, including security standards; and (3) what jury eligibility requirements would apply. To the extent possible, we cited written documents and knowledgeable sources in the discussion of these issues. See appendix I for detailed information on our scope and methodology.
The original structure of this scenario came from draft legislation submitted by DOJ to the Speaker of the U.S. House of Representatives and the President of the U.S. Senate in October 1996, which proposed the creation of a new federal court in American Samoa. The legislation specified that the court would have limited jurisdiction that would exclude matters pertaining to matai title and land tenure issues. Under this scenario, federal law would authorize a federal court structure that most closely resembles the federal courts in CNMI, Guam, and USVI. It would include an Article IV district court with a district judge, a court clerk, and support staff. Below is a description of the key issues under this scenario. Jurisdiction: The statute creating the Article IV district court would specify the court's jurisdiction. It could be limited to criminal cases only, and it may or may not include bankruptcy, federal question, and diversity jurisdiction. American Samoa officials and others whom we interviewed were divided on whether the law establishing a district court in American Samoa should explicitly exclude matai and land tenure issues from the court's jurisdiction. Another possibility is that, as in other insular area federal courts, the federal jurisdiction of the court could grow over time. For example, while the District Court of Guam began with jurisdiction over cases arising under federal law in 1950, subsequent federal laws expanded its jurisdiction to include that of a district court of the United States, including diversity jurisdiction, and that of a bankruptcy court. Appeals process: The process for appealing decisions would be the same as in other Article IV district courts: appeals would first go to the U.S. Court of Appeals for the Ninth Circuit and then to the U.S. Supreme Court.
Judges: The judge would be appointed in the same manner as federal judges for the other insular areas, who are appointed by the President, with the advice and consent of the Senate, for 10-year terms. Associated Executive and Judicial Branch staff: Probation and Pretrial services staff, U.S. Attorney and staff, and U.S. Marshals staff would establish stand-alone offices. Defender services could be provided, at least initially, through the Federal Public Defender Organization personnel based in the District of Hawaii and/or Criminal Justice Act (CJA) panel attorneys. CJA panel attorneys are designated or approved by the court to furnish legal representation for those defendants who are financially unable to obtain counsel. Physical facilities: Under this scenario, a new courthouse facility would need to be built to provide the courtroom, judge’s chambers, office space for federal court staff, and a holding area for detaining defendants during trials. It is not clear if a detention facility for detaining defendants pretrial and presentencing would need to be built or if a portion of the existing local prison could be upgraded to meet federal standards. According to the U.S. Marshals Service, the current local prison in American Samoa does not meet federal detention standards. Operational issues: Several judicial officials and experts we met with stated that this scenario is the most straightforward option because it would be modeled after the federal courts in other insular areas, which would place residents of American Samoa in a position that is equitable with residents of the other insular areas. Other judicial officials we met with stated, however, that this is potentially the most costly scenario of the three, given the relatively small caseload expected. 
However, the Pacific Islands Committee stated in its 1995 Supplemental Report that new federal courts historically have drawn business as soon as they open their doors, and it is likely that growth in the court's caseload would result. This scenario would create a new Division of American Samoa within the District of Hawaii. There are potentially several arrangements that could be devised to handle court matters. Since the U.S. District Court of Hawaii is an Article III court, a judge assigned to a Division of American Samoa would also presumably be an Article III judge, which would differ from the Article IV courts in CNMI, Guam, and USVI. Another possibility would be to assign an Article IV judge to American Samoa. Regardless of the arrangement, a clerk of the court and support staff would be needed in American Samoa to handle the work of the court. Jurisdiction: As with scenario 1, the statute creating the division in the District of Hawaii would specify the court's jurisdiction. It could be limited to criminal cases only, and it may or may not include bankruptcy, federal question, and diversity jurisdiction. Appeals process: The process for appealing decisions would be the same as in the District of Hawaii: to the U.S. Court of Appeals for the Ninth Circuit and then to the U.S. Supreme Court. Judges: An Article III or Article IV judge would be appointed by the President, with the advice and consent of the Senate, and would serve either a life term during good behavior (Article III) or a 10-year term (Article IV), as in CNMI, Guam, and USVI. Associated Executive and Judicial Branch staff: Probation and Pretrial Services, the U.S. Attorney, and the U.S. Marshals could provide the minimum staff required in American Samoa and share support functions with their offices in the District of Hawaii. Defender services could be provided, at least initially, through Federal Public Defender Organization personnel based in the U.S. District Court of Hawaii and/or CJA panel attorneys.
Physical facilities: As with scenario 1, a new courthouse facility would need to be built to provide the courtroom, judge’s chambers, office space for federal court staff, and a holding area for detaining defendants during trials. Also, similar to scenario 1, it is unclear whether a new detention facility would need to be built or whether a portion of the existing local prison could be upgraded to meet federal standards. Operational issues: Some federal and judicial officials we interviewed told us that this scenario may be less costly than scenario 1 because, as a division of the District of Hawaii, some administrative functions and resources could be shared with Hawaii. Other federal and judicial officials told us that costs for staff to travel between American Samoa and Hawaii, and for additional supervisory staff that may be needed in Hawaii, may make scenario 2 as costly as, or possibly more costly than, scenario 1. Although this scenario would allow for trials to be held in American Samoa, there may be issues to be resolved concerning the status of any judges that would serve in the court and the degree to which resources would be shared with the U.S. District Court of Hawaii. For example, some judicial officials have raised questions of equity about the possibility of Article IV judges being assigned to federal courts in CNMI, Guam, and USVI while an Article III judge was assigned to the federal court in American Samoa. This scenario would expand the federal jurisdiction of the High Court of American Samoa rather than establish a new federal court. This would be a unique structure, as local courts typically do not exercise federal criminal jurisdiction. As a result, a number of issues would have to be resolved should this scenario be pursued. Jurisdiction: The jurisdiction of the High Court would be expanded to include additional federal matters, such as federal criminal jurisdiction. 
While there is a history of federal courts in insular areas with jurisdiction over local offenses, there has never been the reverse—a local court with jurisdiction over both local and federal offenses. Appeals process: The appellate process for federal matters under such a scenario is unclear. The current process for the limited federal cases handled by the High Court has five levels of appellate review: (1) to the Appellate Division of the High Court, (2) to the Secretary of the Interior, (3) to the U.S. District Court for the District of Columbia, (4) to the U.S. Court of Appeals for the District of Columbia Circuit, and (5) to the U.S. Supreme Court. Whether the appeals process would match that of the federal courts in CNMI, Guam, and USVI would have to be determined. Judges: The Chief Justice of the High Court stated that the High Court may need an additional judge to handle the increased caseload. Alternatively, Pacific Islands Committee members with whom we met suggested that the Secretary of the Interior or the Chief Judge of the Ninth Circuit could designate active and senior district judges within the Ninth Circuit to handle any court workload in American Samoa. They pointed out that the Ninth Circuit designated judges to the District of Guam for over 2 years when there was an extended judicial vacancy. Further, the Ninth Circuit has designated local judges to handle federal matters when necessary. For example, the judges from the Districts of CNMI and Guam routinely use local Superior Court or Supreme Court judges to handle federal court matters and trials when they must recuse themselves from a matter or during a planned or emergency absence. However, Pacific Islands Committee members with whom we met stated that, presumably, federal judges would handle only federal court matters. 
It is unclear whether High Court justices would handle both federal and local court matters and what implications might arise from such a structure. Associated Executive and Judicial Branch staff: It is unclear whether Probation and Pretrial services, U.S. Attorney, and U.S. Marshals offices would be established, since these staff are only provided to a district court. Similarly, the authority under the CJA to authorize a federal defender organization to provide representation or to compensate panel attorneys is vested in the district court. The Department of Justice would need to determine whether it would establish a federal prosecutor position in American Samoa to prosecute certain federal cases in the High Court. There are local Public Defender and Attorney General Offices in American Samoa, and the extent to which they could assist with cases is unknown. According to the Chief Justice of the High Court, it is unlikely that the existing probation and pretrial or court security staff would be able to handle an increased workload. Currently the High Court has three probation officers, who work part-time as translators for the court, and two marshals, one for each of the High Court’s two courtrooms. Physical facilities: The extent to which federal detention and courtroom security requirements would apply is uncertain. Until this issue is resolved, activities could possibly continue in the existing courthouse and detention facilities. However, the High Court justices and clerk said that current courtroom facilities are already used to capacity, even without the added caseload that federal jurisdiction could bring. Operational issues: This scenario may be the lowest-cost option, because some of the existing court facilities and staff could be used, and it may alleviate concerns about the threat to the matai and land tenure systems. 
Some leaders within the American Samoa government believe this is the best option. Supporters of this scenario note that the High Court has a history of respecting American Samoan traditions, so they have fewer concerns that matai titles and land tenure would be placed in jeopardy. At the same time, because it is unprecedented to give federal criminal jurisdiction to a local court, this scenario could face the most challenges of the three, according to federal judges and other judicial officials. Legal experts with whom we met told us that, because this is a unique arrangement, the High Court and U.S. judiciary may be faced with having to solve novel problems on a regular basis. For example, judicial officials stated that the High Court justices would have to be cognizant of their roles and responsibilities when shifting from the duties of a local High Court justice to the duties of a federal judge. A judicial official also noted that the High Court justices may have to become familiar with federal sentencing guidelines, which would require considerable training. In the August 1995 hearing, the DOJ Deputy Assistant Attorney General stated that vesting federal jurisdiction in the High Court runs counter to well-established legislative policy that district courts should have exclusive jurisdiction over certain types of proceedings to which the United States is a party. For example, federal law states that U.S. district courts have exclusive jurisdiction over all offenses against the criminal laws of the United States and, with respect to the collection of debts owed to the United States, provides for an exclusive debt collection procedure in the courts created by Congress. Similarly, federal regulatory statutes often provide for enforcement and judicial review in the federal courts. Another issue to be resolved is the appointment process for justices of the High Court. 
While none of the judicial officials with whom we met had concerns about the independence of the current justices, some expressed concerns about the differences in the way judges are appointed—while federal judges are generally appointed by the President, the justices in American Samoa are appointed by the Secretary of the Interior. As such, they suggested that the justices in American Samoa may not be subject to the same vetting process or protected by the same constitutional and statutory provisions—such as salary guarantees—as are district judges. The potential cost elements for establishing a federal court in American Samoa include agency rental costs, personnel costs, and operational costs, most of which would be funded by congressional appropriations. We collected likely cost elements, to the extent possible, for scenarios 1 and 2 from the various federal agencies that would be involved in establishing a federal court in American Samoa. We did not collect cost data for scenario 3 because of its unique judicial arrangement and because there was no comparable existing federal court structure upon which to estimate costs. For scenarios 1 and 2, AOUSC officials told us that a new courthouse would need to be built. GSA officials told us that court construction and agency rental costs would be comparatively high—about $80 to $90 per square foot for a new courthouse, compared to typical federal government rental charges for office space in American Samoa of around $45 to $50 per square foot in 2007. Funding for the judiciary and DOJ derives primarily from direct congressional appropriations, and a federal courthouse in American Samoa would likely be funded similarly. We found the data for scenarios 1 and 2 sufficiently reliable to provide rough estimates of the possible future costs of establishing a federal court in American Samoa under these scenarios, with limitations as noted. 
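To put these per-square-foot figures in rough perspective, the sketch below applies the build-to-suit arrangement GSA described (construction cost recovered through rent over a 20-year lease) together with GSA's stated -20 to +80 percent estimate accuracy. The interest rate and rentable area used here are our own illustrative assumptions, not GSA figures, so the resulting rents are for illustration only.

```python
# Illustrative sketch only; the interest rate and rentable area are
# hypothetical assumptions, not GAO or GSA figures.

def annual_rent_per_sqft(construction_cost, annual_rate, years, rentable_sqft):
    """Rent per rentable square foot implied by amortizing the
    construction cost as a level annual payment over the lease term."""
    payment = construction_cost * annual_rate / (1 - (1 + annual_rate) ** -years)
    return payment / rentable_sqft

estimate = 56_000_000                          # GSA's rough construction estimate
low, high = estimate * 0.80, estimate * 1.80   # -20 to +80 percent accuracy band
print(f"Plausible construction range: ${low/1e6:.1f}M to ${high/1e6:.1f}M")

# Assumed 5 percent rate and 60,000 rentable square feet (placeholders).
for cost in (low, estimate, high):
    rate = annual_rent_per_sqft(cost, 0.05, 20, 60_000)
    print(f"${cost/1e6:5.1f}M construction -> ${rate:.0f} per sq ft per year")
```

Because construction cost passes straight through the amortization into rent, the -20 to +80 percent band on the estimate translates into a proportionally wide band on the tenant agencies' rental rates, which is why GSA characterized its rent figures as very preliminary.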
Due to limitations on existing buildings and potential land restrictions—about 90 percent of American Samoan land is communally owned—GSA officials told us that a new courthouse in American Samoa would likely use a build-to-suit lease construction arrangement rather than government-owned construction and that construction and consequent rental costs would be comparatively high. GSA provided initial construction and rental costs for the hypothetical courthouse in American Samoa, based on a floor plan submitted for a proposed new one-judge courthouse in CNMI. According to GSA officials, there are no buildings in American Samoa suitable for use as a federal courthouse. Further, officials from the High Court of American Samoa told us that its two-courtroom High Court building and its one-courtroom local district court building are frequently used to capacity. Under a build-to-suit lease, the government contracts with a private developer to build the courthouse and, in this case, GSA leases the completed building based on the amortization of a 20-year construction loan. GSA would then rent portions of the building to the tenant federal agencies, such as AOUSC, EOUSA, and USMS. GSA officials gave very preliminary rent estimates of $80 to $90 per square foot, based on requirements similar to an existing build-to-suit lease prospectus for a new courthouse in CNMI. Further, GSA officials told us that federal agencies would be responsible for up-front payments for particular governmental courthouse features, such as holding cells and blast protection for security. GSA officials indicated that the accuracy of the initial American Samoa court construction estimate may vary by as much as -20 to +80 percent, thereby influencing rental costs. The GSA Assistant Regional Administrator for Region IX Pacific Rim stated that there are many factors that could affect construction costs and, therefore, the tenant agencies’ rental costs. 
For example, any cost increases associated with the condition of an unknown site or escalation in construction costs beyond what has been anticipated will have a direct and proportional impact on the rental costs, as well as the up-front costs that agencies may be required to pay. Preliminary rental costs of $80 to $90 per square foot for a new courthouse with specialized building requirements would exceed typical federal government rental charges for offices in American Samoa at the prevailing market rates of $45 to $50 per rentable square foot in 2007. For scenarios 1 and 2, AOUSC officials provided information related to three types of costs: (1) district court costs, (2) probation and pretrial services costs, and (3) federal defender office costs. District court costs: For yearly district court costs under scenario 1, AOUSC provided us with district court cost estimates of about $1.5 million for personnel costs, including the costs of one district court judge and the full-time equivalent salaries of two law clerks, one secretary, 11 district clerk’s office staff, one pro se law clerk, and one court reporter, as well as recruitment and training costs. Operational costs were estimated at $0.1 million, which includes judge’s law books, stationery, forms, new case assignment and jury management systems, travel, postage and delivery charges, and consumables for both the first year and recurring years. Information technology and other equipment costs were estimated at $0.1 million. Space and facilities costs ranged from $2.6 million to $2.9 million and include necessary alterations and renovations, signage, furnishings, furniture, and estimated GSA rental costs. Probation and pretrial services costs: For the yearly cost of probation and pretrial services, AOUSC provided us with personnel and benefits costs estimated at $0.3 million, which includes the full-time equivalent salaries of one Chief Probation Officer, one probation officer, and one administrative support staff. 
Operational costs were estimated at $0.1 million, including travel, training, transportation, postage, printing, maintenance, drug-dependent offender testing and aftercare, pretrial drug testing, mental health treatment services, monitoring services, DNA testing, notices/advertising, contractual services, supplies, awards, firearms, and protective equipment. Information technology and other equipment costs were estimated at about $16,000 (e.g., equipment maintenance, purchase of copy equipment, computer training, phone communications, supplies, computers, phones, data communications equipment, printers, a scanner, and computer software). Space and facilities costs were estimated at $0.4 million to $0.5 million, which includes furniture and fixture purchases, as well as GSA rental costs. Federal Defender costs: AOUSC officials did not estimate costs for a Federal Defender’s office, since it is unlikely that the hypothetical court in American Samoa would, at least initially, reach the minimum 200 appointments per year required to authorize a Federal Defender Organization, or the number of cases that would warrant the creation of a Federal Public Defender Organization headquartered in the District of Hawaii. The court in American Samoa, as an adjacent district, might be able to share the Federal Public Defender Organization staff based in Hawaii, or the court could rely solely on a CJA panel of attorneys. The costs to the Federal Public Defender Organization in Hawaii and the costs of reimbursing CJA attorneys would vary based on the caseload of the court. District court costs: According to AOUSC, the estimated district court costs for scenario 2 could be similar to the estimated costs for scenario 1. 
AOUSC officials indicated that there may not be a need for a clerk, financial/procurement officer, jury clerk, or information technology specialist in American Samoa under scenario 2, as those functions may be handled out of the District of Hawaii office, leading to some possible reductions in personnel salaries. However, some judicial officials stated that any decrease in staff costs for this scenario may be offset by increased costs for travel between Hawaii and American Samoa. GSA rental costs would be comparable to scenario 1. Probation and pretrial services costs: Probation and Pretrial Services officials did not provide any cost differences between scenarios 1 and 2. Federal Defender costs: Either the Office of the Federal Public Defender in Hawaii or a CJA panel may provide defender services in American Samoa under either scenario, so cost estimates would not differ significantly between scenarios 1 and 2. For the Department of Justice, an EOUSA official provided U.S. Attorney’s Office cost estimates and a USMS official provided security cost estimates for both scenario 1 and scenario 2. Scenario 1 costs: EOUSA officials calculated the cost of a U.S. Attorney’s office based on a partial first year and a complete second year. Modular personnel costs are $0.6 million for the first year and $1.0 million for the second year, which includes one U.S. Attorney, three attorneys, and two support staff. Operational costs ranged from $0.5 million to $0.9 million, including travel and transportation, utilities, advisory and assistance services, printing and reproduction, and supplies and materials. Information technology costs were estimated at $0.1 million for equipment and the operation and maintenance of equipment. Space and facilities costs ranged from $1.3 million to $1.4 million and include the operation and maintenance of facilities and rent to GSA and others. Scenario 2 costs: EOUSA officials calculated U.S. 
Attorney’s office personnel costs for a partial first year and a complete second year. Modular personnel costs rose from $0.6 million in the first year to $1.0 million in the second year, which includes four attorneys and two support staff. Operational costs remained constant at $0.2 million for both the first and second years, reflecting travel and transportation, litigation costs, supplies, and other miscellaneous costs. Information technology and equipment costs were estimated to be approximately $0.1 million for both years. Yearly rental rates may also be comparable in the initial years. Personnel and operations costs for scenario 2 were estimated to be less than for scenario 1 because scenario 2 does not include a separate U.S. Attorney for American Samoa. Rather, the costs for scenario 2 are based on the estimated costs and personnel the U.S. Attorney for the District of Hawaii would need to support cases that arise in American Samoa. Scenario 1 costs: USMS officials estimated that personnel costs would be $0.8 million, based on fiscal year 2008 salaries, benefits, and law enforcement availability pay for all supervisory (one U.S. Marshal, one Chief Deputy, one Judicial Security Inspector) and nonsupervisory (two Deputy Marshals and one administrative) personnel that would be needed. Operational costs were estimated to be $0.8 million based on fiscal year 2008 standard, nonpersonnel costs for district operational and administrative positions (including vehicles, weapons, protective gear, communications equipment, and operational travel costs), and $0.7 million for defendant transport (including guard wages, airfare, per diem meals, and lodging). Information technology and equipment costs were estimated at $0.6 million for the installation of a computer network and telephone system for all USMS offices, and $0.2 million for yearly service on the wide-area network to American Samoa. 
Space and facilities costs were estimated between $1.1 million and $1.3 million for rent, plus variable defendant detention facility housing costs. Scenario 2 costs: With regard to scenario 2, USMS officials estimated that yearly personnel costs would be $0.5 million. Since a U.S. Marshal, Chief Deputy, and Judicial Security Inspector would be shared with the USMS in Hawaii and not be located in American Samoa, personnel costs for this scenario are estimated to be approximately $0.4 million less than scenario 1. Operational costs (reflecting the standard, nonpersonnel costs for operational and administrative positions) under scenario 2 were estimated to be $0.5 million, or about $0.3 million less than scenario 1. The operational cost differential between the two scenarios with respect to prisoner transport is unclear. While the USMS did not specifically address information technology and other equipment costs with respect to scenario 2, the same types of costs as in scenario 1 would be involved if a computer network and telephone system needed to be established. With respect to space and facilities, if the USMS were housed in the same court building as in scenario 1, rental costs should be comparable (between $1.1 million and $1.3 million). If, however, under scenario 2, the USMS were housed in an office building rather than a courthouse, then the resulting cost may be lower than scenario 1. Additionally, to the extent that defendants are detained in the same facilities as in scenario 1 (e.g., the Bureau of Prisons detention facility in Hawaii), detention facility costs should be comparable. Funding for the federal judiciary and DOJ agencies derives primarily from direct congressional appropriations to each agency, and a federal court in American Samoa would likely be funded similarly. In fiscal year 2006, about 94 percent of the total court salary and expense obligations were obtained through direct judiciary funding. 
The remaining 6 percent was obtained through offsetting collections, such as fees. In that same year, about 95 percent of the total Probation and Pretrial Services obligations were obtained through direct congressional appropriations. With regard to DOJ, in fiscal year 2006, 96 percent of the U.S. Attorneys' obligations to support district court activities were obtained through direct congressional appropriations, and the remaining 4 percent were obtained through other sources, such as asset forfeitures. In fiscal year 2008, USMS used direct congressional appropriations to cover the expenses for staff hiring, payroll, relocation, personnel infrastructure, rent, and utilities. The Office of the Federal Detention Trustee funds 100 percent of prisoner detention, meals, medical care, and transportation. AOUSC funds 100 percent of the court security officers, magnetometers, and security measures at courthouse entrances. In May 2008, we requested comments on a draft of this report from the Administrative Office of the U.S. Courts, the Department of the Interior, the Department of Justice, the General Services Administration, and officials representing the executive, legislative, and judicial branches of the government of American Samoa. The Administrative Office of the U.S. Courts and the Department of Justice provided technical comments, which we have incorporated into the report as appropriate. For the Department of Justice, we received comments from the Bureau of Prisons, the Federal Bureau of Investigation, and the U.S. Marshals Service. The Bureau of Prisons recommended that the current judicial system in American Samoa be improved—although no specific scenario was endorsed—due to concerns regarding public corruption, the crime rate, and the cost and inconvenience involved in transporting officials, witnesses, and prisoners to courts so far away from American Samoa. 
The Honolulu Division of the Federal Bureau of Investigation recommended that the District of Hawaii be provided additional resources and designated as the proper venue for federal cases arising in American Samoa, given the small pool of jurors, logistical challenges of permanently stationing federal personnel in American Samoa, and the lack of institutional infrastructure to sustain a federal district court in American Samoa. The U.S. Marshals Service stated it supported scenario 1 and added that scenario 2 would place a strain on its current prisoner transportation system and be extremely difficult for the Hawaii district office to staff due to the lack of infrastructure and detention space. In addition to the technical comments received, the Administrative Office of the U.S. Courts, the Department of the Interior, and the Office of the Governor of American Samoa provided official letters for inclusion in the report. These letters can be seen in appendixes II, III, and IV, respectively. We are sending copies of this report to the Attorney General and Secretary of the Interior, Director of the Administrative Office of the U.S. Courts, Administrator of the General Services Administration, Governor of American Samoa, President of the Senate and Speaker of the House of the Legislature of American Samoa, Chief Justice of the High Court of American Samoa, and interested congressional committees. The report will be available on the GAO Web site at http://gao.gov. If you or your staff have any questions regarding this report, please contact me at 202-512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff acknowledgements are listed in appendix IX. We examined the unique judicial structure of American Samoa and identified issues associated with establishing a federal court in American Samoa. 
Specifically, the objectives of our review were to discuss: (1) the current system and structure for adjudicating matters of federal law arising in American Samoa and how it compares to those in the Commonwealth of the Northern Mariana Islands (CNMI), Guam, and the U.S. Virgin Islands (USVI); (2) the reasons that have been offered for or against changing the current system and structure for adjudicating matters of federal law in American Samoa; (3) the description of different scenarios for establishing a federal court in American Samoa or expanding the federal jurisdiction of the High Court of American Samoa, if a change to the current system were made, and the identification of issues associated with each scenario; and (4) the potential cost elements and funding sources associated with implementing the different scenarios for establishing a federal court in American Samoa. To address these objectives, we reviewed historical documents, congressional testimonies, law review articles, previous studies, and cost data from and conducted interviews with U.S. government officials from the Administrative Office of the U.S. Courts (AOUSC), including the Federal Judicial Center, Office of Defender Services, and Probation and Pretrial Services; headquarters and field officials from the Department of the Interior’s (DOI) Office of Insular Affairs and Inspector General; officials from the Department of Justice’s (DOJ) Civil Rights Division, Criminal Division, Executive Office for U.S. Attorneys (EOUSA), and Bureau of Prisons, and headquarters and field officials from the U.S. Marshals Service (USMS) and Federal Bureau of Investigation (FBI); headquarters and field officials from the General Services Administration (GSA); officials from the U.S. 
Attorneys offices for CNMI, Guam, Hawaii, and USVI; headquarters and field officials from the Inspector General offices of the Departments of Agriculture, Education, Homeland Security, Transportation, and Health and Human Services; officials and judges from the Ninth Circuit; and officials and judges from the U.S. District Court of Hawaii, the District Court for the Northern Mariana Islands, the District Court of Guam, and the District Court of the Virgin Islands. Further, we reviewed historical documents, legal decisions, and court statistical data. We also conducted interviews with government officials from the executive, judicial, and legislative branches of government and residents of American Samoa, including the Governor’s Office, High Court of American Samoa, Fono, Office of Samoan Affairs, Controller’s Office, Office of Territorial and International Criminal Intelligence and Drug Enforcement, Attorney General’s Office, and Public Defender’s Office. Also, we reviewed relevant law review articles and position papers and conducted interviews with recognized legal experts on territorial governance issues. These experts were identified through our literature review, based on their published work in the area of territorial judicial systems, and through our interviews with and information collected from federal and territorial government officials. The experts contacted were not intended to be representative of all expert opinion on American Samoa judicial issues, but were contacted because they could provide insights on territorial governance issues in general. We conducted this performance audit from April 2007 to June 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To obtain insight regarding public views related to objectives 2 and 3, during our October 2007 trip to American Samoa we conducted an open forum for college students and the general public and held group discussions with members of the American Samoa Bar Association, American Samoa Chamber of Commerce, and Common Cause of American Samoa. We also established an e-mail account (i.e., [email protected]) and received 62 comments from October 2007 to January 2008 regarding the general public’s views on possible scenarios for establishing a federal court in American Samoa or expanding the federal jurisdiction of the High Court of American Samoa. At all discussions and interviews in American Samoa we distributed flyers (see fig. 3) that solicited views, via the e-mail account, regarding the possible scenarios for establishing a federal court in American Samoa or expanding the federal jurisdiction of the High Court of American Samoa. E-mails received were included in our analysis, except those that did not address the issue of a federal court or only posed questions without further elaboration. We did not independently evaluate the merits of the respondents’ comments. However, we did group and list the comments by topic. With some exceptions, such as responses that were irrelevant or unclear, substantially all of the comments received were categorized. To ensure inclusiveness and avoid subjectivity in presenting the comments, we did not eliminate any comments, even when a comment was the same as, or very similar to, one made by another respondent. The open forum, group discussions, and e-mails were designed to provide broader insight into American Samoan public views regarding the establishment of a federal court. 
Because these comments are based on a nongeneralizable sample of individuals, they cannot be used to make inferences about the American Samoan population overall; however, they provided us with a better understanding of the range of issues that were important to members of the local community. To address objective 4 on potential cost estimates for establishing a federal court in American Samoa, we obtained estimated cost information for scenarios 1 and 2 from federal agencies that would be involved in establishing a federal court. This included obtaining cost information related to three areas: (1) court construction and rent from GSA, (2) judicial branch agency costs from AOUSC, and (3) executive branch agency costs from EOUSA and USMS. To the extent possible, for scenarios 1 and 2 we obtained agency estimates of the relevant cost elements, including build-to-suit lease construction costs, agency rental fees, salaries and benefits, operational costs, information technology and equipment costs, and space and facility costs. Since the court scenarios were hypothetical and the exact details of the jurisdiction, staffing, and physical facilities were not known, the estimated costs cannot be aggregated to obtain a precise estimate of the total costs for the scenarios. Further, we did not ask GSA, AOUSC, EOUSA, or USMS to estimate the costs of scenario 3 since this would be a unique structure and the federal agencies would have no existing federal structure upon which to estimate costs. To assess the reliability of the estimated costs for scenarios 1 and 2, we talked with agency officials knowledgeable about how the estimates were developed and reviewed relevant documentation, such as building surveys and agency budget documents. 
We found the data for scenarios 1 and 2 sufficiently reliable to provide estimates of the possible future costs for these scenarios for establishing a federal court in American Samoa, with limitations as noted below. Based on preliminary estimates and on hypothetical requirements similar to a proposed new courthouse in CNMI, GSA officials stated that the rough estimate of construction costs would be approximately $56 million and that the resulting agency rental fees based on the amortization of the construction loan might range from $80 to $90 per square foot, given a projected court construction award date of March 1, 2010. GSA and other agency officials told us that these initial estimated costs may deviate widely from final construction costs for several reasons: (1) more detailed cost estimates are not available until later stages of design; (2) the condition of the undetermined site is unknown; (3) prices in the construction market may escalate beyond what has been projected; (4) the cost adjustment index used for American Samoa, which accounts for 29 percent of the projected construction costs, is almost 10 years old and relied on limited expert opinion; and (5) American Samoa lacks local skilled labor and finished materials, and the shipping and commodity costs at the time of construction are unknown. These factors would influence final construction costs, and thus agency rental costs.

The AOUSC estimates were subject to the following limitations: (1) Salaries and operational expenses were based on fiscal year 2007 and 2008 data and would need to be reevaluated, at the time a courthouse was projected to be built, for inflationary and other cost escalation factors. (2) While CNMI and Guam court costs were used to estimate some court costs, the actual cost variation between American Samoa and the other territorial costs is unknown. 
(3) Because reliable estimates of the number of civil and criminal cases were not known, AOUSC officials based their personnel and benefits and operational and information technology cost estimates for probation and pretrial services on a percentage of the actual costs obligated in 2007 from the Probation and Pretrial Services Office in Guam, which is a consolidated operation covering both district courts located in CNMI and Guam. AOUSC officials determined the percentage of resources used to support the District Court for the Northern Mariana Islands as a basis for the estimate of costs for an office in American Samoa. (4) Rental costs were based on GSA space requirements estimated for the proposed courthouse in CNMI.

The EOUSA estimates were subject to the following limitations: (1) Because reliable estimates of the number of criminal and civil cases were not known, the U.S. Attorney’s cost data for scenario 1 non-personnel costs were based on actual fiscal year 2005 and fiscal year 2006 expenditure and allotment obligations for the U.S. Attorney’s Office for the District of Guam. Personnel costs were based on modular costs provided in the fiscal year 2008 President’s budget request to Congress. (2) For scenario 2, first-year modular personnel costs represent partial-year costs, whereas second-year modular costs represent full-year costs. (3) Rental costs for the U.S. Attorney’s Office were based on GSA space requirements estimated for the U.S. Attorney’s Office in the proposed CNMI courthouse.

The USMS estimates were subject to the following limitations: (1) USMS officials noted that defendant or prisoner transportation costs for a district court in American Samoa are unknown; however, they estimated that these costs would be about the same as costs in the District Court of the Northern Mariana Islands for fiscal year 2007 (approximately 65 prisoners received per year and 104 court productions per year from federal detention facilities in Hawaii to American Samoa). If the workload in American Samoa is less or more, then estimated costs will be affected accordingly. 
(2) USMS officials assumed there would be no local detention space to house defendants or prisoners, so air transportation costs to federal detention facilities were included. Commercial airline rates were used since the Justice Prisoner and Alien Transportation System does not extend its flights to American Samoa. USMS officials said that commercial airline regulations and costs could not be specified under all defendant or prisoner transport circumstances.

American Samoa consists of seven islands located about 2,600 miles southwest of Hawaii and about 1,600 miles from New Zealand. American Samoa Department of Commerce data indicate that in 2005, the population of American Samoa was about 65,500. Ethnically, Samoans constitute the vast majority of the population in American Samoa; about 31 percent of the population was born in the independent nation of Samoa. The Samoan islands were originally settled about 1000 B.C. by Polynesians. During the nineteenth century, Germany, Great Britain, and the United States developed commercial and military pursuits in Samoa, and in 1899 the three powers divided their authority over the islands, as Germany and Great Britain renounced all rights to Tutuila and the other Samoan islands east of longitude 171 degrees west of Greenwich, and the United States renounced all rights to the western islands. On February 19, 1900, President McKinley issued an Executive Order placing control of the islands under the authority of the Department of the Navy, and on the same day, the Secretary of the Navy issued an order providing that the islands were established into a Naval Station, to be known as the Naval Station, Tutuila, and to be under the command of a Commandant. On April 17, 1900, the high chiefs of Tutuila formally ceded the islands of Tutuila and Aunuu to the United States, and on July 16, 1904, the high chief of Manua ceded the islands of Tau, Olosega, Ofu, and Rose to the United States. 
The Deeds of Cession were not formally accepted by the United States until 1929 when Congress, by joint resolution, accepted and ratified them and provided that “until Congress shall provide for the government of such islands, all civil, judicial, and military powers shall be vested in such person or persons and shall be exercised in such manner as the President of the United States shall direct….” In 1951, President Truman transferred the authority to govern American Samoa from the Secretary of the Navy to the Secretary of the Interior. The Secretary of the Interior subsequently issued an order to delimit the extent and nature of the authority of the American Samoa government, which provided for a Governor and an independent judicial branch. American Samoa ratified a Constitution, which went into effect on October 17, 1960, and a revised Constitution went into effect on July 1, 1967. The Constitution provides for legislative, judicial, and executive branches. The legislature, called the Fono, consists of a House of Representatives and Senate. The House of Representatives is composed of twenty members popularly elected from representative districts. The Senate is composed of eighteen members, each of whom must be matai and elected in accordance with Samoan custom by the county councils of the counties that the member is to represent. The 1967 Constitution provided that the executive branch was to consist of a Governor, to be appointed by the Secretary of the Interior. In 1977, the Secretary of the Interior superseded this provision by issuing an order providing that the Governor and Lieutenant Governor were to be popularly elected. The Governor’s veto power is similar to that of the U.S. 
President, except that if the Governor vetoes a bill and the legislature overrides the veto with a two- thirds majority of each house, the Governor, if still disapproving of the bill, may submit it to the Secretary of the Interior, who has the ultimate authority to decide if the legislation becomes law. The Constitution also provides for a judicial branch, which consists of the High Court, local district courts, and other courts that may be created by law. In 1983, Congress provided that the Constitution of American Samoa may only be amended by an act of Congress. American Samoa has limited representation in Congress. In 1970, the American Samoa legislature created the Office of the American Samoa Delegate-at-Large, which was to provide American Samoa with official representation in Washington, D.C. In 1978, Congress recognized the delegate from American Samoa and accorded the delegate status equivalent to that of the delegates from Guam and the U.S. Virgin Islands. As such, the delegate from American Samoa has all congressional privileges, including a vote in committee, except a vote in Congress as a whole. Although certain characteristics of the court system in American Samoa have been modified over time, the court system continues to resemble the system established by the first Commandant of the Naval Station in 1900. Although the village courts are no longer used, the High Court and the local district court remain in place, with the same basic division of jurisdiction, such that the High Court has jurisdiction over major local matters, including matters involving land and matai titles, and the local district court has jurisdiction over minor local matters, such as misdemeanor criminal cases and civil cases in which the amount in controversy does not exceed $5,000. 
While new avenues to appeal decisions of the High Court have been established, the appellate process within the American Samoa judiciary remains similar, with the appellate division of the High Court maintaining jurisdiction over decisions of the other High Court divisions and the local district court. Further, although the judges were initially appointed by the Governor, since 1931, the Chief Justice of the High Court has been appointed by the President’s delegate, first the Secretary of the Navy and then the Secretary of the Interior. In 1900, the first Commandant of the Naval Station, Commander Benjamin Tilley, issued Regulation No. 5, which established a system of courts in American Samoa. The system of courts consisted of village courts, local district courts, and the High Court. The village courts had jurisdiction over minor civil and criminal cases involving Samoans. The local district courts had jurisdiction over more significant cases and cases involving non-Samoans. The High Court had exclusive jurisdiction over major cases involving sums over $250 or criminal penalties over 6 months and all cases involving real property, treason, murder, and offenses committed within the Naval Station. According to a former Naval Governor of American Samoa, the village and local district courts had a case load generally consisting of cases involving offenses such as acts of physical violence, burglary, larceny, sex offenses, desertion, failure to pay taxes, traffic offenses, trespass, nonsupport of wife, and disorderly conduct. At the same time, the High Court mostly handled land and matai title disputes. In 1952, the judiciary of American Samoa underwent a major reorganization. The village courts were no longer used, and their jurisdiction was transferred to the local district courts. The High Court was reorganized into three divisions: appellate, probate, and trial. 
The structure of the High Court has continued to change over time, and jurisdiction over certain matters has been transferred between divisions. By 1969, local law had added to the High Court a fourth division, the land and titles division, which was to handle disputes related to land and matai titles. In 1979, local law eliminated the probate division and transferred such jurisdiction to the trial division of the High Court. In 2000, local law established a family, drug and alcohol court division. The law authorized the Chief Justice to transfer from the trial division of the High Court or the local district court to the family, drug and alcohol court division juvenile cases, domestic relations cases, certain domestic violence cases, and certain alcohol and substance abuse-related cases. In addition to restructuring the High Court, local law has also granted the High Court additional jurisdiction, such as over certain admiralty and maritime matters. In 1975, in response to Vessel Fijian Swift v. Trial Division of the High Court of American Samoa, in which the High Court held that it did not have in rem admiralty jurisdiction absent an express grant of such jurisdiction, local law granted the High Court jurisdiction, both in personam and in rem, over admiralty and maritime matters in common law. In 1982, the U.S. District Court of Hawaii confirmed that the High Court could exercise both in rem and in personam jurisdiction in admiralty and maritime cases. Although the High Court has jurisdiction over matters of admiralty and maritime common law, the High Court does not necessarily have jurisdiction over actions arising under federal maritime statutes, unless explicitly provided by federal law. Federal law has so provided in, for example, the statute governing maritime commercial instruments and liens. 
Throughout the 1960s and 1970s, and again in the early 2000s, federal law also provided that the High Court has jurisdiction over cases arising under certain other federal statutes. For example, the High Court has been granted jurisdiction over cases arising under certain federal statutes governing grain standards, pesticide control, animal welfare, animal and plant health, and poultry and meat inspection. Thus, current law provides that the High Court and local district court have jurisdiction over all local matters and certain federal matters. The High Court is composed of the trial; land and titles; family, drug and alcohol; and appellate divisions. The trial division has jurisdiction over civil cases in which the amount in controversy exceeds $5,000 (except land and matai title matters), criminal cases in which a felony is charged, admiralty and maritime matters, juvenile cases, probate, domestic relations except adoptions and certain child and spousal support cases, all writs, and any matter not otherwise provided for in statute. The land and titles division has jurisdiction over all matters relating to matai titles and all controversies relating to land. The family, drug and alcohol court division has jurisdiction over the following types of cases transferred from the trial division or the local district court: juvenile cases, including traffic offenses; domestic relations cases; domestic violence crimes except homicides and other Class A felonies; and criminal cases in which alcohol or other substance abuse is involved, including serious traffic offenses, except cases charging possession of a controlled substance with intent to distribute. The appellate division has appellate jurisdiction over all final decisions of the trial and land and titles divisions, appellate jurisdiction over all local district court and administrative decisions, and appeals of other matters specifically provided for by statute. 
The local district court retains jurisdiction over civil cases in which the amount in controversy does not exceed $5,000 (except land and matai title matters), criminal cases in which the offense charged is a misdemeanor or any offense punishable by not more than 1 year of imprisonment, traffic cases except those involving a felony, initial appearances and preliminary examinations in all criminal cases, adoptions and certain child and spousal support cases, and certain public health offenses. Beginning in 1900, the appellate division of the High Court had appellate jurisdiction over decisions of the trial division of the High Court and over decisions of the local district courts, and when the land and titles and family, drug, and alcohol court divisions were established within the High Court, the appellate division of the High Court assumed appellate jurisdiction over decisions of those divisions. Initially, the local district courts had appellate jurisdiction over decisions of the village courts, but once the village courts became defunct in 1952, the local district court lost its appellate jurisdiction. As such, current law provides that the appellate division of the High Court has appellate jurisdiction over decisions of the trial, land and titles, and family, drug, and alcohol court divisions of the High Court, as well as appellate jurisdiction over decisions of the local district court accompanied by a stenographic record and appeals based on a question of law. All decisions of the local district court in cases without a stenographic record may be appealed to the trial division of the High Court for de novo review. The Secretary of the Interior may also exercise appellate jurisdiction over decisions of the High Court. In June 1985, the Church of Jesus Christ of Latter-Day Saints requested that the Secretary of the Interior intervene and overturn a decision of the High Court regarding a piece of land in American Samoa. 
Though he declined to intervene, finding that such an intervention would undermine the U.S. policy of fostering greater self-government and self-sufficiency, the Secretary of the Interior stated that he had the authority to review the decision of the High Court. When the Church of Jesus Christ of Latter-Day Saints subsequently challenged the constitutionality of the Secretary’s refusal to overturn the High Court decision, the U.S. Court of Appeals for the District of Columbia Circuit approved of the Secretary of the Interior’s assertion of authority, stating that:

    The Congress has delegated its judicial authority with respect to American Samoa to the President, who has in turn delegated it to the Secretary…. The Congress, that is, could have, so far as Article III is concerned, provided that the Secretary himself would exercise the judicial power in American Samoa. No doubt, the due process clause of the Fifth Amendment may qualify this prerogative in some way. The Secretary might not be able to exercise his authority, nor perhaps even to retain it in dormancy, in a case to which he is a party. But that is a far cry from this case. Here, there is no claim that the Secretary was interested in the outcome. So far as due process is concerned, therefore, he could have decided it himself and there can be no cause of action because the court that did so was subservient to him.

A decision of the High Court may not only be appealed to the Secretary of the Interior but may also be collaterally challenged by filing an action in the U.S. District Court for the District of Columbia against the Secretary of the Interior for failing to administer American Samoa in accordance with the U.S. Constitution and federal law. This approach was first tested in King v. Morton in the mid-1970s. In that case, an individual charged in the High Court of American Samoa with willfully failing to pay his income tax moved for, and was denied, a jury trial. 
He subsequently commenced an action in the U.S. District Court for the District of Columbia against the Secretary of the Interior, requesting that the court declare unconstitutional the Secretary of the Interior’s administration of American Samoa in such a way that denied him the right to trial by jury. The district court dismissed the case for lack of jurisdiction, but the U.S. Court of Appeals for the District of Columbia Circuit held that the U.S. District Court for the District of Columbia could have jurisdiction under the federal question or writ of mandamus statutes, stating that the district court is “competent to judge the Secretary’s administration of the government of American Samoa by constitutional standards and, if necessary, to order the Secretary to take appropriate measures to correct any constitutional deficiencies.” The court again found that the district court is competent to hear challenges to the constitutionality of the Secretary of the Interior’s administration of American Samoa in Corporation of Presiding Bishop of Church of Jesus Christ of Latter-Day Saints v. Hodel. In that case, the U.S. District Court for the District of Columbia found that, though the Church of Jesus Christ of Latter-Day Saints failed to raise a federal question, the court had jurisdiction to hear valid claims under the Constitution or federal law against the Secretary of the Interior regarding his administration of American Samoa. Thus, current law provides that decisions of the appellate division of the High Court may be appealed either directly to the Secretary of the Interior or challenged collaterally in the U.S. District Court for the District of Columbia, whose decisions may be appealed to the U.S. Court of Appeals for the District of Columbia Circuit and then to the U.S. Supreme Court. Beginning in 1900, the Commandant of the Naval Station was the President of the High Court and could appoint others to serve as judges. 
In 1903, the Commandant created the Office of Native Affairs, which was to supervise the judiciary. The Secretary of Native Affairs, a naval officer, became the chief judge of the local district courts, as well as serving as the legal advisor to the Governor, sheriff of the local police force, and prosecutor. Samoans appointed by the Governor sat as judges on the local district courts and magistrates of the village courts, with lifetime tenure, subject to removal only for misconduct. From 1931 until 1951, the Chief Justice of the High Court was appointed by the Secretary of the Navy. In 1931, the Governor separated the functions of judge and prosecutor between the offices of Chief Justice and Attorney General. The Chief Justice was to be a civilian appointed by the Secretary of the Navy, and the Attorney General position was filled by a naval officer. At this point, the Governor ceased to be the President of the High Court, and the Chief Justice was appointed by, and directly accountable to, the Secretary of the Navy. The Chief Justice was able to select associate judges from among the district judges to assist with cases in the High Court. Since 1951, when administration of American Samoa was transferred from the Secretary of the Navy to the Secretary of the Interior, the Chief Justice has been appointed by the Secretary of the Interior, and since 1962, the Associate Justice has also been appointed by the Secretary of the Interior. In the 1970s, the Secretary of the Interior began appointing federal judges to serve as Acting Associate Justices. About once each year, the Secretary coordinates with the Pacific Islands Committee of the Ninth Circuit to appoint judges to travel to American Samoa to hear appellate cases for approximately a week at a time. Current law provides that the Chief Justice and Associate Justice are appointed by the Secretary of the Interior and hold lifetime tenure for good behavior, but may be removed by the Secretary of the Interior for cause. 
The Chief Justice and Associate Justice must be trained in law. The associate judges are appointed by the Governor, upon the recommendation of the Chief Justice and confirmation of the Senate, and hold lifetime tenure, except that they may be removed by the Chief Justice for cause. The associate judges are not required to be trained in law, but rather are appointed based on their knowledge of Samoan custom and traditions. Also according to current law, the appellate division of the High Court is composed of the Chief Justice, Associate Justice, Acting Associate Justices, and associate judges. Sessions are held before three justices and two associate judges, and the presence of two justices and one associate judge is necessary to constitute a quorum and decide a case. In the case of a difference of opinion, the opinion of the two justices prevails, except in appeals from the land and titles division, in which the opinion of the majority of five associate judges prevails. The land and titles division is composed of the Chief Justice, Associate Justice, and the associate judges. For land matters, sessions are held before one justice and two associate judges, and the presence of one justice and one associate judge is necessary to constitute a quorum and decide a case. In the case of a difference of opinion, the opinion of the justice prevails. For matai title matters, sessions are held before one justice and four associate judges, and the presence of one justice and three associate judges is necessary to constitute a quorum and decide a case. In the case of a difference of opinion, the opinion of the majority of the four associate judges prevails, and if there is a tie, the justice casts the deciding vote. The trial division is composed of the Chief Justice, Associate Justice, and the associate judges. 
Sessions are held before one justice and two associate judges, and the presence of one justice and one associate judge is necessary to constitute a quorum and decide a case. In the case of a difference of opinion, the opinion of the justice prevails. In the family, drug and alcohol court division, sessions are held before the Chief Justice, Associate Justice or Acting Associate Justice, and two associate judges, and the presence of one justice and one associate judge constitutes a quorum for the trial and determination of the case. The local district court judge is appointed by the Governor, upon the recommendation of the Chief Justice and confirmation by the Senate, and holds lifetime tenure, although he may be removed by the Chief Justice for cause. The district court judge must also be trained in law.

The Commonwealth of the Northern Mariana Islands, a chain of 14 islands stretching north from Guam, has a total land area of about 185 square miles. The three largest islands are Saipan, Tinian, and Rota. Saipan is about 3,300 miles from Hawaii, or about three-quarters of the distance from Hawaii to the Philippines. According to U.S. Census Bureau data for 2000, the population of the Northern Mariana Islands is about 69,000, composed primarily of Asians, including Filipinos and Chinese, and Pacific Islanders, including Chamorros, Carolinians, and other Micronesians. About 58 percent of individuals residing in the Northern Mariana Islands are foreign born, and about 57 percent are not U.S. citizens. English, Chamorro, and Carolinian are the official languages of the Northern Mariana Islands. The Chamorro people are believed to have arrived in the Northern Mariana Islands about 1500 B.C. In 1565, Spain claimed the Mariana Islands as a possession, and in the mid-seventeenth century, Spain began to colonize the islands. 
During the time of Spanish colonization, the Chamorro population of Guam and the Northern Mariana Islands declined significantly—from between 50,000 and 100,000 when the Spanish first arrived in the mid-sixteenth century to around 1,500 by the time of the Spanish census in 1783. In the late seventeenth century, Spain removed almost all of the population of the Northern Mariana Islands, with the exception of a small population on Rota that evaded the Spanish, to Guam, so that the islands remained nearly uninhabited until the nineteenth century. In the mid-nineteenth century, people from the Caroline Islands began to migrate to the Northern Mariana Islands, and in the late nineteenth century, the Chamorros were allowed to return from Guam. During the twentieth century, the Northern Mariana Islands passed under the control of several foreign powers. After the Spanish-American War, Spain sold the Northern Mariana Islands to Germany. In 1914, Japan occupied the Northern Mariana Islands and became formally responsible for the islands in 1920. In 1944, the United States invaded the Northern Mariana Islands and defeated the Japanese. Subsequently, in 1947, the Northern Mariana Islands, along with the Caroline and Marshall Islands, entered into a trusteeship called the Trust Territory of the Pacific Islands, to be administered by the United States. The Northern Mariana Islands, however, after an unsuccessful attempt to be integrated with Guam, sought a separate relationship with the United States. 
By 1972, the Northern Mariana Islands had entered into separate status negotiations with the United States, and in 1975 the Northern Mariana Islands and the United States concluded a Covenant to Establish a Commonwealth of the Northern Mariana Islands in Political Union with the United States of America, making the Northern Mariana Islands a “self-governing commonwealth … in political union with and under the sovereignty of the United States of America.” The Covenant granted citizenship to residents of the Northern Mariana Islands and stated that the Northern Mariana Islands would approve a constitution that would provide for a local legislature, a popularly-elected Governor, and a local court system. The Covenant also provided for a District Court for the Northern Mariana Islands. In 1977, the Northern Mariana Islands adopted the Constitution of the Northern Mariana Islands, and in 1986 the Trusteeship Agreement establishing the Trust Territory of the Pacific Islands was dissolved, making the Covenant fully effective. The court system in the Northern Mariana Islands has developed in such a way that, over time, the local courts were granted additional responsibility and autonomy. For example, although the district court initially had jurisdiction over certain local matters, such jurisdiction was transferred from the District Court for the Northern Mariana Islands to the local Superior Court. Similarly, appellate jurisdiction over decisions of the Superior Court was transferred from the District Court for the Northern Mariana Islands to the newly-created local Supreme Court. Further, the appellate jurisdiction of the U.S. Court of Appeals for the Ninth Circuit over decisions of the Supreme Court expired, so that the U.S. Supreme Court has the same appellate jurisdiction over decisions of the Supreme Court of the Northern Mariana Islands as it does over decisions of the highest state courts. 
The current court system of the Northern Mariana Islands is composed of a District Court for the Northern Mariana Islands, which has the jurisdiction of a U.S. district court and a bankruptcy court; a local Superior Court, which handles local matters; and a Supreme Court, which has appellate jurisdiction over decisions of the Superior Court. Beginning in the late 1970s, the District Court for the Northern Mariana Islands had the original jurisdiction of a district court, as well as original jurisdiction over certain local criminal and civil cases and appellate jurisdiction over certain criminal and civil cases. Pursuant to the Covenant, in 1977 Congress established the District Court for the Northern Mariana Islands, granting the court the jurisdiction of a district court of the United States, except that cases arising under the Constitution or federal law had no minimum sum or value of the matter in controversy. The federal law also granted the district court original jurisdiction over all cases that the Constitution or laws of the Northern Mariana Islands did not vest in a local court. Further, the law granted the district court appellate jurisdiction as the Constitution and laws of the Northern Mariana Islands provided. Pursuant to the federal law, the Northern Mariana Islands immediately acted to vest limited jurisdiction in the local trial court and to define the appellate jurisdiction of the district court. The Constitution of the Northern Mariana Islands, adopted in 1977, established the Commonwealth Trial Court and granted it jurisdiction over all actions involving land in the Commonwealth, other civil actions in which the value of the matter in controversy did not exceed $5,000, and criminal actions in which the defendant, if convicted, could be fined no more than $5,000 or imprisoned for a term of no more than 5 years. 
The Constitution also provided that, after it had been in effect for at least 5 years, the legislature could vest additional civil and criminal jurisdiction in the Commonwealth Trial Court. In 1978, the legislature of the Northern Mariana Islands also granted the district court appellate jurisdiction over all final judgments, final orders, and final decrees in criminal and civil cases. Thus, at that time, the district court had original jurisdiction over major local criminal and civil cases, as well as the jurisdiction of a federal district court, and appellate jurisdiction over final decisions in criminal and civil cases. During the 1980s, significant changes were made to the jurisdiction of the courts of the Northern Mariana Islands, as the government of the Northern Mariana Islands vested additional jurisdiction in the local courts, thereby divesting the district court of such jurisdiction. In 1982, the Northern Mariana Islands vested additional jurisdiction in the Commonwealth Trial Court, passing a law such that, effective January 1983, the trial court had original jurisdiction in all civil and criminal cases arising under the laws of the Northern Mariana Islands. Further, in 1988, the Northern Mariana Islands renamed the local trial court and expanded the jurisdiction of the newly-named Superior Court to include all civil actions, in law and in equity, and all criminal actions. The Northern Mariana Islands also established a Supreme Court and provided that, effective in May 1989, the Supreme Court had appellate jurisdiction over judgments and orders of the Superior Court. As a result of these changes, the district court was divested of its original, as well as appellate, jurisdiction over local matters. In 1984, Congress also changed the jurisdiction of the district court by redefining the jurisdiction to be that of a district court of the United States, to include diversity jurisdiction, and the jurisdiction of a bankruptcy court. 
From 1977 until 1984, the U.S. Court of Appeals for the Ninth Circuit had appellate jurisdiction over decisions of the appellate division of the District Court for the Northern Mariana Islands, as well as over decisions of the trial division of the District Court arising under federal law; the appellate division of the District Court, in turn, had appellate jurisdiction over decisions of the trial division arising under local law. The 1977 federal law implementing the Covenant provided that portions of title 28 of the U.S. Code that apply to Guam or the District Court of Guam apply to the Northern Mariana Islands or the District Court for the Northern Mariana Islands, except as otherwise provided in Article IV of the Covenant. Thus, subject to Article IV of the Covenant, which authorizes the Northern Mariana Islands to determine the appellate jurisdiction of the district court, the U.S. Court of Appeals for the Ninth Circuit would have appellate jurisdiction over all final and interlocutory decisions of the District Court for the Northern Mariana Islands. In 1980, the U.S. Court of Appeals for the Ninth Circuit held that it did not have appellate jurisdiction over decisions in cases arising under local law issued by the trial division of the District Court for the Northern Mariana Islands; rather, the Northern Mariana Islands, as authorized by Article IV of the Covenant, had properly vested the appellate division of the District Court with appellate jurisdiction over such decisions. In 1984, Congress, disapproving of this holding, repealed the statutory provision authorizing the Northern Mariana Islands to determine the appellate jurisdiction of the district court and replaced it with a provision authorizing the Northern Mariana Islands to determine the appellate jurisdiction of the district court only over the courts established by the Constitution and laws of the Northern Mariana Islands. 
This amendment made clear that the Northern Mariana Islands could not grant the appellate division of the district court appellate jurisdiction over decisions of the trial division of the district court. Rather, the appellate division of the district court had appellate jurisdiction only over decisions of the local Superior Court, and the U.S. Court of Appeals for the Ninth Circuit had appellate jurisdiction over all final decisions of the District Court. The 1984 federal law also codified the appellate jurisdiction of the U.S. Court of Appeals for the Ninth Circuit over final decisions of the appellate division of the District Court for the Northern Mariana Islands. Once the Supreme Court became operational in 1989, this provision became moot. Thus, from 1984 until the present, the U.S. Court of Appeals for the Ninth Circuit has had jurisdiction over all final and interlocutory decisions of the District Court for the Northern Mariana Islands. From 1977 until 1988, the U.S. Supreme Court had appellate jurisdiction over certain decisions of the District Court for the Northern Mariana Islands. The 1977 federal law implementing the Covenant provided that portions of title 28 of the U.S. Code that applied to Guam or the District Court of Guam applied to the Northern Mariana Islands or the District Court for the Northern Mariana Islands, except as otherwise provided in Article IV of the Covenant, such that the U.S. Supreme Court had appellate jurisdiction over any decision of the District Court for the Northern Mariana Islands that held a federal law unconstitutional in a case in which the United States was a party. In 1988, however, Congress repealed the provision allowing a direct appeal to the U.S. Supreme Court from a decision of a district court. As a result, current law provides that decisions of the District Court for the Northern Mariana Islands may not be appealed directly to the U.S. Supreme Court. 
From 1977 until 1989, decisions of the Superior Court could be appealed to the appellate division of the District Court for the Northern Mariana Islands. The 1977 federal law implementing the Covenant authorized the Northern Mariana Islands to determine the appellate jurisdiction of the District Court for the Northern Mariana Islands, and in 1978, the Northern Mariana Islands provided that the district court had appellate jurisdiction over final decisions in criminal and civil cases. As noted above, in 1984, Congress confirmed that final decisions of the appellate division of the district court could be appealed to the U.S. Court of Appeals for the Ninth Circuit, such that decisions of the Superior Court could be appealed first to the appellate division of the district court and then to the U.S. Court of Appeals for the Ninth Circuit. Once the Supreme Court of the Northern Mariana Islands became operational in 1989, it had appellate jurisdiction over decisions of the Superior Court. From 1989 until 2004, the U.S. Court of Appeals for the Ninth Circuit had appellate jurisdiction over the Supreme Court of the Northern Mariana Islands. Federal law provides that the relations between the federal and local courts with respect to appeals, certiorari, removal of causes, and writs of habeas corpus are governed by the laws respecting the relations between the federal and state courts, except that for the first 15 years following the creation of the Supreme Court, the Ninth Circuit would have jurisdiction to review by writ of certiorari the decisions of such court in all cases involving the Constitution or federal law. Thus, from 1989 until 2004, the first 15 years of the operation of the Supreme Court of the Northern Mariana Islands, the U.S. Court of Appeals for the Ninth Circuit had appellate jurisdiction over cases arising under federal law decided by the Supreme Court of the Northern Mariana Islands. 
In 2004, the relationship between the Supreme Court of the Northern Mariana Islands and the federal court system became like that between a state supreme court and the federal court system. Of primary importance, final decisions of the Supreme Court of the Northern Mariana Islands may be reviewed by the U.S. Supreme Court, at its discretion, by writ of certiorari where the validity of a treaty or federal law is drawn into question; a territorial statute is drawn into question on the ground of its being repugnant to the U.S. Constitution, treaties, or federal law; or any title, right, privilege, or immunity is specially set up or claimed under the U.S. Constitution, treaties, or federal law, or under any commission held or authority exercised under the United States. The length of the terms of appointment for judges sitting on the District Court for the Northern Mariana Islands has increased over time. In 1977, federal law provided that the judge for the district court was to be appointed by the U.S. President with the advice and consent of the Senate for a term of 8 years and paid the same salary as that of a U.S. district judge. The 1984 amendments extended the term of the district judge to 10 years. Thus, current law provides that the district judge for the Northern Mariana Islands holds a term of 10 years and is to receive a salary equal to that of judges of the U.S. district courts. In addition to the district judge for the Northern Mariana Islands, additional judges may be assigned to sit on the District Court for the Northern Mariana Islands, and the population of judges eligible to be assigned to sit on the court has increased over time. 
In 1977, federal law provided that, whenever such an assignment is necessary for the proper dispatch of the business of the court, the Chief Judge of the Ninth Circuit may assign justices of the High Court of the Trust Territory of the Pacific Islands or judges of courts of record of the Northern Mariana Islands who are licensed attorneys in good standing, or a circuit or district judge of the Ninth Circuit, including a judge of the District Court of Guam who is appointed by the President; and the Chief Justice of the United States may assign any other U.S. circuit or district judge with the consent of the assigned judge and the chief judge of that circuit, to serve temporarily as a judge for the District Court for the Northern Mariana Islands. In 1984, federal law expanded the population of judges eligible to serve temporarily as a judge for the district court by authorizing the Chief Judge of the Ninth Circuit to assign a recalled senior judge of the District Court of Guam or of the District Court for the Northern Mariana Islands. Thus, current law provides that, whenever such an assignment is necessary for the proper dispatch of the business of the court, the Chief Judge of the Ninth Circuit may assign justices of the High Court of the Trust Territory of the Pacific Islands, judges of courts of record of the Northern Mariana Islands who are licensed attorneys in good standing, a circuit or district judge of the Ninth Circuit, including a judge of the District Court of Guam who is appointed by the President, or a recalled senior judge of the District Court of Guam or of the District Court for the Northern Mariana Islands; and the Chief Justice of the United States may assign any other U.S. circuit or district judge with the consent of the assigned judge and the chief judge of that circuit, to serve temporarily as a judge for the District Court for the Northern Mariana Islands. Guam, at 217 square miles, is the largest island in Micronesia. 
It is located about 3,700 miles from Hawaii, or about three-quarters of the distance from Hawaii to the Philippines. According to U.S. Census Bureau data for 2000, the population of Guam is about 155,000. Guam’s primary ethnic groups are Chamorro and Filipino, and English and Chamorro are the dual official languages. Guam is believed to have been inhabited by the Chamorro people since about 2000 B.C. In 1521, Ferdinand Magellan landed on Guam; Spain claimed Guam and the Northern Mariana Islands as a possession in 1565, and in the mid-seventeenth century Spain began to colonize the islands. During the time of Spanish colonization, the Chamorro population of Guam and the Northern Mariana Islands declined significantly—from between 50,000 and 100,000 when the Spanish first arrived in the mid-sixteenth century to around 1,500 by the time of the Spanish census in 1783. After the Spanish-American War, in 1898, the United States took control of Guam, and the U.S. Navy became responsible for governing Guam. In 1941, Japan invaded Guam and occupied the island until 1944, when American forces recaptured Guam. In 1950, Congress passed the Organic Act for Guam, making Guam an unincorporated but organized territory of the United States. The Organic Act granted U.S. citizenship to the residents of Guam and organized a local government, which was to consist of a legislature; a Governor who would be appointed by the President, with the consent of the U.S. Senate; and a district court. Responsibility for the administration of Guam was subsequently transferred from the Secretary of the Navy to the Secretary of the Interior, where it remains today. In 1968, Congress amended the Organic Act to allow for the popular election of the Governor and Lieutenant Governor of Guam, and in 1972 Congress granted Guam a nonvoting delegate to Congress. 
Although Congress authorized Guam to call a constitutional convention to draft a local constitution in 1976, the proposed constitution was rejected by voters in a referendum. The court system in Guam has undergone significant changes since 1950. Congress and the Guam legislature have, over time, increased the responsibility and autonomy of the courts in Guam. For example, although the district court initially had jurisdiction over certain local matters, such jurisdiction was subsequently transferred from the District Court of Guam to the local Superior Court. Similarly, while the District Court of Guam had appellate jurisdiction over decisions of the Superior Court for a period of time, such jurisdiction was transferred from the District Court of Guam to the newly-created Supreme Court. Further, in order to provide oversight over the new Supreme Court, Congress originally provided that the U.S. Court of Appeals would have appellate jurisdiction over decisions of the Supreme Court for 15 years after its establishment. However, Congress later repealed this provision, providing that certain decisions of the Supreme Court may be appealed to the U.S. Supreme Court, just as are certain decisions of the highest state courts. The current court system of Guam is composed of a District Court of Guam, which has the jurisdiction of a U.S. district court and a bankruptcy court; a local Superior Court, which handles local matters; and a Supreme Court, which has appellate jurisdiction over decisions of the Superior Court. Beginning in 1950, the District Court of Guam had original jurisdiction over federal cases and some local cases, as well as appellate jurisdiction over certain decisions of the local trial court. In 1950, the Organic Act established the District Court of Guam and granted the court original jurisdiction over all cases arising under federal law, as well as all other cases in Guam not transferred by the Guam legislature to local courts. 
The Organic Act also granted the district court appellate jurisdiction to be determined by the Guam legislature. The Guam legislature subsequently reorganized the local court system, granting the local Island Court jurisdiction over non-felony cases arising under the laws of Guam, certain felony cases arising under the laws of Guam, all domestic relations and probate cases, and civil cases in which the amount in controversy did not exceed $2,000. Pursuant to the Organic Act, the Guam legislature also created an appellate division of the district court and provided that the district court had appellate jurisdiction over certain civil and criminal decisions of the Island Court. In 1974, Guam vested additional jurisdiction in the local courts, thereby divesting the district court of such jurisdiction. The legislature passed the Court Reorganization Act, creating a Superior Court, which replaced the preexisting Island, Police, and Commissioners’ Courts. The Act provided the Superior Court with original and exclusive jurisdiction over all cases arising under local law, except for cases also arising under federal law or pertaining to the Guam territorial income tax. The Court Reorganization Act also purported to create a Supreme Court, which was to have jurisdiction over appeals from the Superior Court, and repealed provisions of local law governing the appellate jurisdiction of the district court. The Supreme Court was not established under this law, however, as the transfer of appellate jurisdiction from the district court to the Supreme Court by the Guam legislature was challenged, and the U.S. Court of Appeals for the Ninth Circuit held that the Organic Act of Guam did not provide the Guam legislature with the authority to divest the district court of its appellate jurisdiction. 
In response, Congress amended the Organic Act of Guam in 1984 to authorize the Guam legislature to establish an appellate court and to confer upon such a court jurisdiction over all cases in Guam over which a federal district court does not have exclusive jurisdiction. The federal law also provided that, prior to the establishment of an appellate court, the District Court of Guam would continue to exercise appellate jurisdiction over the local courts of Guam. The same law expanded the jurisdiction of the district court to that of a district court of the United States, to include diversity jurisdiction. As an earlier law had conferred bankruptcy jurisdiction on the district court, from 1984 until 1996 the district court had the jurisdiction of the district court of the United States and a bankruptcy court of the United States, as well as appellate jurisdiction over local cases. The Guam legislature subsequently passed the Frank G. Lujan Memorial Court Reorganization Act of 1992, which created the Supreme Court of Guam. Once the Supreme Court became operational in 1996, the District Court of Guam was divested of appellate jurisdiction over local matters. In 2004, federal law amended the Organic Act to codify into federal law the establishment of the Superior and Supreme Courts of Guam. As a result, the District Court of Guam currently has the jurisdiction of a district court of the United States, including federal question jurisdiction and diversity jurisdiction, and that of a bankruptcy court of the United States. In general, since the establishment of the District Court of Guam, the U.S. Court of Appeals for the Ninth Circuit has had appellate jurisdiction over decisions of the district court. 
The Organic Act of 1950 provided that the Court of Appeals for the Ninth Circuit was to have appellate jurisdiction over decisions by the district court in all cases arising under federal law, habeas corpus proceedings, and civil cases in which the value in controversy exceeded $5,000. In 1951, Congress repealed this provision and amended federal law governing the appellate jurisdiction of the U.S. Courts of Appeals, providing that the Ninth Circuit Court of Appeals had appellate jurisdiction over all final and interlocutory decisions of the District Court of Guam. In 1982, the U.S. Court of Appeals for the Ninth Circuit held that its appellate jurisdiction extended to decisions of the appellate, as well as the trial, division of the District Court of Guam. In 1984, Congress codified into statute the appellate jurisdiction of the U.S. Court of Appeals for the Ninth Circuit over the decisions of the appellate division of the District Court of Guam. Once the Supreme Court became operational in 1996 and divested the district court of appellate jurisdiction, this provision became moot. Thus, current law provides that final and interlocutory decisions of the District Court of Guam may be appealed to the U.S. Court of Appeals for the Ninth Circuit. From 1950 until 1988, the U.S. Supreme Court had appellate jurisdiction over certain decisions of the District Court of Guam. The Organic Act provided that any party could appeal to the U.S. Supreme Court from a decision of the district court that held a federal law unconstitutional in a case in which the United States was a party. In 1951, although Congress repealed this provision and amended federal law governing the appellate jurisdiction of the U.S. Supreme Court, the right of appeal from the District Court of Guam to the U.S. Supreme Court remained substantively the same. In 1988, however, Congress repealed the provision allowing a direct appeal to the U.S. 
Supreme Court of a decision of a district court that holds a federal law unconstitutional in a case in which the United States is a party. As a result, current law provides that the decisions of the District Court of Guam may not be appealed directly to the U.S. Supreme Court. From 1950 until 1996, the District Court of Guam had appellate jurisdiction over decisions of the Superior Court. As noted above, the Organic Act granted the district court appellate jurisdiction to be determined by the Guam legislature, and the Guam legislature subsequently created an appellate division of the district court, providing that the district court had appellate jurisdiction over certain civil and criminal decisions of the local court. Pursuant to the 1984 amendments to the Organic Act, the appellate division of the District Court of Guam continued to exercise appellate jurisdiction over decisions of the Superior Court, with the U.S. Court of Appeals for the Ninth Circuit exercising appellate jurisdiction over this appellate division. Once the Supreme Court, authorized by federal law and established by the Guam legislature, became operational in 1996, it had appellate jurisdiction over decisions of the Superior Court. In 2004, the appellate jurisdiction of the Supreme Court was codified in U.S. Code, to include jurisdiction to hear appeals over any cause in Guam decided by the Superior Court of Guam or other courts established under the laws of Guam. Thus, current law provides that the Supreme Court has appellate jurisdiction over decisions of the Superior Court. From 1996 until 2004, the U.S. Court of Appeals for the Ninth Circuit had appellate jurisdiction over the Supreme Court of Guam. 
Federal law provided that the relations between the federal and local courts with respect to appeals, certiorari, removal of causes, and writs of habeas corpus are governed by the laws respecting the relations between the federal and state courts, except that for the first 15 years following the creation of the Supreme Court, the Ninth Circuit would have jurisdiction to review by writ of certiorari the decisions of such court. Thus, once the Supreme Court became operational in 1996, the U.S. Court of Appeals for the Ninth Circuit had appellate jurisdiction over the decisions of the Supreme Court. The U.S. Court of Appeals for the Ninth Circuit stated that its appellate jurisdiction over Supreme Court decisions extended not only to decisions arising under federal law but also to decisions arising under local law. In 2004, 7 years before the expiration of the 15 years after the establishment of the Supreme Court, Congress repealed the provision providing the Ninth Circuit with temporary appellate jurisdiction over decisions of the Supreme Court. Current law provides that local courts of Guam have the same relationship to federal courts as do state courts. Like final decisions of the highest state courts, final decisions of the Supreme Court of Guam may be reviewed by the U.S. Supreme Court, at its discretion, by writ of certiorari where the validity of a treaty or federal law is drawn into question; a territorial statute is drawn into question on the ground of its being repugnant to the U.S. Constitution, treaties, or federal law; or any title, right, privilege, or immunity is specially set up or claimed under the U.S. Constitution, treaties, or federal law, or under any commission held or authority exercised under the United States. The length of the terms of appointment for judges sitting on the District Court of Guam has increased over time. The Organic Act of 1950 provided that the judge for the district court was to be appointed by the U.S. 
President with the advice and consent of the Senate for a term of 4 years and paid the same salary as the Governor of Guam. The 1958 amendments extended the term of the district judge to 8 years and provided that the district judge of Guam receive the salary of U.S. district judges. In 1984, federal law again extended the term of the district judge of Guam, to 10 years. In addition to the judge appointed to sit on the District Court of Guam, other judges may be assigned to sit on the district court, and the population of judges that may be assigned to sit on the court has increased over time. The Organic Act provided that the Chief Justice of the United States was authorized to assign any consenting U.S. circuit or district judge to serve as a judge in the District Court of Guam whenever necessary for the proper dispatch of the business of the court. In 1958, federal law expanded the population of judges that were eligible to be assigned to serve temporarily in the district court by authorizing the Chief Judge of the Ninth Circuit to assign a judge of the Island Court of Guam, a judge of the High Court of the Trust Territory of the Pacific Islands, or a circuit or district judge of the Ninth Circuit to serve temporarily as a judge in the District Court of Guam. In 1984, federal law again expanded the population of judges eligible to serve temporarily in the district court by authorizing the Chief Judge of the Ninth Circuit to assign a recalled senior judge of the District Court of Guam or of the District Court for the Northern Mariana Islands. 
As a result, current law provides that the Chief Judge of the Ninth Circuit may assign a judge of any local court of record, a judge of the High Court of the Trust Territory of the Pacific Islands, a circuit or district judge of the Ninth Circuit, or a recalled senior judge of the District Court of Guam or of the District Court for the Northern Mariana Islands; and the Chief Justice of the United States may assign any other U.S. circuit or district judge, to serve temporarily as a judge in the District Court of Guam. The U.S. Virgin Islands consists of three main islands—St. Thomas, St. John, and St. Croix—as well as about 50 islets and cays. The islands have a total land mass of about 135 square miles and are located approximately 1,200 miles southeast of Florida and 40 miles east of Puerto Rico. According to 2000 U.S. Census Bureau data, the population of the U.S. Virgin Islands is about 109,000. According to the same data, about 76 percent of the population is black and about 13 percent is white; although English is spoken at home by the majority of the population, about 17 percent claim Spanish, and about 7 percent French or French Creole, as their primary language. The Virgin Islands are believed to have been first inhabited by the Taino branch of the Arawak Indian culture group. The Taino Indians are believed to have been defeated by the Carib Indians, whom Christopher Columbus encountered when he first arrived in St. Croix in 1493. Throughout the seventeenth century, various European powers fought for control of the islands, but by 1735 Denmark governed the islands. With the use of large numbers of slaves, Denmark developed a sugar economy on St. Croix and a trading economy on St. Thomas. The United States purchased the Virgin Islands from Denmark in 1917. Federal law established a temporary government for the U.S. Virgin Islands, vesting the Governor, who, from 1917 until 1931, was a naval officer, with all military, civil, and judicial powers. 
The law also provided that local laws in effect at the time of enactment would remain in force and be administered by the existing local judicial tribunals. As such, the legislative branch consisted of two legislatures, one in St. Croix and one in St. Thomas, and the judicial branch consisted of the police courts and district court. In 1927, federal law provided that all residents of the U.S. Virgin Islands were U.S. citizens. In 1931, the President transferred responsibility for governing the U.S. Virgin Islands from the Secretary of the Navy to the Secretary of the Interior. Congress subsequently passed the Organic Act of 1936, which established local self-government. The Act provided for two Municipal Councils, one for St. Croix and one for St. Thomas and St. John, which were to meet once a year to enact legislation that would apply to the Virgin Islands as a whole; a Governor, to be appointed by the President with the advice and consent of the Senate, who was to act under the supervision of the Secretary of the Interior; and a District Court of the Virgin Islands and such inferior courts as the local legislature may determine. The Revised Organic Act of 1954 largely maintained the governmental structure from the prior Organic Act, except that it established a unified legislature for the U.S. Virgin Islands. In 1968, federal law provided that the Governor was to be popularly elected, and in 1972, federal law granted the U.S. Virgin Islands a nonvoting delegate to Congress. Although Congress authorized the U.S. Virgin Islands to convene a constitutional convention to draft a constitution, the proposed constitutions were rejected by voters. The court system in the U.S. Virgin Islands has changed over time, with the local courts gradually gaining increased responsibility and autonomy. 
For example, though the local trial court previously exercised jurisdiction over certain local issues, in the 1970s and early 1980s the local trial court was granted concurrent jurisdiction with the district court over additional local cases, and by 1994 the local trial court had been granted exclusive jurisdiction over local cases. Similarly, while the District Court of the Virgin Islands had appellate jurisdiction over decisions of the Superior Court for a period of time, in 2007 such jurisdiction was transferred from the District Court of the Virgin Islands to the newly-created Supreme Court. The current court system of the U.S. Virgin Islands is composed of the District Court of the Virgin Islands, which has the jurisdiction of a U.S. district court and a bankruptcy court; a local Superior Court, which handles local matters; and a Supreme Court, which has appellate jurisdiction over decisions of the Superior Court. From 1917 until 1936, the local judicial system in the U.S. Virgin Islands operated largely without federal influence. After the United States acquired the Virgin Islands in 1917, Congress passed a law providing that until Congress otherwise provided, local laws were to remain in force and be administered by the existing local judicial tribunals. By 1921, the local judicial tribunals consisted of a district court and three police courts: the Police Court of Frederiksted, the Police Court of Christiansted, and the Police Court of Charlotte Amalie. The district court had jurisdiction over all civil, criminal, admiralty, equity, insolvency, and probate matters and causes, unless jurisdiction was conferred on some other court, in which event the jurisdiction of the district court was concurrent. 
The police courts had jurisdiction, though not exclusive, over the recovery of specific personal property when the value did not exceed $200, for the recovery of money or damages when the amount claimed did not exceed $200, and over cases in which the defendant confessed without action to certain offenses. The police courts also had criminal jurisdiction, though not exclusive, over cases involving larceny when the value of the property did not exceed $50; assault or assault and battery, except when charged as committed with intent to commit a felony, in the course of a riot, or with any weapon or upon a public officer when upon duty; any other misdemeanor; and any offense over which jurisdiction was specifically conferred upon the police court. The police courts did not have jurisdiction over actions involving the title to real property and actions for false imprisonment, libel, malicious prosecution, criminal conversation, seduction upon a promise to marry, actions of an equitable nature, or admiralty causes. Beginning in 1936, the District Court of the Virgin Islands had original jurisdiction in federal cases and some local cases, as well as appellate jurisdiction over the local courts. In 1936, the Organic Act established the District Court of the Virgin Islands, granting it jurisdiction over criminal cases arising under local or federal law, cases in equity, cases in admiralty, cases of divorce and annulment of marriage, cases at law involving sums exceeding $200, cases involving title to real estate, and cases involving federal offenses committed on the high seas on vessels belonging to U.S. citizens or corporations when the offenders were found on or brought to the Virgin Islands. 
The District Court of the Virgin Islands also had concurrent jurisdiction with the police courts over civil cases in which the sum did not exceed $200 and criminal cases in which the punishment did not exceed a fine of $100 or imprisonment of 6 months, as well as appellate jurisdiction over decisions of the police courts. At the same time, the Organic Act authorized the local legislature to provide for a local Superior Court and to transfer from the District Court of the Virgin Islands to the Superior Court jurisdiction over all cases other than those arising under federal law. The Revised Organic Act of 1954 provided that the District Court of the Virgin Islands had the jurisdiction of a district court of the United States in all causes arising under federal law, regardless of the sum or value of the matter in controversy. The Revised Organic Act also provided that the district court had general original jurisdiction over all causes in the Virgin Islands, except that the local courts had exclusive jurisdiction over civil actions in which the matter in controversy did not exceed $500, criminal cases in which the maximum punishment did not exceed $100 or imprisonment for 6 months, or both, and all violations of police and executive regulations. The Act further authorized the local legislature to grant the local courts additional jurisdiction, to be exercised concurrently with the district court. Over time, the Virgin Islands government granted the local courts additional jurisdiction, which was exercised concurrently with the district court. In 1976, Virgin Islands law provided that the newly-named Territorial Court had concurrent jurisdiction over civil cases in which the amount in controversy exceeded $500 but did not exceed $50,000 and over criminal cases in which the punishment exceeded a fine of $100 or imprisonment for 6 months but did not exceed imprisonment for 1 year or a fine as prescribed by law. 
The same law provided that 2 years after the effective date of the law, the Territorial Court would assume jurisdiction, concurrent with the district court, over criminal cases in which the maximum sentence did not exceed imprisonment for 5 years or a fine as prescribed by law. In 1981, local law expanded the civil jurisdiction of the Territorial Court by increasing the maximum amount in controversy from $50,000 to $200,000. In 1984, Congress further defined the jurisdiction of the District Court of the Virgin Islands and authorized the local legislature to divest the district court of jurisdiction over local matters. Congress amended the Organic Act, conferring upon the District Court of the Virgin Islands the jurisdiction of a federal court, including diversity jurisdiction; the jurisdiction of a bankruptcy court; exclusive jurisdiction over cases involving income tax laws applicable to the Virgin Islands; and concurrent jurisdiction with the local courts over offenses against local law that are based on the same underlying facts as offenses against federal law. The amendments also granted the District Court of the Virgin Islands jurisdiction over all causes in the Virgin Islands not vested by local law in the local courts of the U.S. Virgin Islands, except that the jurisdiction of the district court was not to extend to civil cases in which the matter in controversy did not exceed the sum of $500 or to criminal cases in which the maximum punishment did not exceed a fine of $100 or imprisonment for 6 months, or both, and to violations of local police and executive regulations. In conjunction with this provision, the amendments authorized the legislature of the Virgin Islands to vest in the local courts jurisdiction over all causes in the Virgin Islands over which any federal court did not have exclusive jurisdiction. The U.S. 
Virgin Islands government subsequently took action to expand the jurisdiction of the local courts and divest the district court of jurisdiction over local matters. The local legislature provided that, effective in 1991, the Territorial Court had jurisdiction over all civil cases regardless of the amount in controversy, subject to the original jurisdiction of the District Court of the Virgin Islands. Effective in 1992, Virgin Islands law provided that the Territorial Court had jurisdiction, subject to the concurrent jurisdiction of the district court, over criminal cases in which the punishment did not exceed imprisonment for 15 years or a fine prescribed by law. Effective in 1994, the criminal jurisdiction of the Territorial Court was further expanded, as Virgin Islands law provided that the Territorial Court had jurisdiction over all criminal cases, subject to the concurrent jurisdiction of the district court over local offenses with the same underlying facts as federal offenses. Thus, current law provides that the District Court of the Virgin Islands has the jurisdiction of a district court of the United States, including diversity jurisdiction; the jurisdiction of a bankruptcy court; jurisdiction over all matters relating to income tax laws applicable to the Virgin Islands; and concurrent jurisdiction with the Superior Court over criminal cases arising under local law in which the underlying facts are the same as federal offenses. Since 1917, decisions of the District Court of the Virgin Islands could be appealed to the U.S. Court of Appeals for the Third Circuit. In 1917, after the United States acquired the Virgin Islands, Congress passed a law providing that appeals were to be made to the U.S. Court of Appeals for the Third Circuit. The Organic Act of 1936 provided that appeals from the District Court were to be as provided by the law in force on the date of enactment. In 1948, federal law provided that the U.S. 
Court of Appeals for the Third Circuit had appellate jurisdiction over final and interlocutory decisions of the District Court of the Virgin Islands. The 1984 amendments to the Organic Act confirmed that such appellate jurisdiction extended to decisions of the appellate division of the district court, which had appellate jurisdiction over decisions of the Superior Court. Once the Supreme Court of the Virgin Islands became operational in 2007, this provision became moot. Thus, current law provides that the U.S. Court of Appeals for the Third Circuit has appellate jurisdiction over final and interlocutory decisions of the District Court of the Virgin Islands. From 1948 until 1988, the U.S. Supreme Court had appellate jurisdiction over certain decisions of the District Court of the Virgin Islands. In 1948, federal law provided that the U.S. Supreme Court had appellate jurisdiction over any decision of the District Court of the Virgin Islands that held a federal law unconstitutional in a case in which the United States was a party. In 1988, however, Congress repealed this provision. As a result, current law provides that the decisions of the District Court of the Virgin Islands may not be appealed directly to the U.S. Supreme Court. From 1936 until 2007, decisions of the Superior Court of the Virgin Islands could be appealed to the District Court of the Virgin Islands; since 2007, the Supreme Court of the Virgin Islands has had appellate jurisdiction over decisions of the Superior Court. The Organic Act of 1936 provided that the District Court of the Virgin Islands had appellate jurisdiction over decisions of the local courts. The Revised Organic Act of 1954 again provided that the District Court of the Virgin Islands had appellate jurisdiction over decisions of the local courts to the extent prescribed by local law. 
By 1965, the Virgin Islands legislature had defined the appellate jurisdiction of the district court over the decisions of the Superior Court, providing that the district court had appellate jurisdiction over Superior Court decisions in all civil cases, all juvenile and domestic relations cases, and all criminal cases in which the defendant was convicted, other than by guilty plea. In 1984, federal law provided for an appellate division of the District Court of the Virgin Islands, which was to consist of the chief judge of the district court and two designated judges, provided that not more than one of them was a judge of a court established by local law. The federal law also authorized the Virgin Islands legislature to establish an appellate court, and in 2004, the Virgin Islands legislature did so, establishing the Supreme Court of the Virgin Islands. Once the Supreme Court became operational in 2007, it assumed appellate jurisdiction over decisions of the Superior Court. Since 2007, the U.S. Court of Appeals for the Third Circuit has had appellate jurisdiction over the decisions of the Supreme Court of the Virgin Islands. Federal law provides that the relations between the federal and local courts with respect to appeals, certiorari, removal of causes, and writs of habeas corpus are governed by the laws respecting the relations between the federal and state courts; however, the law provides that for the first 15 years following the creation of the Supreme Court, the Third Circuit is to have jurisdiction to review by writ of certiorari the decisions of such court. As such, since 2007, when the Supreme Court became operational, the U.S. Court of Appeals for the Third Circuit has exercised this jurisdiction. In 2022, upon the expiration of the 15 years, local courts of the Virgin Islands will have the same relationship to the federal judicial system as do state courts. 
Of significance, final decisions of the Supreme Court of the Virgin Islands will be reviewed by the U.S. Supreme Court, at its discretion, by writ of certiorari where the validity of a treaty or federal law is drawn into question; a territorial statute is drawn into question on the ground of its being repugnant to the U.S. Constitution, treaties, or federal law; or any title, right, privilege, or immunity is specially set up or claimed under the U.S. Constitution, treaties, or federal law, or under any commission held or authority exercised under the United States. Both the number of judges of the District Court of the Virgin Islands and the terms of appointment of those judges have increased over time. The Organic Act of 1936 provided that the judge of the District Court of the Virgin Islands was to be appointed by the President with the advice and consent of the Senate and hold a term of 4 years unless sooner removed by the President for cause. The Revised Organic Act of 1954 increased the term of the judge to 8 years and provided that the judge should receive a salary equal to that of judges of U.S. district courts. In 1970, the District Court of the Virgin Islands was allocated an additional district judge, and in 1984, federal law increased the term of the two judges of the district court to 10 years. In addition to the judges appointed to sit on the District Court of the Virgin Islands, other judges may be assigned to sit temporarily on the court, and the population of judges eligible to be assigned to the District Court of the Virgin Islands has increased over time. The Revised Organic Act of 1954 provided that, whenever such an assignment is necessary for the proper dispatch of the business of the district court, the Chief Judge of the Third Circuit may assign a circuit or district judge of the Third Circuit, or the Chief Justice of the United States may assign any other U.S.
circuit or district judge with the consent of the judge and of the chief judge of that circuit, to serve temporarily as a judge of the District Court of the Virgin Islands. In 1970, federal law expanded the pool of judges that the Chief Judge of the Third Circuit may assign to serve temporarily as a judge of the District Court of the Virgin Islands to include judges of the Municipal Court of the Virgin Islands. The 1984 federal law further expanded the pool of judges eligible to be assigned by the Chief Judge of the Third Circuit to the district court to include any judge of a court of record of the Virgin Islands established by local law and a recalled senior judge of the District Court of the Virgin Islands. Thus, current law provides that, when such an assignment is necessary for the proper dispatch of the business of the court, the chief judge of the Third Circuit may assign a judge of a court of record of the Virgin Islands established by local law, a circuit or district judge of the Third Circuit, or a recalled senior judge of the District Court of the Virgin Islands; and the Chief Justice of the United States may assign any other United States circuit or district judge with the consent of the assigned judge and the chief judge of that circuit, to serve temporarily as a judge of the District Court of the Virgin Islands.

In addition to the contact named above, Christopher Conrad, Assistant Director, Chuck Bausell, Jenny Chanley, George Depaoli, Emil Friberg, Jared Hermalin, Nancy Kawahara, Tracey King, Jeff Malcolm, Jan Montgomery, Amy Sheller, and Adam Vogt made key contributions to this report.
American Samoa is the only populated U.S. insular area that does not have a federal court. Congress has granted the local High Court federal jurisdiction for certain federal matters, such as specific areas of maritime law. GAO was asked to conduct a study of American Samoa's system for addressing matters of federal law. Specifically, this report discusses: (1) the current system for adjudicating matters of federal law in American Samoa and how it compares to those in the Commonwealth of the Northern Mariana Islands (CNMI), Guam, and the U.S. Virgin Islands (USVI); (2) the reasons offered for or against changing the current system for adjudicating matters of federal law in American Samoa; (3) potential scenarios and issues associated with establishing a federal court in American Samoa or expanding the federal jurisdiction of the local court; and (4) the potential cost elements and funding sources associated with implementing those different scenarios. To conduct this work, we reviewed previous studies and testimonies, and collected information from and conducted interviews with federal government officials and American Samoa government officials. Because American Samoa does not have a federal court like the CNMI, Guam, or USVI, matters of federal law arising in American Samoa have generally been adjudicated in U.S. district courts in Hawaii or the District of Columbia. Reasons offered for changing the existing system focus primarily on the difficulties of adjudicating matters of federal law arising in American Samoa, principally based on American Samoa's remote location, and the desire to provide American Samoans more direct access to justice. Reasons offered against any changes focus primarily on concerns about the effects of an increased federal presence on Samoan culture and traditions and concerns about juries' impartiality given close family ties. 
During the mid-1990s, several proposals were studied, and many of the issues discussed then, such as the protection of local culture, were also raised during this study. Based on previous studies and information gathered for this report, GAO identified three potential scenarios, if changes were to be made: (1) establish a federal court in American Samoa under Article IV of the U.S. Constitution, (2) establish a district court in American Samoa as a division of the District of Hawaii, or (3) expand the federal jurisdiction of the High Court of American Samoa. Each scenario would present unique issues to be addressed, such as what jurisdiction to grant the court. The potential cost elements for establishing a federal court in American Samoa include agency rental costs, personnel costs, and operational costs, most of which would be funded by congressional appropriations. Exact details of the costs to be incurred would have to be determined when, and if, any of the scenarios were adopted. The controversy surrounding whether and how to create a venue for adjudicating matters of federal law in American Samoa is not principally focused on an analysis of cost effectiveness, but on other policy considerations, such as equity, justice, and cultural preservation.
The Minerals Management Service (MMS), an agency of the Department of the Interior, collected about $2.5 billion in royalties for gas sold from leases on federal lands and about $1.6 billion in royalties for oil sold from leases on federal lands in fiscal year 1997. There are approximately 22,000 federal oil and gas leases, which are located in 30 states, off the shore of California, and in the Gulf of Mexico. The federal government distributes about half of the royalties collected from federal leases located in states back to those states (although Alaska receives 90 percent) and shares a smaller portion of the royalties collected from leases off the shore of California and in the Gulf of Mexico with California and the Gulf states. About 78 percent of the federal leases are located in nine western states, but they produce relatively small amounts of oil and gas. In 1996, the most recent year for which data were available, these leases provided less than 13 percent of the total federal royalties; leases in the Gulf of Mexico provided about 83 percent of the total federal royalties (and leases in the rest of the country and off the shore of California provided the remaining 4 percent). Oil and gas royalties are calculated as a percentage (usually 12-1/2 percent for onshore federal leases and 16-2/3 percent for federal leases off the shore of California and in the Gulf of Mexico) of the value of production, less certain allowable adjustments (reflecting, e.g., the cost of transporting oil to markets). The value of production is generally determined by multiplying the volume produced (which is measured in barrels of oil and in cubic feet of gas) by the sales price.
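The royalty arithmetic described above can be sketched as follows. This is a minimal illustration, not MMS's actual accounting system; the volume, sales price, and adjustment figures are hypothetical.

```python
# Minimal sketch of the royalty calculation: royalty = rate x value of
# production, less allowable adjustments (e.g., transportation costs).
# All figures below are hypothetical illustrations, not data from the report.

def royalty_due(volume, sales_price, royalty_rate, adjustments=0.0):
    """Cash royalty owed on a lease's production."""
    value_of_production = volume * sales_price  # e.g., barrels x $/barrel
    return royalty_rate * (value_of_production - adjustments)

# Onshore federal lease, usually 12-1/2 percent:
onshore = royalty_due(volume=1000, sales_price=18.00, royalty_rate=0.125)

# Lease off the shore of California or in the Gulf of Mexico, usually 16-2/3 percent:
offshore = royalty_due(volume=1000, sales_price=18.00, royalty_rate=1 / 6)
```

For the same production and price, the offshore lease owes a proportionally larger royalty simply because its rate is higher.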
Contracts under which domestic oil is sold specify one of three types of sales prices: (1) posted prices, which are offers made by purchasers to buy oil from a specific area; (2) spot prices, under which the buyer and seller agree to the delivery of a specific quantity of oil in the following month; and (3) prices of crude oil futures contracts that are sold on the New York Mercantile Exchange (NYMEX). Posted prices can change frequently, and contracts using posted prices frequently specify that an additional premium be paid. Spot prices can change daily; two commonly cited spot prices are the prices paid for Alaska North Slope (ANS) and West Texas Intermediate crude oil. NYMEX futures contracts each establish a price for the future delivery of 1,000 barrels of sweet crude oil (similar in quality to West Texas Intermediate oil) at Cushing, Oklahoma, where several major oil pipelines intersect and storage facilities exist. When oil is bought and sold by parties with competing economic interests, the exchange is said to be “at arm’s length” and the price paid establishes a market value for the oil. Roughly one-third of the oil from federal leases is sold at arm’s length; the remaining two-thirds is exchanged between parties that do not have competing economic interests under terms that do not establish a price or market value. For example, oil companies that both produce and refine oil may transport the oil they produce to their own refineries rather than sell it. These oil companies may also exchange similar quantities of oil with other oil companies—rather than sell it—to physically place oil closer to their refineries and thereby reduce their costs of transporting it. Other oil companies that do not refine oil (often referred to as independent producers) may sell the oil they produce to marketing subsidiaries or to other companies with which they share economic interests. 
The value of oil from a federal lease is determined by the price paid in a sale “at the lease,” which is how independent producers traditionally sold their oil. Since the collapse of world oil prices in 1986, however, independent producers have employed marketers and traders to transport their oil from their leases to market centers and to refineries, where the oil is sold at higher prices. Under these circumstances, federal regulations provide that the price paid at the actual point of sale can be adjusted to approximate the price that would have been paid if the oil had been sold at the lease and that federal royalties can be paid on the adjusted price. While oil and gas royalties are most often paid in cash, they may instead be paid with a portion of the actual oil or gas that is produced (e.g., the lessor, who receives the royalties, would take 12-1/2 barrels of oil from every 100 barrels of oil that is produced). This practice of taking royalties in kind is uncommon because few lessors can or want to store oil or gas or market and sell it. However, some lessors accept royalties in kind under certain circumstances because they can sell the oil or gas for more than they would have received if the royalties had been paid in cash. Paying royalties in kind rather than in cash eliminates the need to determine the sales price of the production because royalties in kind are calculated only on the basis of the volume of oil or gas that is produced. Representatives of the oil industry have suggested that the federal government accept some or all of its oil and gas royalties in kind and have testified before the Congress supporting a federal royalty-in-kind program. Legislation has been introduced in the Congress that would require the federal government to accept all its oil and gas royalties in kind (a recent amendment to the legislation would exempt certain wells). 
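The contrast between cash and in-kind royalties described above can be illustrated with a hypothetical 12-1/2 percent lease; the production volume and sales price below are invented for illustration.

```python
# Hypothetical comparison of royalties in cash versus in kind on a
# 12-1/2 percent lease. Figures are illustrative, not from the report.

ROYALTY_RATE = 0.125      # 12-1/2 percent
production_bbl = 100      # barrels produced

# In kind: calculated from volume alone -- no sales price (and therefore no
# valuation method) need be determined.
in_kind_bbl = ROYALTY_RATE * production_bbl   # lessor takes 12-1/2 barrels

# In cash: a sales price, and hence a valuation method, is required.
sales_price = 15.00       # assumed price per barrel
in_cash_dollars = ROYALTY_RATE * production_bbl * sales_price
```

The in-kind computation never touches a price, which is why taking royalties in kind sidesteps the valuation disputes discussed in this report.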
MMS has estimated that this legislation would cost the federal government between about $140 million and $367 million annually. In 1988, MMS promulgated the oil valuation regulations that are currently in effect. These regulations define the price of oil sold in arm’s-length transactions, for the purpose of determining federal royalties, as all financial compensation accruing to the seller. This compensation, known as gross proceeds, includes the quoted sales price and any premiums the seller receives. For other transactions (i.e., those not at arm’s length), the price of the oil is defined as the higher of either the gross proceeds or the amount arrived at by the first applicable valuation method from the following list of five alternatives: (1) the lessee’s posted or contract prices, (2) others’ posted prices, (3) others’ arm’s-length contract prices, (4) arm’s-length spot sales or other relevant matters, and (5) a netback or any other reasonable method. The first two alternatives, and to a lesser extent the third, can rely on posted prices in establishing value. Under the revised oil valuation regulations that are currently proposed, MMS would continue to require that, for the purpose of determining federal royalties, gross proceeds be used to establish the price of oil that is sold in arm’s-length transactions. For transactions that are not at arm’s length, however, the proposed regulations substantially change the means for determining the price of the oil, no longer relying on the use of posted prices and instead relying on spot prices. To determine federal royalties, the proposed regulations define the price of oil not sold in arm’s-length transactions differently in each of three domestic oil markets: (1) Alaska and California (including leases off the shore of California); (2) the six Rocky Mountain states of Colorado, Montana, North Dakota, South Dakota, Utah, and Wyoming; and (3) the rest of the country, including the Gulf of Mexico.
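The 1988 rule's logic for valuing oil not sold at arm's length (the higher of either gross proceeds or the price from the first applicable of the five listed alternatives) can be sketched as follows. The method results and prices below are hypothetical placeholders, not MMS data or its actual implementation.

```python
# Sketch of the 1988 valuation logic for oil not sold at arm's length:
# value = max(gross proceeds, first applicable alternative's price).
# Prices below are hypothetical placeholders.

def first_applicable(methods):
    """Walk the ordered alternatives and return the first price obtained."""
    for method in methods:
        price = method()
        if price is not None:
            return price
    raise ValueError("no applicable valuation method")

def non_arms_length_value(gross_proceeds, methods):
    return max(gross_proceeds, first_applicable(methods))

# Example: the lessee has no posted or contract price of its own, so the
# chain falls through to others' posted prices; later methods go unreached.
alternatives = [
    lambda: None,    # (1) lessee's posted or contract prices -- unavailable
    lambda: 14.75,   # (2) others' posted prices
    lambda: 15.10,   # (3) others' arm's-length contract prices (not reached)
    lambda: None,    # (4) arm's-length spot sales
    lambda: None,    # (5) netback or other reasonable method
]
value = non_arms_length_value(gross_proceeds=14.50, methods=alternatives)
```

Note that the ordered fallback is what makes posted prices so influential under the 1988 rule: the first two alternatives, which posted prices can satisfy, usually stop the chain before spot or netback methods are ever consulted.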
These regions are depicted in figure 1. Appendix I contains additional information on each of these oil markets. In Alaska and California, the price of oil not sold in arm’s-length transactions is defined in the proposed regulations as the ANS spot price, adjusted for the location of the lease and the quality of the oil. In the six Rocky Mountain states, this price is defined by the first applicable valuation method from the following list of four alternatives: (1) an MMS-approved tendering program (akin to an auction) conducted by the lessee; (2) the weighted average of the lessee’s arm’s-length purchases and sales from the same oil field, if they exceed 50 percent of the lessee’s purchases and sales in that specific oil field; (3) NYMEX prices, adjusted for the location of the lease and the quality of the oil; or (4) a method established by the MMS Director. For the rest of the country, the price of oil is defined as local spot prices, adjusted for the location of the lease and the quality of the oil. MMS estimates that its proposed regulations would increase federal royalties by $66 million annually. MMS’ decision to revise the oil valuation regulations relied on the findings of an interagency task force that examined whether the use of posted prices for the purpose of determining federal royalties in California was appropriate. The task force concluded that posted prices were inappropriately used for this purpose and recommended that MMS revise its oil valuation regulations. MMS also relied on additional studies, for which it had contracted, that concluded that posted prices did not reflect market value in other areas of the country as well. In addition, various states supplied MMS with information on legal settlements they had reached with major oil companies concerning the undervaluation of oil from state leases. 
By 1991, the City of Long Beach, California, reached an agreement with six of seven major oil companies to accept $345 million to settle a lawsuit it had filed years earlier. Although the lawsuit and settlement included issues other than the valuation of oil, one of the major issues was whether the companies’ use of posted prices represented the market value of oil produced from leases owned by the city and the state. After conducting a preliminary assessment of the implications of the settlement for federal oil leases in California and consulting with state officials, in June 1994 the Department of the Interior assembled an interagency task force with representatives from MMS, Interior’s Office of the Solicitor, the departments of Commerce and Energy, and the Department of Justice’s Antitrust Division. MMS also initiated audits of two of the seven major oil companies that produced oil from federal leases in California. The task force examined documents submitted by the companies in the lawsuit that had formerly been sealed by the court, reviewed the results of MMS’ audits, and employed consultants to analyze the market for oil in California. The market studies noted that the seven major oil companies dominated the oil market in California by controlling most of the facilities that produce, refine, and transport oil in the state—that is, most of these transactions were not at arm’s length—and that this domination in turn suppressed posted prices. According to one of the studies, transactions involving ANS crude oil, however, were at arm’s length: although ANS oil is refined in California, it is transported into the state by a company that does not own any refineries there, and it is actively traded. As a result, ANS oil commanded substantial premiums over California oil that was comparable in quality.
The task force concluded that the major oil companies in California inappropriately calculated federal royalties on the basis of posted prices, rather than include the premiums over posted prices that they paid or received. The task force estimated that the companies should have paid between $31 million and $856 million in additional royalties (the wide range reflects the use of different methodologies and different treatments of accrued interest) for the period 1978 through 1993. In its final report issued in 1996, the task force recommended that MMS revise its oil valuation regulations to reduce reliance on the use of posted prices for valuing oil for royalty purposes. MMS contracted for additional studies to determine the extent to which posted prices were used to value oil from federal leases in California and in other areas and whether their use accurately reflected market value. These studies provided MMS with information on how oil is exchanged, marketed, and sold, as well as information on the relevance of posted prices, spot markets, and NYMEX futures prices in oil markets. The studies concluded that posted prices do not represent the market value of oil, citing situations in which oil is bought and sold at premiums above posted prices throughout the country. The studies cited the common practice of oil traders’ and purchasers’ quoting a posted price plus a premium, in what is known as the P-plus market, as additional evidence that posted prices are less than market value. Several states provided information to MMS about their experiences in resolving disputes with oil companies regarding the valuation of oil from leases on state lands. In general, the states disputed the oil companies’ use of posted prices as the basis for determining royalties paid to the states, and the disputes were settled by using spot prices and NYMEX prices. For example: Alaska reported settling a lawsuit filed against three major oil companies for about $1 billion. 
These companies produced oil and transported it directly to their refineries, paying state royalties based on prices the companies had themselves calculated. The state contended that these transactions from 1977 through 1990 were not at arm’s length and that the calculated prices were less than the market value of the oil. The amount of the settlement was determined using a complicated formula that was based on an average of spot prices; in addition, two of the companies agreed to use ANS spot prices to value subsequent transactions. A major oil company agreed to pay Texas $17.5 million to settle allegations that between 1986 and 1995 it had paid royalties on prices for oil from state leases that were less than market value. The company also agreed that it would subsequently value oil from state leases on the basis of NYMEX futures prices. Louisiana reported it settled 10 disputes involving oil companies that owned their own refineries and paid state royalties on posted prices from 1987 through 1998; these companies agreed to collectively pay about $6 million to settle these claims and to make future royalty payments based on average spot prices in the Louisiana oil market. New Mexico reported two settlements with a major oil company that used its own posted prices as a basis for state royalties from 1985 through 1995. The company paid the state about $2 million and agreed to calculate royalties based on higher NYMEX prices and higher posted prices offered by a nearby refinery. From December 1995 through June 1998, in five Federal Register notices and in 14 meetings throughout the country, MMS solicited public comments on its proposal to change the way oil from federal leases is valued for royalty purposes, and it has revised the proposed regulations three times in response to the comments received. 
Comments submitted by states were often at odds with comments provided by the oil industry: States generally support the proposed regulations because MMS anticipates that royalty revenues—which are shared with the states—will increase; the oil industry, on the other hand, generally opposes the proposed regulations because they would increase oil companies’ royalty payments and administrative burden. When MMS disagreed with a comment received, the agency provided reasons for not revising the proposed regulations as suggested. In total, MMS solicited comments on 39 major issues and received 183 letters in response. MMS has received 34 letters on its most recent revision of the proposed regulations but has not yet publicly addressed these comments. In its first Federal Register notice, published in December 1995, MMS announced that it was considering revising its oil valuation regulations because it had acquired evidence indicating that posted prices no longer represented market value. MMS solicited comments on seven major issues and received 25 letters. In response, representatives of the oil industry generally commented that they opposed any changes to the current regulations but that pending litigation prevented them from offering specific comments on the issues identified by MMS. Several states, on the other hand, commented that they believed that posted prices no longer reflected market value, provided evidence supporting their position, and recommended that MMS adopt spot prices or NYMEX futures prices for valuing oil from federal leases that was not sold at arm’s length. MMS’ second Federal Register notice, published in January 1997, contained the proposed regulations and asked for comments on 10 specific issues. 
The proposed regulations retained the use of gross proceeds for valuing federal oil sold at arm’s length—but reduced the number of oil companies that could use this method by restricting its applicability to those companies that had not sold oil in the past 2 years—and eliminated the use of posted prices for oil not sold at arm’s length. For these sales, MMS proposed that the value of oil from federal leases in Alaska and California would be based on ANS spot prices and that the value of oil from other federal leases would be based on NYMEX futures prices. Both the ANS and NYMEX prices would be adjusted for differences in the location of the leases and the quality of the oil. MMS received 70 written responses to this second notice. The oil industry generally opposed the proposed regulations, commenting that they were burdensome, that ANS and NYMEX prices did not reflect the market value of oil, that adjustments to these prices were burdensome and inadequate, and that the government should take its oil royalties in kind if it was dissatisfied with the current valuation regulations. Independent oil producers also commented that NYMEX prices should not be applied to the Rocky Mountain states because this oil market is geographically separate from the rest of the country. The states generally supported the proposed regulations, but individual states differed in their opinions on the applicability of NYMEX prices to value oil from federal leases and offered suggestions on the price adjustments for location and quality. The oil industry and several states opposed the proposed 2-year limitation on the use of the gross proceeds methodology, believing it was unnecessarily restrictive. 
In its third Federal Register notice, published in July 1997, MMS responded to the comments received by revising its proposed regulations: It deleted the proposed limitation on the use of the gross proceeds methodology, specifically asked for alternative suggestions for valuing oil not sold in arm’s-length transactions, and solicited comments on six additional issues. MMS received 28 written responses. Independent oil producers supported the deletion of the limitation on the use of the gross proceeds methodology. However, they also suggested an alternative system to value oil not sold at arm’s length by identifying and using a series of valuation methods based on comparable sales or purchases at the lease.

In its fourth Federal Register notice, published in September 1997, MMS reopened the comment period on the proposed regulations and solicited comments on eight additional issues, including the independent producers’ suggestion to identify and use a series of alternative methods to value oil not sold at arm’s length, a suggestion to value such oil using spot prices, and the need for a separate valuation system for the Rocky Mountain states. MMS disagreed with and dismissed the oil industry’s suggestion to initiate a royalty-in-kind program as an alternative to the proposed regulations, stating that the agency would seek input on this issue through other avenues. MMS received 28 letters in response to this notice. The oil industry generally supported the suggestion to use a series of methods to value oil not sold at arm’s length but offered no consensus on the nature of these valuation methods or their relative order; supported establishing a separate valuation methodology for the Rocky Mountain states, agreeing that this market is geographically isolated; and again suggested that the federal government take its royalties in kind.

MMS published its fifth and most recent Federal Register notice in February 1998, in which it again revised its proposed regulations. 
The regulations currently propose a separate system for valuing oil not sold at arm’s length in the Rocky Mountain states, thereby identifying three different domestic oil markets. The proposed regulations also eliminate the use of NYMEX prices in the rest of the country (but retain them as a last alternative for valuing oil not sold at arm’s length in the Rocky Mountain states), offer a definition of an oil company’s affiliate (transactions with affiliated companies are not considered to be at arm’s length), and adopt spot prices as a basis for valuing oil not sold at arm’s length outside Alaska, California, and the Rocky Mountains. MMS also made other modifications and sought comments on seven more issues; it received 34 letters in response. Although states generally support the proposed regulations, respondents from the oil industry continue to oppose them. The oil industry opposes the proposed identification of three oil markets, saying that this situation would be burdensome and would require oil companies to maintain three separate accounting systems. Representatives from the oil industry and two Rocky Mountain states further commented that the proposed valuation system for oil not sold at arm’s length in the Rocky Mountain states is unworkable because of the nature of the Rocky Mountain oil market. The oil industry also opposes MMS’ proposed definition of an affiliate, stating that it is too broad and would cause many sales that occur at arm’s length to be valued inappropriately. MMS has not yet publicly addressed the comments it received in response to its fifth Federal Register notice. In May 1998, in an amendment to the 1998 Emergency Supplemental Appropriations Act for the Department of Defense, the Congress directed MMS to not use any appropriated funds to publish final oil valuation regulations before October 1, 1998. MMS was in the process of responding to the comments but ceased its efforts as a result of this directive. 
In addition to publishing five notices in the Federal Register, MMS held 14 meetings around the country to further explain the proposed regulations and to solicit additional comments on them. In April 1997, the agency held public meetings in Houston, Texas, and Lakewood, Colorado. In May 1997, it met with representatives from the oil industry and Louisiana to solicit views on the first draft of the regulations. Following its September 1997 Federal Register notice, MMS held public meetings in Washington, D.C.; Lakewood, Colorado; Houston, Texas; Bakersfield, California; Casper, Wyoming; and Roswell, New Mexico. In February and March 1998, MMS also held public meetings on its current version of the proposed regulations in Houston, Texas; Washington, D.C.; Lakewood, Colorado; Bakersfield, California; and Casper, Wyoming. MMS also placed the five Federal Register notices, all 183 letters it received in response to these notices, and additional information concerning the proposed oil valuation regulations on the Internet home page of its Royalty Management Program. We found this site easy to use.

Although most oil and gas lessors take their royalties in cash, several limited programs exist in the United States and Canada under which lessors accept their royalties in kind: Oil royalty-in-kind programs are currently operated by MMS, the Canadian Province of Alberta, the City of Long Beach, the University of Texas, and the states of Alaska, California, and Texas; gas royalty-in-kind programs are also currently operated by Texas and the University of Texas. (App. II provides more information on these programs.) According to information from studies and the programs themselves, royalty-in-kind programs seem to be feasible if certain conditions are present. 
In particular, the programs seem to be most workable if the lessors have (1) relatively easy access to pipelines to transport the oil or gas to market centers or refineries, (2) leases that produce relatively large volumes of oil or gas, (3) competitive arrangements for processing gas, and (4) expertise in marketing oil or gas. However, these conditions do not exist for the federal government or for most federal leases: The federal government does not currently have relatively easy access to pipelines, has thousands of leases that produce relatively low volumes, has many gas leases for which competitive processing arrangements do not exist, and has limited experience in oil or gas marketing. Once produced from a lease, oil or gas generally becomes more valuable (i.e., can be sold for higher prices) the closer it is moved to a market center or refinery, and pipelines are often the only cost-effective means of transporting it. Several of the entities operating royalty-in-kind programs told us that having relative ease of access to pipelines is a key component of their programs because it assures them that they can transport their production when they need to at a relatively low cost. For example, Alberta uses its regulatory authority to direct its lessees to deliver the province’s oil royalties, using extensive pipelines that transport the oil to centrally located storage tanks, where oil marketers who are under contract with Alberta sell the oil. In Texas, state law mandates that all gas pipelines in the state accept and transport gas from the state’s gas royalty-in-kind program. Representatives of the oil royalty-in-kind programs in the City of Long Beach, the states of California and Texas, and the University of Texas reported that because oil from certain leases could be transported on only one pipeline charging high fees, they were unable to accept royalties in kind from these leases or incurred losses in selling this oil because of the high transportation fees. 
The federal government does not currently have the statutory or regulatory authority over pipelines that would ensure relative ease of access for transporting oil and gas from federal leases. In addition, some pipelines are privately owned, and the owners are free to set their own transportation fees. In some areas of the country, oil from federal leases can be transported on just a single pipeline, and the owner of that pipeline may charge substantial fees. In 1995, MMS conducted a limited royalty-in-kind program on federal leases in the Gulf of Mexico, collecting gas royalties in kind and offering gas for sale near the leases. Because purchasers had to transport the gas and pay transportation fees to use the privately owned pipelines, the purchase bids that MMS received were relatively low. MMS estimated that this program lost about $4.7 million (about 7 percent) when compared to the revenues the agency would have received if it had taken its gas royalties in cash. Oil and gas marketers we contacted confirmed that the federal government would need to transport any royalties in kind it received to market centers or refineries in order to increase its revenues.

To be cost-effective, royalty-in-kind programs must have volumes of oil and gas that are high enough for the revenues made from selling these volumes to exceed the programs’ administrative costs. The volumes of oil or gas that are needed for programs to be cost-effective vary among programs. For example, when Wyoming tried in 1997 to initiate a limited oil royalty-in-kind program on 508 leases that produced, on average, less than 3 barrels of oil in royalties per day, it did not receive any bids that would have allowed the state to generate more revenues than it already received by taking its royalties in cash. Texas and the University of Texas generally do not accept royalty volumes of less than 10 barrels daily in their oil royalty-in-kind programs. 
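The break-even logic behind these volume thresholds can be sketched in a few lines. All rates and dollar figures below are hypothetical illustrations, not program data: the point is only that the premium earned on a lease’s royalty barrels must exceed the per-lease administrative cost of taking them in kind.

```python
# Hypothetical break-even sketch for taking oil royalties in kind.
# The premium and administrative-cost figures are invented, and the
# one-eighth royalty share is an illustrative assumption.

ROYALTY_RATE = 0.125  # assumed one-eighth royalty share

def daily_royalty_bbl(production_bbl_per_day):
    """Barrels per day owed as royalty on a lease's production."""
    return production_bbl_per_day * ROYALTY_RATE

def in_kind_pays_off(royalty_bbl_per_day, premium_per_bbl, admin_cost_per_day):
    """True if the premium earned by selling royalty oil exceeds the
    daily administrative cost of running the in-kind program for the lease."""
    return royalty_bbl_per_day * premium_per_bbl > admin_cost_per_day

# A lease yielding 3 royalty barrels a day cannot cover even a modest
# overhead at a 25-cent premium; a 200-barrel lease easily can.
print(in_kind_pays_off(3, 0.25, 5.00))    # False: $0.75/day < $5/day
print(in_kind_pays_off(200, 0.25, 5.00))  # True: $50/day > $5/day
print(daily_royalty_bbl(6))               # 0.75
```

On this logic, the minimum-volume thresholds the programs report mark roughly where the extra revenue from selling royalty barrels begins to cover overhead.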
MMS does not accept oil royalties in kind from leases supplying less than around 50 barrels per day, because it believes that the benefits to refiners from smaller volumes would not offset its administrative costs. And while Alberta accepts all of its oil royalties in kind, these royalty volumes are relatively large: 200 to 10,000 barrels per day are common. Similar situations exist in gas royalty-in-kind programs; for example, program representatives from Texas and the University of Texas told us that they needed to have large volumes of gas—a minimum of either 300,000 or 2,000,000 cubic feet per day, depending on the pipeline used—to obtain pipeline transportation. The majority of oil and gas leases on federal lands produce relatively small volumes and are geographically scattered across many miles—particularly for federal leases located in the western states. For example, MMS estimates that about 65 percent of the wells on federal oil leases in Wyoming produce less than 6 barrels of oil daily, which would result in less than 1 barrel per day in oil royalties in kind. Most federal leases in the San Juan Basin of New Mexico also produce low volumes.

Because natural gas may need to be processed before it can be sold, arranging for this processing is a critical consideration in operating a gas royalty-in-kind program. The University of Texas noted that many of the university’s leases produce small volumes of gas requiring processing and that these volumes must be aggregated into a larger amount to be accepted by gas-processing plants. Many federal leases also produce small volumes of gas that need to be processed. In certain areas, there is only a single plant to process the gas from many of these leases. In these circumstances, the lack of competition might allow the plants to charge high fees. 
For example, MMS estimates that the federal government could lose up to $4.3 million annually if the agency accepted royalties in kind from federal leases in Wyoming for which there is access to only a single gas-processing plant.

Lessors who accept royalties in kind must sell the oil or gas to realize revenues, and they are likely to receive higher prices if they move it away from the lease and closer to marketing centers or refineries. Storing, transporting, marketing, and selling oil or gas can be complicated processes; profit margins are often thin; and there may be little room for error. The nonfederal royalty-in-kind programs have generally been in existence for years, and the entities running these programs have gained both experience and expertise. For example, Alberta has been actively marketing its oil royalties in kind since 1974. Similarly, the University of Texas has been accepting its gas royalties in kind and arranging for transportation since 1985. In contrast, the federal government has limited experience in marketing oil or gas royalties in kind. In addition to the limited oil royalty-in-kind program that MMS currently operates, in 1995 it conducted a limited gas royalty-in-kind program in the Gulf of Mexico. However, MMS’ experience in these programs has been limited to sales that occur at the lease; the agency has not transported its oil or gas to market centers or received higher revenues than it would have realized if it had instead taken cash royalties.

We provided a copy of a draft of this report to the Department of the Interior for its review and comment. The Department commented that this report provides a fair description of its oil valuation rulemaking efforts and of the issues it would face if required to implement a mandatory royalty-in-kind program. The Department also provided some minor technical clarifications, which we incorporated. Interior’s comments are reproduced in appendix III. 
We performed our review from March 1998 through July 1998 in accordance with generally accepted government auditing standards. Our scope and methodology are discussed in appendix IV. We will send copies of this report to appropriate congressional committees, the Secretary of the Interior, and other interested parties. We will also make copies available to others upon request. If you or your staff have any questions, please call me at (202) 512-3841. Major contributors are listed in appendix V.

In developing its proposed oil valuation regulations, the Minerals Management Service (MMS) received comments from the oil industry making the point that separate oil markets exist in different geographic areas of the United States. In response to these comments, MMS’ proposed regulations now identify three domestic oil markets: (1) Alaska and California; (2) the six Rocky Mountain states of Colorado, Montana, North Dakota, South Dakota, Utah, and Wyoming; and (3) the rest of the country.

A large portion of the oil produced in Alaska comes from the Prudhoe Bay region on the state’s North Slope. Alaska North Slope (ANS) crude oil, an intermediate grade of oil, is transported about 800 miles south through the Trans-Alaskan Pipeline System to Valdez, Alaska, where it is loaded onto oil tankers. Most ANS oil is shipped to oil refineries in the Puget Sound, Los Angeles, and San Francisco, although some is shipped to the Far East or refined in Alaska. ANS oil represents about 40 percent of the oil that is refined in California. In California, oil is produced from onshore leases—in the San Joaquin, Santa Maria, Ventura, and Los Angeles basins in southern California—and from leases off the coast—from Point Arguello southeast to Huntington Beach. Although a variety of grades of crude oil are produced in California, most of its oil is heavy. 
About two-thirds of the oil in California is produced by seven major oil companies, which also own about three-quarters of the refinery capacity in the state and have major investments in oil pipelines in the state. Many of these pipelines are common carrier lines that are regulated by the state and therefore must be made available to transport the oil of independent producers. However, these seven major oil companies also own three heated pipelines—which make the heavy oil more liquid and therefore more easily transported through pipelines—that are not common carrier lines; the seven major oil companies use their heated lines to transport their oil to their refineries in Los Angeles and San Francisco. Nearly all of the oil produced in California is refined within the state, and most of it is refined into gasoline. About 15 percent of the oil produced nationwide from federal leases is produced in Alaska and California. Most of this oil is transported by the major oil companies from the federal leases directly to their refineries, or it is exchanged for oil that is ultimately moved to these refineries, rather than being sold on the open market.

Production from the six Rocky Mountain states of Colorado, Montana, North Dakota, South Dakota, Utah, and Wyoming includes a wide range of crude oils from geographic basins in a variety of areas. These basins are often physically separated from one another by rugged terrain and long distances, resulting in local markets within the larger Rocky Mountain market. Individual wells often produce very low volumes—a few barrels a day are not uncommon. Important producing areas include the Powder River and Big Horn basins in Wyoming, the Williston Basin in Montana and North Dakota, the Uinta Basin in Utah, the Piceance Basin in western Colorado, and the Paradox Basin of the Four Corners area. 
About 8 percent of the oil produced nationwide from federal leases is produced in this region, and about 65 percent of this amount comes from the Wyoming basins. Oil produced in this region is refined almost exclusively within the region by small refineries. The larger of these small refineries are located in Billings, Montana; Denver, Colorado; Salt Lake City, Utah; and various locations in Wyoming. Most of the oil from the region is produced by independent producers who do not own refineries. These producers may market their oil themselves, or they may sell it to oil traders or marketers, who in turn sell and transport the oil to refineries.

In the rest of the country, most of the oil is produced from leases located in the Gulf of Mexico and onshore leases located in western Texas, the Gulf states, and the mid-continental states. Leases in the Gulf of Mexico account for about 75 percent of the total federal royalties received from oil leases nationwide. The region has a large number of oil companies, a well-integrated pipeline system, a large number of refineries, and a high refining capacity. Oil that is produced in western Texas and in New Mexico is refined locally or is gathered and transported via pipeline to the market center at Midland, Texas. From Midland, the oil flows either southeast to refineries along the Gulf Coast or northeast to the market center of Cushing, Oklahoma. From Cushing, oil often flows northeast to major oil refineries in Illinois. Oil that is produced in the Gulf of Mexico is generally transported via pipeline to market centers or refineries at Empire and Saint James, Louisiana. This oil can be refined locally or can be piped north to Cushing and ultimately to the Illinois refineries. Because of the extensive pipeline system, oil produced in this region can be easily transported; for this reason, the area has many oil traders, and oil is predominantly sold by these marketers. 
We examined seven oil royalty-in-kind programs and two gas royalty-in-kind programs that are currently operating in the United States and Canada. Sales of the oil that is taken as royalties occur competitively at the lease, noncompetitively at the lease, or after the oil has been transported to storage tanks. Sales of most of the gas that is taken as royalties occur after the gas is transported. We identified four oil royalty-in-kind programs under which the recipients of the oil royalties sell the oil in competitive sales that occur at the lease: programs operated by the City of Long Beach in California, the University of Texas, and the states of California and Texas. The primary purpose of all of these programs is to maximize revenues. In operating these programs, these entities generally select specific leases to include, solicit bids from interested parties to purchase the oil that has been taken as royalties in kind, and issue short-term contracts (normally from 6 to 18 months) to the successful bidders to purchase this oil. Bidders generally offer premiums above posted prices and must arrange and pay to transport the oil to market centers or refineries. These programs are limited in scope, involving relatively few of the entities’ oil leases that produce high volumes, and none of the programs currently has more than 13 active contracts.

Alaska and MMS both operate oil royalty-in-kind programs under which they sell the oil in noncompetitive sales to small refiners. In both programs, the sale occurs at the lease, and the purchaser arranges and pays to transport the oil to the refinery. Under Alaska’s program, the state directly negotiates sales with small refiners. By law, Alaska must realize revenues from selling this oil that are at least equal to what the state would receive under current sales prices for oil; however, the state tries to obtain bonuses on this oil of at least 15 cents per barrel. 
Currently, Alaska has three contracts involving about 170,000 barrels of oil per day. Under MMS’ program, the agency solicits interest from small refiners and makes oil from certain leases available if there is interest. MMS must receive an amount equal to the cash royalties that would have been paid plus a fee to cover its administrative costs. Currently, MMS administers six contracts covering 170 leases located in the Gulf of Mexico and off the shore of California.

In the Province of Alberta, Canada, royalties from all of the provincial oil leases must be taken as royalties in kind, which constitutes about 125,000 barrels of oil per day. These oil royalties in kind are not taken at the lease. Instead, the province directs its lessees to gather the oil from the leases (which are generally concentrated in one geographic area) and transport it to about 5,500 storage tanks that are centrally located; the province then reimburses the transportation fees. Alberta has 5-year contracts with three oil marketers, each of whom is generally responsible for one of three grades of oil and receives fees equal to 5 cents per barrel to sell this oil.

In addition to their oil royalty-in-kind programs, Texas and the University of Texas also operate small gas royalty-in-kind programs. Texas accepts gas royalties in kind from about 6 percent of its leases; the program is intended to increase royalty revenues for the state’s school fund and to provide gas to state facilities—schools, universities, hospitals, and prisons—at a cheaper price than is offered by local gas distribution companies. The University of Texas accepts gas royalties in kind from seven of its leases and sells this gas under a single contract. 
To determine what information MMS used to justify the need for revising its oil valuation regulations, we reviewed MMS’ reasons for proposing new regulations as published in Federal Register notices and read all of the comments submitted in response to the first notice that solicited information on oil marketing and the relevance of posted prices. We interviewed officials in MMS’ Royalty Valuation Division and reviewed the marketing studies for which MMS had contracted. We also reviewed the final report of the interagency task force that examined federal oil valuation in California and interviewed individuals who had served on that task force and individuals who were involved in the City of Long Beach’s litigation. In addition, we solicited information on lawsuits and settlements from state representatives present at a meeting of the State and Tribal Royalty Audit Committee in Denver, Colorado, and we subsequently contacted representatives of these various states for additional information. To ascertain how MMS addressed concerns expressed by the oil industry and states in developing its proposed regulations, we identified 39 major issues on which MMS had solicited comments in its Federal Register notices. We selected a judgmental sample of about 50 percent of the 183 letters that were submitted to MMS in response to these notices. In selecting this sample, we sought to represent a cross-section of the oil industry and included in our sample major oil companies that both produced and refined oil, large independent companies that only produced oil, small independent producers, independent refiners, oil marketers, and oil industry trade associations. Because the number of letters MMS received from states was significantly less than the number of letters MMS received from representatives of the oil industry, we read all of the comments submitted by states. 
We summarized concerns expressed by the oil industry and states on each of the 39 issues and determined how MMS addressed these concerns—that is, whether and how the proposed regulations were revised in response to the comments. In addition, we interviewed representatives from the following oil industry associations: the American Petroleum Institute, the Independent Petroleum Association of America, the Independent Petroleum Association of Mountain States, the Independent Oil Producers Association, and the California Independent Petroleum Association. We attended or read transcripts from several public meetings conducted by MMS on the proposed regulations.

To determine what existing studies and programs indicate about the feasibility of the federal government’s taking its oil and gas royalties in kind, we (1) identified and read two studies—a 1997 study by MMS of the feasibility of royalties in kind and a 1997 analysis by the Congressional Research Service on the oil royalty-in-kind program run by the Canadian Province of Alberta—and interviewed their authors and (2) identified nine royalty-in-kind programs that are currently in operation and interviewed representatives of these programs: the seven oil royalty-in-kind programs operated by MMS, the Canadian Province of Alberta, the City of Long Beach, the University of Texas, and the states of Alaska, California, and Texas; and the two gas royalty-in-kind programs operated by Texas and the University of Texas. We also reviewed an attempt by Wyoming in 1997 to take oil royalties in kind, reviewed a pilot program conducted by MMS in 1995 in the Gulf of Mexico to take gas royalties in kind, and interviewed MMS representatives who are designing limited royalty-in-kind programs that are planned for federal leases in Wyoming and the Gulf of Mexico. 
In addition, we interviewed oil and gas marketers who are active in the Rocky Mountains, mid-continental, and Gulf of Mexico regions; we met with technical staff in MMS’ Pacific Outer Continental Shelf Region in Camarillo, California; and we reviewed the proposed legislation mandating that MMS accept federal oil and gas royalties in kind, MMS’ analysis of the financial impact of this proposed legislation, and the Barents Group’s response to MMS’ analysis. We conducted our review from March 1998 through July 1998 in accordance with generally accepted government auditing standards.

Alan R. Kasdan
Pursuant to a congressional request, GAO reviewed the Minerals Management Service's (MMS) efforts to revise its regulations for valuing oil from federal leases, focusing on: (1) the information used by MMS to justify the need for revising its oil valuation regulations; (2) how MMS has addressed concerns expressed by the oil industry and states in developing these regulations; and (3) the feasibility of the federal government's taking its oil and gas royalties in kind, as indicated by existing studies and programs. GAO noted that: (1) in justifying the need to revise its oil valuation regulations, MMS relied heavily on the findings and recommendations of an interagency task force--composed of representatives from MMS and the Departments of Commerce, Energy, Justice, and the Interior--assembled in 1994 by Interior to study the value of oil produced from federal leases in California; (2) the task force concluded that the major oil companies' use of posted prices in California to calculate federal royalties was inappropriate and recommended that the federal oil valuation regulations be revised; (3) MMS subsequently determined that in other parts of the country as well, posted prices should not be used as the basis to calculate royalties on oil from federal leases; (4) beginning in 1995, MMS solicited public comments on the proposed regulations in five Federal Register notices and revised its proposed regulations three times in response to the comments received; (5) however, the agency did not agree with all the comments it received and in these cases provided reasons for not incorporating the suggested changes; for example, it dismissed the oil industry's suggestion that the government take its royalties in kind, noting that it planned to seek input on that issue through other means; (6) in total, the agency asked for comments on 39 major issues and received 183 letters from states, representatives of the oil industry, and other parties; (7) on its most recent revision of the proposed regulations, the agency received 34 comments but 
has not yet publicly addressed them; (8) information from studies of royalties in kind, as well as specific royalty-in-kind programs operated by various entities, indicates that it would not be feasible for the federal government to take its oil and gas royalties in kind except under certain conditions; (9) these conditions include having relatively easy access to pipelines to transport the oil and gas, leases that produce relatively large volumes of oil and gas, competitive arrangements for processing gas, and expertise in marketing oil and gas; (10) however, these conditions are currently lacking for the federal government and for most federal leases; and (11) specifically, the federal government does not have relatively easy access to pipelines, has thousands of leases that produce relatively low volumes, has many gas leases for which competitive processing arrangements do not exist, and has limited experience in oil and gas marketing.
Conditions in Cuba—a hard-line Communist state that restricts nearly all political dissent—pose substantial challenges to implementing, monitoring, and evaluating democracy assistance. USAID does not work cooperatively or collaboratively with the Cuban government, as it does in most countries receiving U.S. democracy assistance. The United States and Cuba do not have diplomatic relations, and the United States maintains an embargo on most trade. USINT staff is restricted to Havana. USAID does not have staff in Cuba, and Cuba program office staff have been unable to obtain visas to visit the island since 2002. Additionally, the range of Cuban partner organizations is significantly limited by U.S. law, which generally prohibits direct assistance to the Cuban government and NGOs with links to the government or the Communist Party. Cuban law prohibits citizens from cooperating with U.S. democracy assistance activities authorized under the Cuban Liberty and Democratic Solidarity Act; violations are punishable by prison terms of up to 20 years.

Tactics for suppressing dissent include surveillance, arbitrary arrests, detentions, travel restrictions, exile, criminal prosecutions, and loss of employment. Neighborhood committees (known as Committees for the Defense of the Revolution) monitor residents’ activities; those identified as dissidents are subject to intimidation (acts of repudiation), including psychological and physical violence. Independent groups, dissidents, and activists face constant harassment and infiltration by Cuban government agents. In 2003, the Cuban government arrested and sentenced 75 leading dissidents and activists to terms of up to 28 years in prison. The Cuban government accused some of these individuals of receiving assistance from USAID grantees. 
A Cuban human rights group known as Damas de Blanco (Ladies in White), formed after the 2003 crackdown, consists of dissidents’ wives, mothers, and sisters who peacefully protest for the unconditional release of political prisoners. There is no free press in Cuba, and independent journalists are harassed and imprisoned. The Cuban government also substantially restricts and controls the flow of information, routinely monitoring international and domestic telephone calls and fax transmissions. As of 2006, only about 200,000 Cubans out of a total population of 11 million had been granted official access to the Internet. The use of satellite dishes, radio antennas, fax machines, and cellular telephones is restricted due to high costs, laws, and the threat of confiscation. The customs service also routinely monitors mail, freight shipments, and visitors’ baggage for materials with political content. Further, the government routinely jams all external, non-Cuban broadcasts, including the U.S. government-supported Radio and TV Martí broadcasts. The Commission for Assistance to a Free Cuba was established by the President to identify measures to help end the Castro dictatorship and identify U.S. programs that could assist an ensuing transition. The commission’s May 2004 report recommended providing an additional $36 million to USAID, State, and other agencies’ grant programs supporting Cuban civil society, as well as $5 million for worldwide public diplomacy initiatives. The report also recommended the creation of a transition coordinator for Cuba at State, a post created and filled in 2005. The commission’s July 2006 report recommended providing $80 million over 2 years to increase support for Cuban civil society, disseminate uncensored information to Cuba, expand international awareness of conditions in Cuba, and help realize a democratic transition. The report also recommended subsequent annual funding of at least $20 million until the end of the Castro regime. 
These funds would be in addition to current funding for State and USAID democracy assistance programs and Radio and TV Martí. State and USAID officials said that the commission’s 2004 report provides the policy framework for their agencies’ respective grant programs (see table 1). State and USAID lead interagency efforts to provide democracy assistance to independent civil society groups and individuals in Cuba. However, we found weaknesses in the communications between State and USAID regarding the implementation of this assistance. State and USAID made awards to three types of grantees: Cuba-specific NGOs, NGOs with a worldwide or regional focus, and universities. Prior to 2004, all USAID awards were based on unsolicited proposals. In 2004–2005, USAID and State used a competitive process to select grantees. Since the program’s inception, USAID extended the amount and length of about two-thirds of the 40 grants and cooperative agreements it awarded. Since 1996, State and USAID have led the implementation of U.S. democracy assistance focused on Cuba. We observed weaknesses in communication between responsible State and USAID bureaus and offices. State’s Office of Cuban Affairs (under the Bureau of Western Hemisphere Affairs) and USAID’s Cuba program office (under the Latin America and Caribbean Bureau) have led the implementation of assistance programs that support the development of democratic civil society in Cuba, coordinating their activities primarily through an interagency working group. This working group also includes representatives from the National Security Council, Commerce (Bureau of Industry and Security, Foreign Policy Controls Division), and Treasury (Office of Foreign Assets Control). USAID has funded democracy assistance grants and cooperative agreements for Cuba since 1996. USAID’s Cuba program is overseen by a director and one junior officer. In 2005, State initiated a grant program for Cuba democracy assistance through DRL. 
Headed by an assistant secretary, DRL leads U.S. efforts to promote democracy, protect human rights and international religious freedom, and advance labor rights globally. The Director of U.S. Foreign Assistance (who serves concurrently as the USAID Administrator) is responsible for coordinating State and USAID democracy assistance worldwide and is developing a strategic framework and procedures to ensure that programs match priorities; the various bureaus and offices within State and USAID continue to participate in program planning, implementation, and oversight. Table 2 outlines the roles and responsibilities of key executive branch agencies in providing democracy assistance to Cuba. As the table shows, USINT plays an important role in implementing State and USAID democracy assistance focused on Cuba. In addition to these tasks, USINT administers immigration and refugee programs, maintains regular contact with Cuban activists and other embassy officials, and files reports regarding human rights abuses. Effective internal control requires effective communication with key stakeholders who have a significant impact on whether an agency achieves its goals. However, during our fieldwork in Havana and Washington, D.C., we found that communications were sometimes ineffective between State bureaus and offices, USINT, and USAID regarding the implementation of U.S. democracy assistance focused on Cuba. Most critically (given that USAID does not have staff in Cuba and the Cuba program office staff cannot visit the island), routine communication links between USAID and USINT had not been established. Specific examples include the following: USAID did not receive reports prepared by USINT assessing some independent NGOs in Havana, although some of these organizations received assistance from USAID grantees. 
These reports summarize the observations made during USINT site visits and also recommended adjustments in the level and type of assistance distributed to individual NGOs. Given the lack of a USAID presence in Cuba, information provided in these reports would improve USAID officials’ knowledge of how some assistance is being utilized. USAID’s Cuba program director did not participate in the evaluation and ranking of democracy assistance proposals submitted to State’s DRL. (He had an opportunity to provide comments after State’s review panel had met.) The director said that he potentially could have provided important “lessons learned” about these proposals, based on almost a decade of experience implementing assistance in Cuba. State officials said that the omission of the USAID Cuba program director from the technical review panels was an oversight and that DRL would take steps to ensure USAID’s participation on future Cuba panels. USINT officials said that they received limited information from USAID about its grantees’ on-island activities, such as specific groups or individuals receiving U.S. support. The information these officials had about such matters was based on direct contact with some grantees and comments from dissidents. More complete information about grantee activities would provide a basis for USINT to monitor and report more systematically on groups and individuals receiving U.S. assistance. USINT officials said that they had little advance knowledge of the types and amounts of assistance that USAID grantees expected them to distribute. In addition, they said that some grantee-provided books and other materials had been inappropriate or ill-suited for promoting democracy in Cuba. These officials stated that U.S. assistance would be more effective if they had more advance information about—and input into—grantee shipments. USAID officials agreed that better communication is needed to coordinate these activities. 
In commenting on this report, State and USAID officials recognized the benefits of improved interagency communication on Cuba democracy assistance and noted that they were taking steps in this direction, such as providing USAID program officials with access to classified communications between State and USINT. According to the USAID Cuba program director, access to classified communications should allow better coordination with USINT on grantee shipments to the island. In addition, State said that DRL, the Bureau of Western Hemisphere Affairs, the Office of the Cuba Transition Coordinator, and USAID would meet regularly to share information gathered in quarterly meetings with grantees. USAID officials commented that including the Cuba program office in interagency working groups and weekly staff meetings of State’s Cuba desk would improve operational coordination. Including the Cuba program office in communications between USINT and grantees also would benefit USAID grantee oversight and management. USAID and State democracy assistance generally aims to support independent civil society groups and individuals in Cuba. The 1992 and 1996 acts authorized support for individuals and independent NGOs in Cuba, such as sending humanitarian assistance to victims of political repression and their families; providing material and other support; sending books and other information; and supporting visits and the permanent deployment of independent human-rights monitors. The USAID Cuba program’s strategic objective is “to help build civil society in Cuba by increasing the flow of accurate information on democracy, human rights, and free enterprise to, from, and within Cuba.” Table 3 summarizes the DRL, USINT, and USAID program activities for democracy assistance targeted at Cuba. 
In implementing their program objectives, State and USAID awarded 44 grants and cooperative agreements from 1996–2005 to 34 grantees in three categories: Cuba-specific NGOs received awards totaling $37.3 million (about 51 percent of the total value of the awards); NGOs with a worldwide or regional focus received awards totaling $28.7 million (about 39 percent of the total value of the awards); and universities received awards totaling about $7.6 million (about 10 percent of the total value of the awards). All 34 grantees are U.S.-based, and most are located in Washington, D.C., or Florida. Table 4 summarizes State and USAID awards from 1996–2005. Some of the NGOs with a worldwide or regional focus have a relatively long history working on Cuba issues. In some cases, these NGOs have received grants from NED. From 1984–2005, NED awarded 158 grants totaling $13.3 million for democracy assistance for Cuba. Established by Congress in 1983, NED is a private nonprofit corporation with the purpose of encouraging and supporting activities that promote democracy around the world. As part of its global grants program for “opening dictatorial systems,” NED assistance to Cuba has focused on providing aid to journalists, independent workers’ organizations, and cooperatives, while maintaining exile-based programs that defend human rights, provide uncensored information, and encourage dialogue about a country’s political future. NED’s independent governing board makes decisions about which assistance proposals the organization funds. In 2005, using a $3 million grant from DRL, NED funded 16 Cuba-related grants totaling about $2.2 million. (Four of the 16 grantees also have active USAID grants for Cuba democracy assistance.) Our analysis showed that about 95 percent ($61.9 million) of USAID’s total awards were made in response to unsolicited proposals. From 1996–2004, USAID made 34 awards ($54.7 million) based on unsolicited proposals. 
The unsolicited proposals were evaluated by the interagency working group (see table 2). In 2004–2005, USAID made 5 awards ($3.5 million) based on two requests for applications (RFA). The proposals received in response to these RFAs were evaluated and ranked by two technical evaluation committees that included State and USAID officials. In 2005, USAID also made an additional award to a previous grantee for $7.2 million based on an unsolicited proposal. The USAID Assistant Administrator for Latin America and the Caribbean authorized the negotiation of awards for both unsolicited and solicited proposals. All awards ultimately were approved by an agreement officer in USAID’s Office of Acquisition and Assistance. In keeping with the Federal Grant and Cooperative Agreement Act, USAID policy encourages competitive awards for grants and cooperative agreements in most circumstances so that the agency can identify and fund the best projects to achieve program objectives. USAID’s general policy is to award all grants and cooperative agreements competitively, seeking applications from all eligible and qualified entities. However, USAID policy permits funding unsolicited proposals (without the benefit of competition) when certain criteria are met. For example, an unsolicited proposal may be funded if USAID did not solicit the proposal and it presents a unique or innovative approach, fully supports U.S. development priorities, and demonstrates a unique capacity for the applicant to carry out program activities. In such cases, USAID guidance requires that officials explain the circumstances that justify funding these proposals. The USAID Cuba program director told us that the interagency working group (see table 2) had opposed prior attempts to employ a competitive process for selecting grantees. 
USAID’s successful use of competitive solicitations for some awards in 2004–2005 suggests that the Cuba program could have employed this selection strategy for at least some prior awards. A total of 27 NGOs responded to USAID’s 2004 and 2005 RFAs. USAID’s technical evaluation committees found the proposals submitted by 12 of the 27 applicants “within the competitive range” of the RFAs, and recommended awarding cooperative agreements to 6 applicants and asking an additional 6 to submit (revised) best and final proposals. Eight of the 12 applicants had not received prior awards for U.S. democracy assistance for Cuba. In technical comments on this report, USAID officials said that using a competitive process will not always result in grantees different from those that would be selected using a noncompetitive process. All four State awards in 2005 ($8.1 million) were made competitively; two of these awards ($4.5 million) were to USAID Cuba grantees. Proposals received in response to State’s RFA were reviewed and evaluated by two technical committees (panels) that included officials from State’s Bureau of Western Hemisphere Affairs, DRL, and USINT; awards were approved by the Assistant Secretary of State for Democracy, Human Rights and Labor. As previously discussed, the USAID Cuba program director received copies of the proposals for comment but did not participate in the technical panels. In commenting on this report, State officials said that DRL would continue to the greatest extent possible to use a competitive process for Cuba grants. State officials also said that DRL’s standard practice is to solicit participation by USAID and the appropriate regional bureau on all its evaluation panels, and that they will ensure that this policy is followed on future Cuba panels. Our analysis showed that USAID modified 28 of the 40 agreements awarded between 1996 and 2005 to increase funding, extend program completion dates, or both. 
In several cases, these modifications substantially altered grantees’ project objectives. These modifications increased the aggregate value of these agreements nearly eight-fold—from about $5.9 million to nearly $50.1 million—and extended the program completion dates by an average of about 3 years. Between November 1997 and May 2006, USAID had modified 12 agreements that we reviewed in detail (see fig. 1). These modifications increased the aggregate value of these agreements from about $4.8 million to nearly $42.3 million and extended the program completion dates by an average of about 4.6 years. USAID policy requires that some modifications and extensions be justified, such as those that extend the life of the award and simultaneously either increase the total estimated amount of the award or change the program description. Officials must explain why the benefits of continuing the assistance activity with the same grantee exceed the benefits of a competitive process favored by law and agency policy. USAID Cuba program officials stated that they modified existing agreements (rather than initiating new ones) to prevent disruption of assistance programs. Additionally, officials said that they wanted to avoid the administrative burdens associated with awarding new grants or cooperative agreements. However, USAID procurement officials told us that, whether modifying an existing agreement or making a new noncompetitive award to the same grantee, a similar amount of work is required. These officials also identified several advantages to closing out awards and making new ones. Following established closeout procedures, for example, provides additional assurance that grantee expenditures to date have been appropriate, and end-of-project reports provide important information about project accomplishments and failings to date. 
As discussed in a following section, the Cuba program office has decided to require grantees to submit interim evaluations when requesting significant project modifications or extensions. USAID reports that its grantees have provided a wide range of democracy-related assistance since the Cuba program’s inception. In 2005, the 10 grantees that we reviewed delivered humanitarian and material assistance, training, and information to Cuba. In addition, several of these grantees worked to increase international awareness of the Cuban regime’s human rights record; others planned for a democratic transition in Cuba. Recipients of U.S. humanitarian and material assistance, training, and information included human rights activists, political dissidents, independent librarians, journalists, and political prisoners and their families. Grantees employed several methods to deliver these items to the island. According to grantees and U.S. officials, these methods involve different security, flexibility, and cost considerations. Increasingly since 2000, USINT has distributed some grantee-funded assistance directly. USINT also provides information, electronic equipment, and other support to Cubans using its own funding. According to data provided by USAID, from 1996 to 2006 the Cuba program provided the following assistance: 385,000 pounds of medicines, food, and clothing; more than 23,000 shortwave radios; and millions of books, newsletters, and other informational materials. In addition, USAID reported that U.S. assistance supported journalism correspondence courses for more than 200 Cubans, the publication of about 23,000 reports by independent Cuban journalists about conditions or events in Cuba, and visits to Cuba by more than 200 international experts to help train and develop independent NGOs. Dissidents we interviewed in Cuba said that they appreciated the range and types of U.S. 
democracy assistance, that this assistance was useful in their work, and that this aid demonstrated the U.S. government’s commitment to democracy in Cuba. Dissidents said they appreciate the moral support that U.S. assistance provides, and that this aid enhanced their ability to continue their pro-democracy work. In 2005, the 10 grantees we reviewed reported activities in four categories: (1) providing humanitarian and material assistance and training to independent civil society groups and individuals; (2) disseminating uncensored information to, within, and from Cuba; (3) increasing international criticism of the Cuban regime by highlighting its human and workers’ rights violations; and (4) planning for a future transition to democracy by sponsoring conferences and publishing studies. Our analysis of quarterly reports and other records show that these grantees provided substantial assistance in 2005 (see table 5). In technical comments on this report, USAID officials said that the purpose for providing novels, video games, children’s coloring books, and some other items listed in table 5 is to attract Cubans to independent libraries and other organizations so that they can review other materials on democracy, free markets, and other subjects. Eight grantees also reported conducting international outreach and advocating for human and workers’ rights causes in Cuba (either directly or through subgrantees). Our analysis of quarterly reports shows that these grantees were involved in organizing or participating in the following types of activities in 2005: conferences and meetings held by groups such as the United Nations Human Rights Commission in Geneva, Switzerland, and the Organization of American States General Assembly, in Ft. 
Lauderdale, Florida; meetings with foreign government and political leaders to discuss human and workers’ rights in Cuba and possible support for activists; conferences and meetings of civil society groups; press conferences, news releases, and other events related to human and workers’ rights in Cuba; and mail, e-mail, and letters distributed to foreign government officials. One grantee was primarily focused on planning for a democratic transition. This grantee reported that it commissioned academic studies, compiled databases, and organized seminars in the United States and a Latin American country. These resources were made available in print and online. Our analysis of quarterly reports and other records shows that the recipients of U.S. assistance in 2005 included political prisoners and their families, independent librarians, journalists, political parties, labor organizations, other civil society groups and activists, and, to a lesser extent, the general Cuban public (see table 6). According to USINT officials, recipients sometimes give away or sell books, magazines, newspapers, or other assistance. According to senior USINT officials, these actions may have the unintended effect of expanding the reach of U.S. assistance. Senior U.S. officials viewed these losses due to confiscation or reselling as an unavoidable cost of providing democracy assistance in Cuba’s repressive political and economic environment. However, in technical comments on this report, USAID officials said that, despite potential benefits of expanding the reach of U.S. assistance, selling such assistance is not allowed under USAID policy. USAID recently sent an e-mail to its grantees reminding them that they are forbidden to sell or knowingly condone the selling of humanitarian aid or other assistance by recipients. The grantees in our sample reported using several methods to deliver humanitarian aid and material assistance, training, and informational materials to Cuba. Grantees and U.S. 
officials said that these methods involved different security, flexibility, and cost considerations. For example, the estimated cost of delivering humanitarian or material assistance to the island ranged from about $4 to $20 per pound. Some grantees have taken steps to reduce the risk of loss—due to theft or confiscation by the Cuban government—of assistance shipped to Cuba. Dissidents we interviewed in Havana said that the assistance they received from USAID and State grantees (and other organizations) was sometimes interrupted. In addition, USAID officials said that the Cuban government closed some independent libraries and confiscated their books and equipment in 2005. We plan to issue a classified version of this report that would provide additional information about the methods used to deliver U.S. assistance to Cuba, steps taken to reduce losses of assistance shipped to the island, and some of the recipients of U.S. assistance in Cuba. USINT data shows that it delivers assistance and information to more than 2,500 individuals and groups in Cuba. In 2005, for example, the office distributed over 269,000 books, magazines, articles, pamphlets, and other materials. According to U.S. officials, USINT’s role in delivering democracy assistance has increased since 2000—as indicated by the substantial increase in the volume of items distributed. These officials also said that further expanding the volume of items distributed would require additional staff and resources. The assistance delivered by USINT was funded by State and USAID grantees as well as by USINT. According to U.S. officials, USINT purchased materials, equipment, and information, including U.S. national news and professional magazines, such as the Spanish-language versions of Newsweek, The Economist, Art in America, The Atlantic, Popular Mechanics, and Downbeat. In 2004, the office also purchased equipment, materials, and an electronic subscription allowing it to publish onsite 300 copies of the El Nuevo Herald daily newspaper. 
USINT also purchased and distributed radios, laptop computers, and DVD players. Some of this material and information distributed by USINT is redistributed by individuals and groups to other locations in Cuba. During our fieldwork at USINT, we observed employees unload, sort, and distribute shipments sent by USAID grantees and one U.S.-based NGO, as well as items purchased by USINT. Shipments included materials for independent librarians and journalists, artists, musicians, academics and teachers, churches, and foreign diplomats. Some of these shipments were addressed to specific individuals. USINT officials said that they deliver information directly to some Cuban government officials. USINT officials also distribute literature and equipment to Cubans visiting the consular section for visas or other business. As part of its public diplomacy efforts, USINT provides videoconferencing capabilities and public Internet access to facilitate the work of State and USAID grantees. For example, grantees use Internet-based video conferencing for training sessions. We observed a training session organized by one USAID grantee for approximately 20 independent journalists. In addition to the training, the participants said that they had received other U.S. assistance, such as equipment, supplies, and help in publishing their stories outside Cuba. USINT also provides public access to about 20 computers with Internet access, printers, and copiers. During our fieldwork, we observed that a number of Cuban activists used these computers. The computers also appeared to be popular with the Cuban public—reservations for using them were booked for a month in advance, according to USINT employees managing this equipment. Additionally, as part of USINT’s public diplomacy program, the public affairs office also compiles and selects daily news clippings and quotes to display on an electronic billboard news ticker located on USINT’s exterior. 
This billboard was installed in January 2006 to display information for people passing the building, which is located on a major Havana street and pedestrian walkway. USAID’s internal controls over both the awarding of Cuba program grants and the oversight of grantees do not provide adequate assurance that the grant funds are being used properly or that grantees are in compliance with applicable laws and regulations. The Guide to Opportunities for Improving Grant Accountability states that organizations that award grants need good internal control systems to provide adequate assurance that funds are properly used and achieve intended results. However, we found some weaknesses in internal control in the preaward, award, implementation, and closeout phases of Cuba-program grants management. The agency’s preaward reviews of grantees often were not completed prior to grant awards, and USAID auditors did not adequately follow up to correct deficiencies after grant awards. In addition, the standardized terms and conditions of grants and cooperative agreements lacked the detail necessary to support adequate accountability; specifically, the grants and cooperative agreements did not include a requirement for an acceptable internal control framework, nor did they contain provisions for correcting deficiencies noted by preaward reviews. USAID’s Cuba program office also does not have adequate policies and procedures for assessing grantee risks and putting appropriate procedures in place to reduce those risks. In addition, a lack of adequate oversight and monitoring by USAID’s program office allowed questionable expenditures by grantees to go undetected; moreover, grantee compliance with cost-sharing provisions was not adequately addressed. The program office also did not provide adequate training to grantees and does not appear to routinely follow prescribed closeout processes. 
These weaknesses in agency and program office internal control policies and procedures contributed to internal control deficiencies we found at 3 of the 10 grantees we reviewed, leaving USAID’s Cuban democracy program at increased risk of fraud, waste, and abuse. We referred the problems we identified at these 3 grantees to the USAID Office of Inspector General. USAID guidance requires grant officers to determine whether the potential recipient possesses, or has the ability to obtain, the necessary management competence in planning and carrying out assistance programs, and whether it practices mutually agreed upon methods of accountability. As addressed in the Guide to Opportunities for Improving Grant Accountability, an effective review performed before the award—which includes a general review of the control environment and the control activities in place—helps to detect and correct control weaknesses that could contribute to potential fraud, waste, and abuse of grant funds. The potential grantee can then correct these weaknesses before USAID provides funding. During our site visits, we identified fundamental internal control weaknesses at three grantees that might have been mitigated if USAID had performed more timely preaward reviews and performed the necessary follow-up on findings. (Table 7 lists some examples of the internal control weaknesses we identified at these three grantees.) First, in four of the eight instances in which preaward reviews were conducted, the reviews were completed after the awards were made. According to USAID officials, these four reviews were issued from 3–33 days after the award date primarily because of staffing shortages. However, in technical comments on our report, USAID officials said that the agreement officer received oral findings from USAID or Defense Contract Audit Agency auditors before the final report. 
We also identified one preaward review conducted for USAID by the Defense Contract Audit Agency that appears to have had limitations and weaknesses in its implementation. This review, dated November 20, 2002, concluded that one of the three grantees for which we identified fundamental internal control weaknesses had an adequate accounting system. However, during our site visit in 2006, this grantee could provide only some paid invoices and bank statements for transactions before February 2005. These records were insufficient for tracking and reporting accumulated grantee expenditures or reconciling bank accounts. Second, USAID’s follow-up on preaward reviews was insufficient to provide assurance that deficiencies and weaknesses found during the preaward reviews were adequately addressed. Five of the eight preaward reviews we assessed made recommendations for correcting deficiencies in the grantees’ accounting systems that could adversely affect grantees’ ability to record, process, summarize, and report direct and indirect costs. However, the corresponding grants and cooperative agreements did not include specific provisions for correcting these deficiencies. Moreover, although all eight reviews we assessed recommended follow-up reviews, USAID did not conduct most of them in a timely fashion. In one case, USAID did not conduct a follow-up review until 3 years after such a review was recommended in an initial review. In technical comments on this report, USAID officials said that in the past there were some instances where resources for preaward and follow-up reviews were not available, but that obtaining funding for these reviews is generally a priority for USAID and the Office of Acquisition and Assistance. 
USAID officials stated that the Office of Acquisition and Assistance will work with the Cuba program office to ensure information regarding grantee audits is communicated to all appropriate staff in a timely manner and that if any subsequent audits are necessary, adequate funding will be made available. We performed a detailed review of four cooperative agreements and one grant agreement that USAID signed between 1997 and 2005 for democracy assistance for Cuba. These agreements had a variety of objectives, ranging from providing humanitarian assistance to dissidents and their families to providing information about conditions in Cuba to the Cuban public and the international community. In general, however, the standardized language of the agreements did not contain sufficient detail to address the unique objectives of each grant, the grantee's internal controls, or the remediation of known grantee deficiencies. This increases the risk that grantees will use program funding, either unintentionally or intentionally, for purposes that are not intended by the program and that program assets will not be adequately safeguarded. According to the Guide to Opportunities for Improving Grant Accountability, the terms, conditions, and provisions in the award agreement, if well designed, can render all parties more accountable for the award. The terms and conditions in the USAID grants and cooperative agreements we reviewed generally lacked the detail necessary to provide adequate guidance to grantees. For instance, although providing humanitarian assistance is a common objective, the agreements provided insufficient detail for grantees to differentiate between allowable and unallowable types of such assistance. In addition, rather than providing guidance in the agreement document, the agreements pointed to additional sources of rules and regulations, including supporting legislation that the grantees might have difficulty locating or implementing without additional guidance. 
For example, the agreements do not contain details about acceptable cost-sharing contributions, but instead direct grantees to the Code of Federal Regulations. The grant agreements we reviewed also did not include provisions requiring grantees to establish and maintain an acceptable internal control system or, as previously discussed, provisions for correcting deficiencies identified during preaward reviews. Internal controls should be designed to provide for ongoing monitoring in the course of normal operations. We identified several weaknesses in the USAID Cuba program office’s oversight and monitoring of grantees’ implementation of grants and cooperative agreements, including the lack of policies and procedures for identifying at-risk grantees, formal oversight of grant implementation, and a framework for monitoring cost sharing. In addition, the program office provided inadequate training to grantees. These weaknesses exist in a restrictive environment where the Cuban government precludes Cuba program officers from directly observing the use and outcomes of the assistance. The USAID Cuba program office does not have adequate policies and procedures for assessing and managing the risks associated with specific grantees. USAID Cuba program officials have not performed a formal risk assessment of the grantees providing assistance to Cuba, although they said that they consider recipients of larger awards to be higher risk. Larger recipients often are subject to the Single Audit Act and annual financial statement audits, and are therefore subject to internal control and compliance testing. The program director and program office staff said that they visit grantees at least quarterly. However, one of the grantees we reviewed said that USAID officials do not conduct formal financial oversight visits to their office. Visits to large and small grantees were not formally documented and were not based on structured oversight procedures. 
In addition, the USAID program office performed few, if any, reviews of recipients' financial records, increasing the risk that recipients would operate without effective controls. USAID Cuba program officials said that if the applicant had a prior history of managing USAID or U.S. government contracts or grants, USAID contacted the cognizant USAID or other federal agency technical officer for information about those awards. For applicants without a prior history of managing such federal awards, the program office verified that the applicant had received 501(c)(3) status from the Internal Revenue Service (IRS). USAID also conducted local inquiries to verify the reputation and qualifications of the applicant. USAID's Cuba program office does not have a formal grantee monitoring and oversight process to help ensure accountability for grant funds. We found key weaknesses in the oversight USAID did provide. First, USAID lacked adequate documentation of the grantees' implementation plans. Five agreements between USAID and grantees specified that grantees were to submit implementation plans for approval before initial disbursements. A USAID official said the plans had been communicated orally or included in the grantees' initial proposals. However, we found inadequate documentation in USAID's files to support this. In addition, some grantees with whom we spoke lacked an understanding of USAID's requirements for implementation plans. For example, two grantees could not confirm the existence of implementation plans for their respective grants. Second, USAID did not require grantees to submit detailed, well-supported quarterly reports and did not have a formal process for reviewing those reports. Along with a narrative report, USAID requires grantees to submit one-page quarterly financial reports; it does not require supporting documentation to validate the underlying expenditures. 
Although grantees provide summary amounts for expenditures and obligations, the financial information required by USAID in the quarterly reporting process is not sufficiently detailed to help the program office identify potentially inappropriate expenditures. In addition, USAID does not have a formal process for reviewing this reporting. The lack of formal quarterly review procedures and documentation reduces USAID’s ability to identify and correct inappropriate expenditures by grantees. In technical comments on this report, USAID officials said that the Paperwork Reduction Act limits USAID’s ability to require, as a general rule, grantees to report information in addition to that required under OMB circular A-110 and 22 CFR Part 226 without approval from OMB. USAID officials said that they will consider pursuing OMB approval. Third, USAID does not have a protocol for monitoring visits to grantees and does not document the results of those visits. Our Standards for Internal Control in the Federal Government addresses the need for developing and implementing detailed procedures for grantee monitoring. During our fieldwork, we accompanied USAID Cuba office staff on site visits to several grantees. During this fieldwork, we observed that USAID officials did not use a structured review process or coordinate their reviews to prevent gaps or duplication of efforts. USAID officials did not prepare trip reports or other written summaries of their observations during these site visits. Some grantees stated that program officials generally examined only a limited number of invoices during their visits. One program office staff member said that, during site visits, he typically spent about an hour interviewing grantee representatives and reviewing records at each grantee. 
USAID's Cuba program office did not have a framework for overseeing grantee compliance with cost-sharing requirements in their grants and cooperative agreements and could not determine whether grantees were complying with these requirements. Cost sharing, an important element of the USAID–grant recipient relationship, is applied to certain grantees on a case-by-case basis. If USAID includes a cost-sharing provision in an agreement, the respective grantee must finance a specified amount of activity costs using nonfederal funds. Some agreements allow grantee contributions to include nonmonetary contributions, such as services and property, in addition to cash contributions. Twelve of the 13 USAID agreements we reviewed contained cost-sharing provisions, totaling about $7.6 million. In some cases, the grantee's cost share was a significant portion of the total amount of assistance authorized under the agreement. For example, one grantee's initial share represented 56 percent of the total estimated program amount. Moreover, as previously discussed, the cost-sharing provisions we reviewed offered little guidance about the allowable sources of cost-sharing funds or the methods for valuing nonmonetary contributions applied toward the cost share, instead directing grantees to the Code of Federal Regulations. Grantees are required to periodically report to USAID the amounts they have spent as their portion of the cost sharing. However, based on a review of grantee documentation and interviews with agency staff, we determined that USAID does not systematically monitor grantee compliance with cost-sharing requirements. For example, staff does not use a work program or structured methodology to determine whether grantees comply with cost-sharing provisions in their respective agreements. Two of the USAID grantees we reviewed reported that they complied with USAID grant regulations by applying funds received under grants from NED toward their required share of program costs. 
USAID grant regulations at 22 CFR 226.23 require grantees to meet their cost-sharing requirement with nonfederal resources. For the purpose of complying with USAID grant regulations on cost-sharing requirements, it is unclear whether funds received under grants from NED constitute federal or nonfederal resources. USAID officials, after consulting with State and NED officials, have determined that NED funds provided from U.S. government sources cannot be used by NED grantees to meet required cost-share contributions under USAID regulations. USAID officials said that they will address the proper use of NED grant funds provided from U.S. government sources in relation to existing and future USAID grants. One important role for a grantor program office is the training and guiding of program grantees, as discussed in the 2005 Guide to Opportunities for Improving Grant Accountability. However, USAID does not provide formal grant management training to help grantees understand the regulations, policies, and procedures governing grant funds. According to USAID officials, limited English proficiency has created additional challenges for some of the smaller grantees. The Cuba program director stated that he had wanted to provide formal training to certain grantees, but was concerned about how grantees would react to training requirements imposed on some, but not all, grantees. In technical comments on this report, USAID officials said that although grantees are responsible for understanding and complying with grant provisions and federal laws and regulations, USAID will consider providing Spanish-language technical assistance to grantees to build NGO capacity for financial management. USAID also is working to provide grant and regulation information to grantees in Spanish. 
Closeout processes can be used for identifying problems with grantee financial management and program operations, accounting for any real and personal property acquired with federal funds, making upward or downward adjustments to the federal share of costs, and receiving refunds for unobligated funds that the grantee is not authorized to retain. USAID did not provide us with evidence that it routinely performed closeout processes for some agreements. Currently, USAID guidance states that if a U.S. grantee requires a closeout audit, the Office of Acquisitions and Assistance must include a closeout audit request in the next regularly scheduled audit of the organization. In technical comments, USAID officials said that such audit requests are no longer made because the agency uses a database system to track whether grantees required to have closeout audits receive one in accordance with agency policies and procedures. The Office of Acquisitions and Assistance recognizes that the current written policy regarding closeout procedures is outdated and is working to update it. During our limited reviews, we identified fundamental internal control weaknesses at 3 of the 10 grantees that most likely would have been detected had USAID followed up on the weaknesses identified by its preaward reviews. In addition, the lack of adequate oversight and monitoring by USAID's program office allowed questionable expenditures by three grantees to go undetected. Table 7 summarizes the internal control weaknesses we observed at these grantees. The 3 grantees discussed in table 7 accounted for about 9 percent ($4.7 million) of the awards received by the 10 grantees we reviewed. Two of the 3 grantees detailed above did not maintain adequate records of the amount and type of assistance or materials sent to Cuba, the methods and dates assistance was sent or transmitted, or efforts to verify that assistance was received. 
Additionally, these two grantees had not established systematic procedures for gathering, documenting, and reporting this information. For these three grantees, we identified numerous questionable transactions and expenditures that USAID officials likely would have identified had they performed adequate oversight reviews. For example, two grantees had inadequate support for checks written to key officials of their own organizations. In addition, one of these two grantees could not justify some purchases made with USAID funds, including a gas chainsaw, computer gaming equipment and software (including Nintendo Gameboys and Sony Playstations), a mountain bike, leather coats, cashmere sweaters, crab meat, and Godiva chocolates. According to this grantee's proposal, USAID funds were to be used to provide humanitarian assistance and information to dissidents and their families. After we questioned these purchases, the grantee's executive director wrote us that he intended to submit corrections to USAID for some of these charges. In conjunction with the USAID Assistant Administrator for Latin America and the Caribbean and the Cuba program director, we referred the problems we identified at the three grantees discussed in table 7 to the USAID Office of Inspector General. An investigator said that the Office of Inspector General was investigating these three grantees. Based on our limited review, 7 of the 10 grantees appear to have established systematic procedures for documenting, tracking, and reporting on the use of grant funds. These 7 grantees accounted for about 91 percent ($47.2 million) of the awards received by the 10 organizations that we reviewed (see footnote 44). The operating procedures at some of these 7 grantees likely reflect pre-existing internal controls rather than USAID monitoring and oversight. These grantees also had detailed records of their respective activities. 
For example, one grantee maintained an inventory and signed receipts for humanitarian shipments to Cuba, and dated, handwritten notes of telephone calls or other communications to verify receipt of shipments. Another grantee maintained detailed records of the methods used, quantities of printed material transmitted, and copies of communications as evidence of receipt. Agencies and grantees face an operating environment in Cuba that presents monitoring and evaluation challenges. USAID has conducted some program evaluation, but has not routinely collected program outcome information from its grantees. Instead, USAID and its grantees have largely focused on measuring and reporting program activities. In 2005–2006, however, USAID began to focus on collecting better information about the results of U.S. democracy assistance. The operating environment in Cuba poses a range of challenges to monitoring and evaluating U.S.-funded democracy assistance. Challenges include:

- The lack of USAID presence in Cuba and the inability of the USAID staff to travel there, because the Cuban government actively opposes U.S. democracy assistance.

- The lack of operational coordination and routine communication links between State and USAID (as previously discussed).

- Grantee reluctance to share information with other grantees because of concerns about potential Cuban government infiltration of grantee operations.

- USAID and grantee concerns that sensitive agency records could be disclosed in response to Freedom of Information Act requests (as previously discussed).

- Potential danger, cited by U.S. officials and grantees, to dissidents and activists in Cuba if sensitive information was released or disclosed.

The USAID Cuba program director said that in this environment, strict cause and effect relationships between the USAID program and changes in Cuban civil society are difficult to establish and document. Compared with activities in Cuba, off-island activities, such as those at U.S. 
universities, are generally easier to carry out, monitor, and evaluate, according to USAID officials. However, off-island activities have a less evident and slower impact on Cuban society and politics. USAID's Cuba program office and its grantees have conducted some evaluations of U.S. assistance, but these studies have been limited in number and scope. USAID officials also have informally interviewed Cuban dissidents and émigrés about the receipt and effectiveness of U.S. assistance, but they did not systematically document, compile, or analyze the results of these interviews. Although USINT has assessed some independent libraries in Cuba, USAID has not received its reports. USAID and its grantees have conducted some evaluations of U.S. democracy assistance for Cuba (see table 8). Generally, however, these efforts have not reflected a systematic approach to program evaluation, although some benefits resulted. The USAID program director also has conducted a number of informal interviews with Cuban dissidents and members of independent Cuban NGOs able to travel outside Cuba. Although limited by Cuban government controls on travel, these opportunities provided USAID with some ability to verify the receipt and impact of grantee assistance directly, according to USAID officials. For example, the program director was able to verify that some dissidents had received, and continued to use, computers shipped to the island. In other cases, USAID has relied on USINT reporting to verify receipt of such assistance. However, these interviews and discussions were conducted on a sporadic basis, and USAID officials did not systematically document, compile, or analyze the results. USINT officials have done some monitoring of assistance (books, equipment, and supplies) distributed to about 100 independent NGOs in Havana. (USINT employees distributed this assistance, which USINT and USAID grantees had purchased.) 
As we observed during our fieldwork, USINT employees kept records of unannounced inspection visits to these organizations and submitted summary reports to USINT officials. Based on these reports, USINT officials have recommended increases or decreases in the level and type of assistance provided to these NGOs. Although there have been documented losses at some of these organizations, USINT officials said such losses were unavoidable in Cuba and that their policy is to continue providing some limited assistance to these NGOs. As discussed previously, however, USAID has not received these reports. USAID and its grantees have not routinely collected and reported data and other information about the results or impact of the democracy assistance they have provided. USAID’s reports have focused primarily on measures of program activities. The Cuba program office’s accomplishment reports, updated on a monthly basis, consolidate quantitative data about activities and related information submitted quarterly by grantees, such as the number of books, newsletters, and other informational materials sent to the island; the number of reports published by Cuban independent journalists; and instances where the international community denounced Cuban government human rights violations. The Cuba program’s annual operational plan takes a similar approach. USAID officials said that data about shipments of books, newsletters, and other informational materials provide a measure of the flow of information to Cuba. The officials also said that data about the number of independent journalists published outside Cuba on the Internet (or in hard copy) provide a measure of the flow of information from Cuba. However, these reports and data do not provide an assessment of the impact or contribution of these activities in the context of helping to build civil society in Cuba (part of the USAID Cuba program’s strategic objective) or the effectiveness of U.S. assistance in achieving broader U.S. 
democracy goals and objectives for Cuba. In addition to measures of program activities, USAID officials point to the total number of nonviolent acts of civil resistance in Cuba, as reported in annual Steps to Freedom reports, as a proxy indicator for measuring the positive impact of U.S. democracy assistance. Rich in detail about Cuba’s dissidents, the reports show that total nonviolent acts of civil resistance increased from about 600 acts in 2001 to about 1,800 in 2004. However, the reports show that, between 2002 and 2004, the number of less intense nonviolent acts of civil resistance increased while the number of more intense acts declined. In commenting on a draft of our report, State officials said that this decline coincided with the Cuban government’s 2003 crackdown on dissidents. Annual reports on human rights conditions in Cuba prepared by Amnesty International, Human Rights Watch, and State covering the same period (2001–2004) portray a more complex and ambiguous human rights situation than the generally positive trend shown by the indicator in the Steps to Freedom reports. Grantees’ quarterly reports to USAID are the main vehicle for reporting performance information. The quarterly reports submitted by 10 grantees in 2005 consistently provided data about program activities. However, these reports generally did not provide a focused analysis of program accomplishments. Only two organizations consistently identified program results as part of their quarterly reporting. For example, one grantee’s reports discussed the results of assistance activities in the context of the broader Cuban pro-democracy movement and short- and long-term civil society goals. USAID officials said that they had repeatedly emphasized to grantees the importance of including information about project results in their reporting. 
The USAID Assistant Administrator for Latin America and the Caribbean and the Cuba program director said that the director had discussed this topic at grantee meetings held several times each year. Since 2005, USAID's Cuba program has taken several steps to improve data collection and its communication with grantees. These include:

- Increasing staff expertise and meeting more regularly with grantees. In 2005, a staff member with experience in grant management and performance evaluation joined USAID's Cuba office; this staff member developed, and began using, a set of structured questions to gather and record grantee performance information. This new staff member also began to meet and regularly communicate with grantees. However, the staff member said that the office's small number of staff makes effective program monitoring and evaluation challenging.

- Improving information in grantees' quarterly reports. The Cuba program acknowledged that quarterly reports submitted by grantees have not included important information about program activities and results. Several grantees said that they were unsure of what evaluation-related information to include in reports and had received relatively little guidance from USAID until recently. According to USAID, smaller grantees have experienced greater challenges in this regard because of their lack of experience working with USAID and because of their limited English proficiency. USAID officials acknowledged grantees had not been provided formal training in program evaluation. In July 2006, USAID's Cuba program office e-mailed grantees a more detailed description of the types of data and other information to include in their quarterly reports, as part of a series of e-mails to remind grantees of USAID laws, regulations, and policies. USAID staff said that they are working with grantees to improve the quality of their quarterly reports and that they intend to issue additional written guidance. 
- Requiring intermediate program evaluations. In 2006, recognizing that the frequent use of agreement modifications and extensions had postponed end-of-project evaluations for many grantees, the Cuba program office decided to include terms in future grants and cooperative agreements requiring grantees to submit interim evaluations when requesting significant project modifications or extensions.

In the context of recent recommendations to increase funding for democracy assistance in Cuba, we conclude that the U.S. government's efforts to support democratic political change face several significant challenges. Some of these challenges stem from the difficult operating environment in Cuba, while others are the result of weaknesses in the managerial oversight the program has received to date. Recently, however, USAID has taken some steps to establish improved policies and reporting procedures. Effectively delivering democracy-related assistance to Cuba will require a number of improvements, including better communication between State and USAID regarding day-to-day activities, particularly in Cuba. In addition, a number of the basic elements required for effective grant management and oversight need to be strengthened. These include ensuring that effective preaward reviews are performed, strengthening internal controls at the grantee level, and identifying and monitoring at-risk grantees. Further, agency officials need to inform and, as needed, train grantees about their shared responsibilities in collecting information that will permit better monitoring and evaluation of program outcomes. Ultimately, better program oversight can help to assure that resources are responsibly and effectively utilized and grantees are in compliance with applicable laws and regulations. 
We recommend that the Secretary of State and the USAID Administrator work jointly to improve communication among State bureaus in Washington, D.C.; USINT in Cuba; and USAID offices responsible for implementing U.S. democracy assistance, recognizing that USINT has limited resources but a crucial role in providing and monitoring democracy assistance. We also recommend that the USAID Administrator direct the appropriate bureaus and offices to improve management of grants related to Cuba by taking the following actions:

- Improving the timeliness of preaward reviews to ensure they are completed prior to the awarding of funds.

- Improving the timeliness and scope of follow-up procedures to assist in tracking and resolving issues identified during the preaward reviews.

- Requiring that grantees establish and maintain adequate internal control frameworks, including developing approved implementation plans for the grants.

- Providing grantees specific guidance on permitted types of humanitarian assistance and cost-sharing, and ensuring that USAID staff monitors grantee expenditures for these items.

- Developing and implementing a formal and structured approach to conducting site visits and other grant monitoring activities, and utilizing these activities to provide grantees with guidance and monitoring.

We received comments from State and USAID, which are reprinted in appendixes II and III, respectively. State and USAID appreciated the professionalism with which we conducted our review and were gratified that we were able to report that dissidents in Cuba appreciated U.S. democracy assistance, and found this assistance to be useful in their work. In response to our recommendation, State said that, consistent with the Secretary of State's recent foreign assistance reforms, it was taking steps to improve interagency communication and coordination for Cuba democracy assistance. 
These steps included providing USAID officials regular access to classified communications with USINT in Havana and State, and implementing regular meetings between DRL, the Bureau of Western Hemisphere Affairs' Office of Cuban Affairs, the Office of the Cuba Transition Coordinator, and USAID. State also commented that, within the constraints imposed by implementing a democracy-building program in Cuba, DRL and the Office of Cuban Affairs would work closely with all grantees to identify creative ways to document the impact of Cuba programs. These new methods of documentation would attempt to measure impact beyond direct outputs (e.g., items delivered or persons trained). USAID said it was taking actions to improve its performance in managing, monitoring, and evaluating democracy assistance for Cuba. These actions would include better documentation of USAID grantee monitoring, improved interagency communications, and a review of all aspects of the USAID procurement system as it relates to the Cuba program. Subsequent to submitting its written comments, USAID offered additional comments regarding our recommendations. USAID concurred with our first, second, third, and fifth recommendations, as well as with the part of our fourth recommendation that USAID should ensure that its staff monitors grantee expenditures. USAID concurred, in part, with our recommendation to provide grantees specific guidance on permitted types of humanitarian assistance and cost-sharing. To avoid potentially making grant documents unwieldy and difficult to use, USAID plans to continue to reference additional regulatory material regarding allowable costs and other matters in its grants. However, USAID will review its standard grant provisions to ensure that grantees are provided clear guidance regarding how to access referenced regulatory materials. 
USAID also is considering providing Cuba program grantees with grants management technical assistance, as well as grant and regulatory documents, in Spanish. State and USAID provided technical comments on a draft of this report, which we have incorporated where appropriate. In its technical comments, USAID raised issues regarding some of our findings. However, we have worked with agency officials to resolve or clarify these matters. We will send copies of this report to the Secretary of State, the USAID Administrator, appropriate congressional committees, and other interested parties. Copies will be made available to others upon request. In addition, this report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact David Gootnick at (202) 512-4128 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. This report examines (1) the roles and objectives of the agencies implementing U.S. democracy assistance targeted at Cuba and the characteristics and selection of the grantees receiving Department of State (State) and U.S. Agency for International Development (USAID) awards; (2) the types, amounts, beneficiaries, and methods used to deliver assistance for selected grantees in 2005; (3) USAID's monitoring and oversight of these grantees; and (4) the availability of data to evaluate whether U.S. assistance has achieved its goals. During our review, we conducted fieldwork at USAID, State, and the Departments of Treasury (Treasury) and Commerce (Commerce) in Washington, D.C.; we also conducted work at the offices of selected grantees in Washington, D.C., and Miami, Florida. 
At these locations, we analyzed key records and interviewed agency officials and grantees to obtain an understanding of the processes used to select grantees, monitor their performance, assess the disbursement of funds, and evaluate project results. We also discussed Cuba democracy assistance with officials at the National Endowment for Democracy and the Council on Foreign Relations in Washington, D.C. We conducted fieldwork at the U.S. Interests Section (USINT) in Havana, Cuba, from late June to early July 2006, where we interviewed relevant U.S. government officials and observed their activities, such as sorting, delivering, and monitoring assistance. We interviewed several leading dissidents and human rights activists—including independent librarians and journalists—and family members of political prisoners. We also interviewed foreign-embassy officials. We conducted our work from August 2005 through September 2006 in accordance with generally accepted government auditing standards. To identify the roles and objectives of the implementing agencies, we analyzed (1) U.S. laws authorizing democracy assistance to Cuba and related records, such as agency officials' statements and committee reports; (2) State and USAID policy and strategy records, such as agency strategic and performance plans, budget requests, and bureau and mission performance plans; and (3) the two reports of the Commission for Assistance to a Free Cuba and related records. We also interviewed USAID, Treasury, Commerce, and State officials (including State's Cuba Transition Coordinator) about the objectives and roles of their agencies in providing assistance to Cuba. 
To examine the characteristics and selection of the grantees that received State or USAID awards in 1996–2005, we reviewed key grantee and agency records, including annual reports, proposals, and Web sites for the 34 organizations that received State or USAID awards during that period; and the grants and cooperative agreements, agreement modifications, and related agency records for the 44 awards State and USAID made during that period. We analyzed this information to determine (1) the types and location of organizations that received awards; (2) whether these organizations had previously worked on democracy promotion activities; (3) the methods State and USAID used to identify and evaluate assistance proposals; and (4) selected characteristics of the awards, such as their initial amount and length, cost-sharing requirements, and any postaward modifications. To identify the types and amounts of assistance provided by grantees, beneficiaries of this assistance, and grantees' delivery methods, we selected a judgmental sample of 10 grantees with active awards in 2005 (see table 9). These 10 grantees were implementing 14 grants or cooperative agreements in 2005 with a total estimated budget of nearly $52 million. In selecting the grantees, we considered a range of factors to ensure our sample included a mix of large, medium, and small awards; included a mix of types of nongovernmental organizations (NGO); and covered the range of U.S. democracy assistance targeted at Cuba. We focused our detailed analysis on USAID's grantees and agreements because State's grants were not awarded until mid-2005. We also considered the length of time grantees had been providing U.S. democracy assistance for Cuba to ensure grantees had several years' experience working with USAID. The resulting sample accounts for over 76 percent of the State and USAID awards active in 2005 for U.S. democracy assistance targeted at Cuba. 
To identify these organizations’ program objectives, we analyzed grantee proposals, grants or cooperative agreements, internal authorization memorandums, and modification of assistance forms. We obtained grantees’ quarterly narrative and financial reports to USAID to identify grantees’ reported activities and to quantify the types and amounts of assistance these grantees reported sending to Cuba. To corroborate these data and to develop an understanding of grantees’ delivery methods, we interviewed representatives of these organizations in Washington, D.C., and Miami and, when possible, observed their activities. We also reviewed internal documents provided by these grantees, including procedures manuals, tracking databases and reports, and other records. We developed an electronic database to track and analyze selected terms of the agreements in our sample, including objectives, award amounts and dates, cost-sharing amounts, modifications, sub-grant agreements, and reported activities. To test the general reliability of quantities of assistance recorded for our sample, we compared these data with other documents provided by grantees (e.g., shipment logs, tracking databases, and internal reports), documents submitted by USAID, and data provided by USINT. We also used interviews with grantee representatives to corroborate these data. Based on these general comparisons, we determined grantees’ records were sufficiently reliable for the purposes of this report. To identify the types and amounts of assistance provided by grantees in our sample, we used our electronic database to track and summarize grantees’ individual activities, which we then used to categorize assistance types and amounts. In addition, we interviewed representatives of the grantees in our sample and select beneficiaries in Havana about their experiences. 
To assess USAID's management and internal control for monitoring grantees, we reviewed grants and cooperative agreements, interviewed agency officials and select grantees, reviewed USAID and grantee policies and procedure manuals, performed walk-throughs of grantee disbursement processes, and reviewed grantee invoices and supporting documentation. For 10 grantees, we reviewed the internal controls and related residual fiscal accountability risk. Based on our initial reviews, we performed additional expenditure testing for 3 grantees that appeared to have poor control environments. To assess grantees' potential residual fiscal accountability risk, we reviewed the adequacy of their internal controls according to the criteria contained in our Standards for Internal Control in the Federal Government. Our procedures did not specifically address whether grantees were complying with federal laws and regulations. However, grantees expending more than $500,000 in federal funds annually are subject to the Single Audit Act. Under this act, these grantees must receive an annual audit, which includes determining whether the grantee has complied with laws, regulations, and the provisions of contracts or grant agreements that may have a direct and material effect on each of its major programs. We focused our detailed analysis on USAID's grant oversight and did not perform similar detailed analysis of State's grant oversight because State's grants were not awarded until mid-2005. We performed a detailed review of 5 of 14 grant agreements in our sample. We selected these agreements because they represented the range of Cuba program objectives and were signed over an 8-year period between 1997 and 2005. A USAID official confirmed that all grant and cooperative agreements use standard language from document-generating software. The standard language is modified periodically under the direction of USAID's Office of Acquisition and Assistance. 
To assess the monitoring, reporting, and evaluation of program performance information, we examined USAID, Office of Management and Budget, and other federal government policies and guidance. We also reviewed our previous reports and expert panel reports on grant accountability to identify lessons learned. To better understand the challenges of evaluating democracy assistance, we reviewed relevant literature. We analyzed USAID Cuba program documents, grantee agreements, and modifications to identify guidance provided on reporting performance data, and we analyzed grantee quarterly reports to identify how they reported program achievements. We also assessed evaluations of U.S. assistance to Cuba, such as one grantee's evaluation of some independent NGOs, the PricewaterhouseCoopers evaluation of the USAID Cuba program, and associated program documents. We interviewed USAID Cuba program officials concerning their current and past program evaluation practices; program grantees in Miami and Washington, D.C., to identify the instructions and feedback they have received concerning program reporting and evaluation; USINT officials concerning their role in monitoring and reporting program performance information; and beneficiaries in Cuba about their views of the effectiveness of the U.S. democracy assistance they had received. We focused our detailed analysis on USAID's program effectiveness because State's grants were not awarded until mid-2005. Both State and USAID officials provided sensitivity reviews of a draft of this report, and we followed their direction in removing potentially sensitive or classified information. In addition to the contacts named above, Phillip Herr, Michael Rohrback, Bonnie Derby, Elizabeth Guran, Keith H. Kronin, Todd M. Anderson, Cara Bauer, Lynn Cothern, and Reid Lowe made key contributions to this report. Ernie Jackson, Lauren S. Fassler, and Arthur L. James, Jr., provided technical assistance.
U.S. law authorizes aid for nonviolent democratic change in Cuba. From 1996-2005, the Department of State (State) and the U.S. Agency for International Development (USAID) awarded grants totaling $74 million to support such change. A presidential commission recently recommended increasing funding for these efforts. This report examines (1) agency roles in implementing this aid and selection of grantees; (2) types of aid, recipients, and methods of delivery reported in 2005; (3) oversight of grantees; and (4) data about the impact of this aid. To address these objectives, we analyzed the activities and internal controls of, and USAID's oversight and management of, 10 grantees with about 76 percent (in dollars) of total active awards for Cuba democracy aid. Our review focused on USAID because State's first awards were not made until mid-2005. State and USAID implement U.S. democracy assistance for Cuba through an interagency process. However, communication between these agencies was sometimes ineffective, most critically about grantees' on-island activities. About 95 percent ($62 million) of USAID's total awards were made in response to unsolicited proposals; however, after 2004, both USAID and State used formal competition to select grantees. Dissidents in Havana said that U.S. assistance provided moral support and enhanced their ability to work for democracy. In 2005, the 10 grantees we reviewed delivered humanitarian and other aid, training, and information to human rights and political activists, independent librarians and journalists, and political prisoners and their families. Assistance shipped to Cuba included food, medicine, clothing, office equipment and supplies, shortwave radios, books, and newsletters. Grantees also conducted international advocacy for human and workers' rights in Cuba and planned for a future democratic transition. Given the Cuban government's repressive policies and opposition to U.S. 
democracy assistance, grantees employed a range of discreet delivery methods that varied in terms of security, flexibility, and cost. The U.S. Interests Section in Havana, Cuba, a State post, has played an important role in distributing the aid provided by some grantees. Internal controls—over both the awarding of Cuba program grants and the oversight of grantees—do not provide adequate assurance that the grant funds are being used properly and that grantees are in compliance with applicable laws and regulations. Preaward reviews of grantees were not always completed before awards, and USAID did not follow up adequately after awards to correct weaknesses in grantee policies, procedures, and accounting systems identified by these reviews. In addition, standardized grant agreements did not provide sufficient details to support program accountability or the correction of the weaknesses identified by preaward reviews. The Cuba program office also did not adequately manage at-risk grantees and lacked formal review or oversight procedures for monitoring grantee activities. We performed limited testing for 10 grantees and identified questionable expenditures and significant internal control weaknesses with 3 grantees that USAID had not detected. The Cuban government's active opposition to U.S. democracy assistance presents a challenging operating environment for State and USAID. Although USAID and its grantees have some evaluation and anecdotal information about program results, they have focused on measuring and reporting program activities, such as the volume of food, medicine, or books sent to Cuba. USAID recently took several steps to collect better information about program results, such as increasing staff expertise and meeting more regularly with grantees.
As we noted in GAO's strategic plan, the United States and other nations face increasingly diffuse threats. In the future, potential adversaries are more likely to strike vulnerable civilian or military targets in nontraditional ways to avoid direct confrontation with our military forces on the battlefield. The President's December 2000 national security strategy states that porous borders, rapid technological change, greater information flow, and the destructive power of weapons now within the reach of small states, groups, and individuals make such threats more viable and endanger our values, way of life, and the personal security of our citizens. Hostile nations, terrorist groups, transnational criminals, and even individuals may target American people, institutions, and infrastructure with weapons of mass destruction and outbreaks of infectious disease. They may attempt to disrupt or destroy our information systems through cyber warfare. International criminal activities such as money laundering, arms smuggling, and drug trafficking can undermine the stability of social and financial institutions and the health of our citizens. As we witnessed in the tragic events of last week, some of the emerging threats can produce mass casualties. Others can lead to mass disruption of critical infrastructure and can hold serious implications for both our domestic and the global economy, as we saw when the New York Stock Exchange reopened for trading this past Monday and the Dow Jones Industrial Average fell more than 600 points. Terrorist attacks also could compromise the integrity or delivery of water or electricity to our citizens, compromise the safety of the traveling public, and undermine the soundness of government and commercial data systems supporting a myriad of activities. A basic and fundamental role of the government under our Constitution is to protect America from both foreign and domestic threats. 
The government must be able to prevent and deter threats to our homeland as well as detect impending danger before attacks or incidents occur. However, it may not be possible to prevent, deter, and detect every threat, so steps should be taken to harden potential targets. We also must be ready to manage the crises and consequences of an event, to treat casualties, reconstitute damaged infrastructure, and move the nation forward. Finally, the government must be prepared to retaliate against the responsible parties in the event of an attack. Now I would like to turn to what the government could do to make our homeland more secure. First, I will discuss the need for clearly defined and effective leadership with a clear vision of what needs to be accomplished. Second, I will address the need for a coordinated national strategy and comprehensive threat assessment. Yesterday, we issued a report that discusses challenges confronting policymakers in the war on terrorism and offered a series of recommendations. One of these recommendations is that the government needs more clearly defined and effective leadership to develop a strategy for combating terrorism, to oversee development of a new national threat and risk assessment, and to coordinate implementation among federal agencies. Similar leadership also is needed to address the broader issue of homeland security. Specifically, a national focal point will be critical to articulate a vision for ensuring the security of the American homeland and to develop and implement a strategy to realize that vision. The entity that functions as the focal point should be dedicated to this function. In addition, the person who heads this entity should be dedicated full-time to this effort and consideration should be given to a term appointment in order to enhance continuity. 
In testimony on March 27, 2001, we stated that overall leadership and management efforts to combat terrorism are fragmented because there is no single focal point managing and overseeing the many functions conducted by more than 40 different federal departments and agencies. Also, our past work in combating terrorism has shown that the multitude of federal programs requires focus and attention to minimize redundancy of effort and eliminate confusion within the federal government and at the state and local level. Homeland security will rely on the concerted efforts of scores of agencies, which may exceed the number in the fight against terrorism. Consequently, the need for overall leadership is even more critical. At present, we do not have a national strategy specifically for ensuring homeland security. Such a strategy must establish the parameters of homeland security and contain explicit goals and objectives. It will need to be developed in partnership with Congress, the executive branch, state and local governments, and the private sector (which owns much of the critical infrastructure that can be targeted). Without such a strategy, efforts may be fragmented and cause confusion, duplication of effort, and ineffective alignment of resources with strategic goals. Consequently, clarifying the roles and responsibilities of the various levels of government and the private sector will be a critical function for the entity that is given oversight responsibility for homeland security efforts. The United States does not have a national threat and risk assessment to help guide federal programs for homeland security. A threat and risk assessment is a decision-making tool that helps to define the threats, to evaluate the associated risk, and to link requirements to program investments. 
In our March 2001 testimony on combating terrorism, we stated that an important first step in developing a strategy for combating terrorism is to conduct a national threat and risk assessment to define and prioritize requirements. Combating terrorism is a major component of homeland security, but it is not the only one. It is essential that a national threat and risk assessment be undertaken that will address the full range of threats to the homeland. Results from hearings and other studies also underscore the importance of a national threat and risk assessment. For example, in a July 2001 letter to the vice president from several senators, the senators stated that federal programs to combat domestic terrorism are being initiated and expanded without the benefit of a sound national threat and risk assessment process. In a May 2001 Center for Strategic and International Studies' report on homeland defense, the authors stated that an annual threat assessment would provide federal planners with the basis for assessing the emerging risk of attacks and developing an integrated analysis structure for planning. We recognize that a national-level threat and risk assessment will not be a panacea for all the problems in providing homeland security. However, we believe that such a national threat and risk assessment could provide a framework for action and facilitate multidisciplinary and multiorganizational participation in planning, developing, and implementing programs to enhance the security of our homeland. Given the tragic events of Tuesday, September 11, 2001, a comprehensive national-level threat and risk assessment that addresses all threats has become an urgent imperative. Now, I would like to discuss some elements that may need to be included in the development of the national strategy and a means to assign roles to federal, state, and local governments and the private sector. 
Three essential elements provide a basis for developing a national strategy: a risk assessment, vulnerability analysis, and infrastructure criticality analysis. This approach, developed by the Department of Defense for its antiterrorism program, could be an instructive model in developing a homeland security strategy. First, our nation must thoroughly assess the threats posed by nations, groups, or individuals and, to the extent possible, eliminate or reduce the threat. Second, we have to identify the vulnerabilities and weaknesses that exist in our infrastructure, operations, planning, and exercises and then identify steps to mitigate those risks. Third, we must assure our ability to respond to and mitigate the consequences of an attack. Given time and resource limitations, we must identify the most critical aspects of our infrastructure and operations that require the most immediate attention. To be comprehensive, our strategy should include steps to reduce our vulnerability to threats (for example, by hardening targets to minimize the damage from an attack), to use intelligence assets to identify threats, to stop attacks before they occur, and to manage the consequences of an incident. In addition, the strategy should incorporate mechanisms to assess resource utilization and program performance as well as provide for training, exercises, and equipment to respond to tragic events such as those that occurred last week. Because we may not be able to eliminate all vulnerabilities within our borders, prevent all threat activity, or be completely prepared to respond to all incidents, our strategy should focus finite national resources on areas of greatest need. Once a strategy is developed, all levels of government and the private sector will need to understand and prepare for their defined roles under the strategy. 
While the federal government can assign roles to federal agencies under the strategy, it will need to reach consensus with the other levels of government and with the private sector on their roles. In the 1990s, the world was concerned about the potential for computer failures at the start of the new millennium, an issue that came to be known as Y2K. The Y2K task force approach may offer a model for developing the public-private partnerships necessary under a comprehensive homeland security strategy. A massive mobilization with federal government leadership was undertaken in connection with Y2K, which included partnerships with the private sector and international governments and effective communication to implement any needed corrections. The value of federal leadership, oversight, and partnerships was repeatedly cited as a key to success in addressing Y2K issues at a Lessons Learned summit held last year. Developing a homeland security plan may require a similar level of leadership, oversight, and partnerships with nearly every segment of American society—including individual U.S. citizens—as well as with the international community. In addition, as in the case of our Y2K efforts, Congress needs to take an active, ongoing, and crosscutting approach to oversight in connection with the design and implementation of the homeland security strategy. We at GAO have completed several congressionally requested efforts on numerous topics related to homeland security. I would like to briefly summarize some of the work that we have done in the areas of combating terrorism, aviation security, transnational crime, protection of critical infrastructure, and public health. 
Given concerns about the preparedness of the federal government and state and local emergency responders to cope with a large-scale terrorist attack involving the use of weapons of mass destruction, we have reviewed the plans, policies, and programs for combating domestic terrorism involving weapons of mass destruction. Our report, Combating Terrorism: Selected Challenges and Related Recommendations, was issued yesterday and updates our extensive evaluations in recent years of federal programs to combat domestic terrorism and protect critical infrastructure. Progress has been made since we first began looking at these issues in 1995. Interagency coordination has improved, and interagency and intergovernmental command and control now is regularly included in exercises. Agencies also have completed operational guidance and related plans. Federal assistance to state and local governments to prepare for terrorist incidents has resulted in training for thousands of first responders, many of whom went into action at the World Trade Center and at the Pentagon on September 11, 2001. However, some key elements remain incomplete. As a result, we recommended that the President designate a single focal point with responsibility and authority for all critical functions necessary to provide overall leadership and coordination of federal programs to combat terrorism. The focal point should oversee a national-level threat assessment on likely weapons of mass destruction that might be used by terrorists and lead the development of a national strategy to combat terrorism and oversee its implementation. Furthermore, we recommended that the Assistant to the President for Science and Technology complete a strategy to coordinate research and development to improve federal capabilities and avoid duplication. Now let me turn to aviation security. 
Since 1996, we have issued numerous reports and testimonies on weaknesses that we found in the commercial aviation security system. For example, we reported that airport passenger screeners do not perform well in detecting dangerous objects, and Federal Aviation Administration tests showed that as testing gets more realistic—that is, as tests more closely approximate how a terrorist might attempt to penetrate a checkpoint—screener performance declines significantly. In addition, we were able to penetrate airport security ourselves by having our investigators create fake credentials from the Internet and declare themselves law enforcement officers. They were then permitted to bypass security screening and go directly to waiting passenger aircraft. In 1996, we outlined a number of steps that required immediate action, including identifying vulnerabilities in the system; developing a short-term approach to correct significant security weaknesses; and developing a long-term, comprehensive national strategy that combines new technology, procedures, and better training for security personnel. Federal critical infrastructure-protection initiatives have focused on preventing mass disruption that can occur when information systems are compromised because of computer-based attacks. Such attacks are of growing concern due to the nation's increasing reliance on interconnected computer systems that can be accessed remotely and anonymously from virtually anywhere in the world. In accordance with Presidential Decision Directive 63, issued in 1998, and other information-security requirements outlined in laws and federal guidance, an array of efforts has been undertaken to address these risks. However, progress has been slow. 
For example, federal agencies have taken initial steps to develop critical infrastructure plans, but independent audits continue to identify persistent, significant information security weaknesses that place virtually all major federal agencies' operations at high risk of tampering and disruption. In addition, while federal outreach efforts have raised awareness and prompted information sharing among government and private sector entities, substantive analysis of infrastructure components to identify interdependencies and related vulnerabilities has been limited. An underlying deficiency impeding progress is the lack of a national plan that fully defines the roles and responsibilities of key participants and establishes interim objectives. Accordingly, we have recommended that the Assistant to the President for National Security Affairs ensure that the government's critical infrastructure strategy clearly define specific roles and responsibilities, develop interim objectives and milestones for achieving adequate protection, and define performance measures for accountability. The administration currently is reviewing and considering adjustments to the government's critical infrastructure-protection strategy that may address this deficiency. On September 20, 2001, we publicly released a report on international crime control and reported that individual federal entities have developed strategies to address a variety of international crime issues, and for some crimes, integrated mechanisms exist to coordinate efforts across agencies. However, we found that without an up-to-date and integrated strategy and sustained top-level leadership to implement and monitor the strategy, the risk is high that scarce resources will be wasted, overall effectiveness will be limited or not known, and accountability will not be ensured. 
We recommended that the Assistant to the President for National Security Affairs take appropriate action to ensure sustained executive-level coordination and assessment of multiagency federal efforts in connection with international crime. Some of the individual actions we recommended were to update the existing governmentwide international crime threat assessment, to update or develop a new International Crime Control Strategy to include prioritized goals as well as implementing objectives, and to designate responsibility for executing the strategy and resolving any jurisdictional issues. The spread of infectious diseases is a growing concern. Whether a disease outbreak is intentional or naturally occurring, the public health response to determine its causes and contain its spread is the same. Because a bioterrorist event could look like a natural outbreak, bioterrorism preparedness rests in large part on public health preparedness. In our review last year of the West Nile virus outbreak in New York, we found problems related to communication and coordination among and between federal, state, and local authorities. Although this outbreak was relatively small in terms of the number of human cases, it taxed the resources of one of the nation’s largest local health departments. In 1999, we reported that surveillance for important emerging infectious diseases is not comprehensive in all states, leaving gaps in the nation’s surveillance network. Laboratory capacity could be inadequate in any large outbreak, with insufficient trained personnel to perform laboratory tests and insufficient computer systems to rapidly share information. Earlier this year, we reported that federal agencies have made progress in improving their management of the stockpiles of pharmaceutical and medical supplies that would be needed in a bioterrorist event, but that some problems still remained. 
There are also widespread concerns that hospital emergency departments generally are not prepared in an organized fashion to treat victims of biological terrorism and that hospital emergency capacity is already strained, with emergency rooms in major metropolitan areas routinely filled and unable to accept patients in need of urgent care. To improve the nation's public health surveillance of infectious diseases and help ensure adequate public protection, we recommended that the Director of the Centers for Disease Control and Prevention lead an effort to help federal, state, and local public health officials achieve consensus on the core capacities needed at each level of government. We advised that consensus be reached on such matters as the number and qualifications of laboratory and epidemiological staff as well as laboratory and information technology. Based on the tragic events of last week and our observations over the past several years, there are several key questions that need to be asked in addressing homeland security: 1. What are our vision and our national objectives to make the homeland more secure? 2. What essential elements should constitute the government's strategy for securing the homeland? 3. How should the executive branch and the Congress be organized to address these issues? 4. How should we assess the effectiveness of any homeland security strategy implementation to address the spectrum of threats? Homeland security issues are now at the top of the national agenda as a result of last week's tragic events. The administration has taken and is taking a variety of actions to identify the parties responsible for last week's attacks, manage the related consequences, and mitigate future risks. Obviously, we have not been able to assess the nature and extent of this effort in the wake of last week's events. We expect that we will be asked to do so in due course. Finally, Mr. 
Chairman, as you might expect, we have been inundated with requests to brief congressional committees and members on our present and pending work and to undertake new work. We are working with the congressional leadership to be sure we have focused our limited resources on the most important issues. We look forward to working with you and others to focus our work and to identify options for how best to proceed while holding responsible parties accountable for desired outcomes. This concludes my prepared statement. I would be happy to answer any questions that you may have.

Combating Terrorism: Selected Challenges and Related Recommendations (GAO-01-822, Sept. 20, 2001).
Combating Terrorism: Actions Needed to Improve DOD Antiterrorism Program Implementation and Management (GAO-01-909, Sept. 19, 2001).
Combating Terrorism: Comments on H.R. 525 to Create a President's Council on Domestic Preparedness (GAO-01-555T, May 9, 2001).
Combating Terrorism: Observations on Options to Improve the Federal Response (GAO-01-660T, Apr. 24, 2001).
Combating Terrorism: Accountability Over Medical Supplies Needs Further Improvement (GAO-01-463, Mar. 30, 2001).
Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy (GAO-01-556T, Mar. 27, 2001).
Combating Terrorism: FEMA Continues to Make Progress in Coordinating Preparedness and Response (GAO-01-15, Mar. 20, 2001).
Combating Terrorism: Federal Response Teams Provide Varied Capabilities; Opportunities Remain to Improve Coordination (GAO-01-14, Nov. 30, 2000).
Combating Terrorism: Linking Threats to Strategies and Resources (GAO/T-NSIAD-00-218, July 26, 2000).
Combating Terrorism: Action Taken but Considerable Risks Remain for Forces Overseas (GAO/NSIAD-00-181, July 19, 2000).
Weapons of Mass Destruction: DOD's Actions to Combat Weapons Use Should Be More Integrated and Focused (GAO/NSIAD-00-97, May 26, 2000).
Combating Terrorism: Comments on Bill H.R. 4210 to Manage Selected Counterterrorist Programs (GAO/T-NSIAD-00-172, May 4, 2000).
Combating Terrorism: How Five Foreign Countries Are Organized to Combat Terrorism (GAO/NSIAD-00-85, Apr. 7, 2000).
Combating Terrorism: Issues in Managing Counterterrorist Programs (GAO/T-NSIAD-00-145, Apr. 6, 2000).
Combating Terrorism: Need to Eliminate Duplicate Federal Weapons of Mass Destruction Training (GAO/NSIAD-00-64, Mar. 21, 2000).
Combating Terrorism: Chemical and Biological Medical Supplies Are Poorly Managed (GAO/HEHS/AIMD-00-36, Oct. 29, 1999).
Combating Terrorism: Observations on the Threat of Chemical and Biological Terrorism (GAO/T-NSIAD-00-50, Oct. 20, 1999).
Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attack (GAO/NSIAD-99-163, Sept. 7, 1999).
Combating Terrorism: Analysis of Federal Counterterrorist Exercises (GAO/NSIAD-99-157BR, June 25, 1999).
Combating Terrorism: Observations on Growth in Federal Programs (GAO/T-NSIAD-99-181, June 9, 1999).
Combating Terrorism: Analysis of Potential Emergency Response Equipment and Sustainment Costs (GAO/NSIAD-99-151, June 9, 1999).
Combating Terrorism: Use of National Guard Response Teams Is Unclear (GAO/NSIAD-99-110, May 21, 1999).
Combating Terrorism: Issues to Be Resolved to Improve Counterterrorist Operations (GAO/NSIAD-99-135, May 13, 1999).
Combating Terrorism: Observations on Biological Terrorism and Public Health Initiatives (GAO/T-NSIAD-99-112, Mar. 16, 1999).
Combating Terrorism: Observations on Federal Spending to Combat Terrorism (GAO/T-NSIAD/GGD-99-107, Mar. 11, 1999).
Combating Terrorism: FBI's Use of Federal Funds for Counterterrorism-Related Activities (FYs 1995-98) (GAO/GGD-99-7, Nov. 20, 1998).
Combating Terrorism: Opportunities to Improve Domestic Preparedness Program Focus and Efficiency (GAO/NSIAD-99-3, Nov. 12, 1998).
Combating Terrorism: Observations on the Nunn-Lugar-Domenici Domestic Preparedness Program (GAO/T-NSIAD-99-16, Oct. 2, 1998).
Combating Terrorism: Observations on Crosscutting Issues (GAO/T-NSIAD-98-164, Apr. 23, 1998).
Combating Terrorism: Threat and Risk Assessments Can Help Prioritize and Target Program Investments (GAO/NSIAD-98-74, Apr. 9, 1998).
Combating Terrorism: Spending on Governmentwide Programs Requires Better Management and Coordination (GAO/NSIAD-98-39, Dec. 1, 1997).
Combating Terrorism: Federal Agencies' Efforts to Implement National Policy and Strategy (GAO/NSIAD-97-254, Sept. 26, 1997).
Combating Terrorism: Status of DOD Efforts to Protect Its Forces Overseas (GAO/NSIAD-97-207, July 21, 1997).
Chemical Weapons Stockpile: Changes Needed in the Management Structure of Emergency Preparedness Program (GAO/NSIAD-97-91, June 11, 1997).
Terrorism and Drug Trafficking: Responsibilities for Developing Explosives and Narcotics Detection Technologies (GAO/NSIAD-97-95, Apr. 15, 1997).
Federal Law Enforcement: Investigative Authority and Personnel at 13 Agencies (GAO/GGD-96-154, Sept. 30, 1996).
Terrorism and Drug Trafficking: Technologies for Detecting Explosives and Narcotics (GAO/NSIAD/RCED-96-252, Sept. 4, 1996).
Terrorism and Drug Trafficking: Threats and Roles of Explosives and Narcotics Detection Technology (GAO/NSIAD/RCED-96-76BR, Mar. 27, 1996).
Responses of Federal Agencies and Airports We Surveyed About Access Security Improvements (GAO-01-1069R, Aug. 31, 2001).
Aviation Security: Additional Controls Needed to Address Weaknesses in Carriage of Weapons Regulations (GAO/RCED-00-181, Sept. 29, 2000).
Aviation Security: Long-Standing Problems Impair Airport Screeners' Performance (GAO/RCED-00-75, June 28, 2000).
Aviation Security: Breaches at Federal Agencies and Airports (GAO/T-OSI-00-10, May 25, 2000).
Aviation Security: Vulnerabilities Still Exist in the Aviation Security System (GAO/T-RCED/AIMD-00-142, Apr. 6, 2000).
Aviation Security: Slow Progress in Addressing Long-Standing Screener Performance Problems (GAO/T-RCED-00-125, Mar. 16, 2000).
Aviation Security: FAA's Actions to Study Responsibilities and Funding for Airport Security and to Certify Screening Companies (GAO/RCED-99-53, Feb. 25, 1999).
Aviation Security: Progress Being Made, but Long-term Attention Is Needed (GAO/T-RCED-98-190, May 14, 1998).
Aviation Security: FAA's Procurement of Explosives Detection Devices (GAO/RCED-97-111R, May 1, 1997).
Aviation Safety and Security: Challenges to Implementing the Recommendations of the White House Commission on Aviation Safety and Security (GAO/T-RCED-97-90, Mar. 5, 1997).
Aviation Security: Technology's Role in Addressing Vulnerabilities (GAO/T-RCED/NSIAD-96-262, Sept. 19, 1996).
Aviation Security: Urgent Issues Need to Be Addressed (GAO/T-RCED/NSIAD-96-251, Sept. 11, 1996).
Aviation Security: Immediate Action Needed to Improve Security (GAO/T-RCED/NSIAD-96-237, Aug. 1, 1996).
Aviation Security: Development of New Security Technology Has Not Met Expectations (GAO/RCED-94-142, May 19, 1994).
Aviation Security: Additional Actions Needed to Meet Domestic and International Challenges (GAO/RCED-94-38, Jan. 27, 1994).
Information Security: Serious and Widespread Weaknesses Persist at Federal Agencies (GAO/AIMD-00-295, Sept. 6, 2000).
Critical Infrastructure Protection: Significant Challenges in Developing Analysis, Warning, and Response Capabilities (GAO-01-769T, May 22, 2001).
Critical Infrastructure Protection: Significant Challenges in Developing National Capabilities (GAO-01-232, Apr. 25, 2001).
Critical Infrastructure Protection: Challenges to Building a Comprehensive Strategy for Information Sharing and Coordination (GAO/T-AIMD-00-268, July 26, 2000).
Security Protection: Standardization Issues Regarding Protection of Executive Branch Officials (GAO/GGD/OSI-00-139, July 11, 2000).
Critical Infrastructure Protection: Comments on the Proposed Cyber Security Information Act of 2000 (GAO/T-AIMD-00-229, June 22, 2000).
Critical Infrastructure Protection: "I LOVE YOU" Computer Virus Highlights Need for Improved Alert and Coordination Capabilities (GAO/T-AIMD-00-181, May 18, 2000).
Critical Infrastructure Protection: National Plan for Information Systems Protection (GAO/AIMD-00-90R, Feb. 11, 2000).
Critical Infrastructure Protection: Comments on the National Plan for Information Systems Protection (GAO/T-AIMD-00-72, Feb. 1, 2000).
Critical Infrastructure Protection: Fundamental Improvements Needed to Assure Security of Federal Operations (GAO/T-AIMD-00-7, Oct. 6, 1999).
Critical Infrastructure Protection: The Status of Computer Security at the Department of Veterans Affairs (GAO/AIMD-00-5, Oct. 4, 1999).
Critical Infrastructure Protection: Comprehensive Strategy Can Draw on Year 2000 Experiences (GAO/AIMD-00-1, Oct. 1, 1999).
Information Security: The Proposed Computer Security Enhancement Act of 1999 (GAO/T-AIMD-99-302, Sept. 30, 1999).
Information Security: NRC's Computer Intrusion Detection Capabilities (GAO/AIMD-99-273R, Aug. 27, 1999).
Electricity Supply: Efforts Underway to Improve Federal Electrical Disruption Preparedness (GAO/RCED-92-125, Apr. 20, 1992).
West Nile Virus Outbreak: Lessons for Public Health Preparedness (GAO/HEHS-00-180, Sept. 11, 2000).
Food Safety: Agencies Should Further Test Plans for Responding to Deliberate Contamination (GAO/RCED-00-3, Oct. 27, 1999).
Emerging Infectious Diseases: Consensus on Needed Laboratory Capacity Could Strengthen Surveillance (GAO/HEHS-99-26, Feb. 5, 1999).
International Crime Controls: Sustained Executive Level Coordination of Federal Response Needed (GAO-01-629, Sept. 20, 2001).
Alien Smuggling: Management and Operational Improvements Needed to Address Growing Problem (GAO/GGD-00-103, May 1, 2000).
Criminal Aliens: INS' Efforts to Identify and Remove Imprisoned Aliens Continue to Need Improvement (GAO/T-GGD-99-47, Feb. 25, 1999).
Criminal Aliens: INS' Efforts to Remove Imprisoned Aliens Continue to Need Improvement (GAO/GGD-99-3, Oct. 16, 1998).
Immigration and Naturalization Service: Overview of Management and Program Challenges (GAO/T-GGD-99-148, July 29, 1999).
Illegal Immigration: Status of Southwest Border Strategy Implementation (GAO/GGD-99-44, May 19, 1999).
Illegal Immigration: Southwest Border Strategy Results Inconclusive; More Evaluation Needed (GAO/GGD-98-21, Dec. 11, 1997).
Naturalization of Aliens: INS Internal Controls (GAO/T-GGD-97-98, May 1, 1997).
Naturalization of Aliens: INS Internal Controls (GAO/T-GGD-97-57, Apr. 30, 1997).
Naturalization of Aliens: Assessment of the Extent to Which Aliens Were Improperly Naturalized (GAO/T-GGD-97-51, Mar. 5, 1997).
The United States now faces increasingly diverse threats that put great destructive power into the hands of small states, groups, and individuals. These threats range from cyber attacks on critical infrastructure to terrorist incidents involving weapons of mass destruction or infectious diseases. Efforts to combat these threats will involve federal agencies as well as state and local governments, the private sector, and private citizens. GAO believes that the federal government must address three fundamental needs. First, the government needs clearly defined and effective leadership with a clear vision to carry out and implement a homeland security strategy and the ability to marshal the necessary resources to get the job done. Second, a national homeland security strategy should be based on a comprehensive assessment of national threats and risks. Third, the many organizations that will be involved in homeland security must have clearly articulated roles, responsibilities, and accountability mechanisms. Any strategy for homeland security must reduce risk where possible, assess the nation's vulnerabilities, and identify the critical infrastructure most in need of protection. To be comprehensive, the strategy should include steps to use intelligence assets or other means to identify attackers and prevent attacks before they occur, harden potential targets to minimize the damage from an attack, and effectively manage the consequences of an incident.
Greenhouse gases can affect the climate by trapping energy from the sun that would otherwise escape the earth's atmosphere. Various human and natural activities emit greenhouse gases, with the production and burning of fossil fuels for energy contributing around two-thirds of man-made global emissions in 2005 (see fig. 1). The remaining third includes emissions from industrial processes, such as steel production and semiconductor manufacturing; agriculture, including emissions from the application of fertilizers and from ruminant farm animals; land use, such as deforestation and afforestation; and waste, such as methane emitted from landfills. Carbon dioxide is the most important of the greenhouse gases affected by human activity, accounting for about three-quarters of global emissions in 2005, the most recent year for which data were available. The 14 nations in our study differ greatly in the quantity of their greenhouse gas emissions, the sources of those emissions, and their per-capita incomes. Emissions in 2005 ranged from about 7 billion metric tons of carbon dioxide equivalent in China and 6 billion metric tons in the United States, to about 300 million metric tons in Malaysia. The contribution of various sectors to national emissions also differed across nations, with emissions from energy and industrial processes accounting for more than 70 percent of emissions in most industrialized nations and 20 percent or less of emissions in Indonesia and Brazil (see fig. 2). The Convention established a Secretariat that, among other things, supports negotiations, coordinates technical reviews of reports and inventories, and compiles greenhouse gas inventory data submitted by nations. The Secretariat has about 400 staff, located in Bonn, Germany, and its efforts related to national inventories are funded by contributions from the Parties.
For the Secretariat's core budget, Parties provided $52 million for the 2008-2009 budget cycle, of which the United States contributed $9.5 million ($3.76 million in 2008 and $5.75 million in 2009), excluding fees. The Convention requires Parties to periodically report to the Secretariat on their emissions of greenhouse gases resulting from human activities. Parties generally do not measure their emissions directly, because doing so is usually not feasible or cost-effective; instead, they estimate their emissions. To help Parties develop estimates, the IPCC developed detailed guidelines—which have evolved over time—describing how to estimate emissions. The general approach is to use statistics on activities, known as activity data, and estimates of the rate of emissions per unit of activity, called emissions factors. For example, to estimate emissions from passenger cars, the inventory preparers could multiply the number of gallons of gasoline consumed by all cars by the estimated quantity of emissions per gallon. The IPCC guidelines allow nations to use various methods depending on their data and expertise. In some cases, with adequate data, estimates of emissions can be as accurate as direct measurements, for example, for carbon dioxide emissions from the combustion of fossil fuels, which contribute the largest portion of emissions for many nations. The Parties agreed to the following five principles for inventories from Annex I nations:

Transparent. Assumptions and methodologies should be clearly explained to facilitate replication and assessment of the inventory.

Consistent. All elements should be internally consistent with inventories of other years. Inventories are considered consistent if a Party uses the same methodologies and data sets across all years.

Comparable. Estimates should be comparable among Parties and use accepted methodologies and formats, including allocating emissions to the six economic sectors defined by IPCC—energy, industrial processes, solvent and other product use, agriculture, land-use change and forestry, and waste.

Complete. Inventories should cover all sources and sinks and all gases included in the guidelines.

Accurate. Estimates should not systematically over- or underestimate true emissions as far as can be judged and should reduce uncertainties as far as practical.

Annex I nations are to submit inventories annually consisting of two components—inventory data in a common reporting format and a national inventory report—both of which are publicly available on a Web site maintained by the Secretariat. The common reporting format calls for emissions estimates and the underlying activity data and emissions factors for each of six sectors—energy, industrial processes, solvent and other product use, agriculture, land-use change and forestry, and waste. It also calls for data on the major sources that contribute to emissions in each sector. The inventory data are to reflect a nation's most recent reporting year as well as all previous years back to the base year, generally 1990. The 2010 reporting format called for nearly 150,000 items of inventory data and other information from 1990 through 2008. The common format and underlying detail facilitate comparisons across nations and make it easier to review the data by, for example, enabling automated checks to ensure emissions were properly calculated and to flag inconsistencies in data reported over time. The national inventory report should explain the development of the estimates and data in the common reporting format and should enable reviewers to understand and evaluate the inventory.
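The activity-data-times-emissions-factor arithmetic described above can be sketched in a short script. This is a minimal illustration only: the source names, activity figures, and emissions factors below are invented placeholders, not values from the IPCC guidelines or any national inventory.

```python
# Sketch of the IPCC-style estimation approach: emissions are estimated,
# not measured, as activity data x emissions factor, summed over sources.
# All numbers below are hypothetical placeholders.

def estimate_emissions(sources):
    """Return total estimated emissions in metric tons of CO2 equivalent."""
    return sum(s["activity"] * s["emissions_factor"] for s in sources)

# Hypothetical energy-sector sources: activity data in physical units,
# emissions factors in metric tons CO2e per unit of activity.
energy_sector = [
    {"name": "passenger cars", "activity": 1_000_000,    # gallons of gasoline
     "emissions_factor": 0.0089},                        # t CO2e per gallon
    {"name": "coal-fired power", "activity": 500_000,    # tons of coal burned
     "emissions_factor": 2.1},                           # t CO2e per ton
]

total = estimate_emissions(energy_sector)
print(f"Estimated energy-sector emissions: {total:,.0f} t CO2e")
```

National inventories repeat this multiply-and-sum pattern across thousands of source categories, substituting more detailed methods where the guidelines and available data allow.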
The report should include, among other things, descriptions of the methods used to calculate emissions estimates, the rationale for selecting the methods used, and information about the complexity of methods and the resulting precision of the estimates; information on quality assurance procedures used; discussion of any recalculations affecting previously submitted inventory data; and information on improvements planned for future inventories. The Secretariat coordinates an inventory review process that, among other things, assesses the consistency of inventories from Annex I nations with reporting guidelines. The purposes of this process are to ensure that Parties are provided with (1) objective, consistent, transparent, thorough, and comprehensive assessments of the inventories; (2) adequate and reliable information on inventories from Annex I Parties; (3) assurance that inventories are consistent with IPCC reporting guidelines; and (4) assistance to improve the quality of inventories. In supporting the inventory review process, the Secretariat provides scientific and technical guidance on inventory issues and coordinates implementation of Convention guidelines. Inventory reviews are supervised by the head of the reporting, data, and analysis program within the Secretariat. By June each year, the Secretariat checks each inventory for completeness and format, called an initial check, and conducts a preliminary assessment before submitting it to an inventory review team for examination. The Secretariat assembles inventory review teams composed of scientists and other experts from around the world to review inventories from all Annex I Parties according to the Convention’s review guidelines. 
The inventory review teams assess inventories in September by reviewing activity data, emissions factors, methodologies, and other elements of an inventory to determine if a nation has employed appropriate standards, methodologies, and assumptions to compute its emission estimates. From February through March, the inventory review teams develop inventory review reports outlining their findings. In accordance with the Convention's principle of common but differentiated responsibilities, the format and frequency of non-Annex I nations' inventories differ from those for Annex I nations. The reporting guidelines, which have evolved over time, encourage non-Annex I nations to use the IPCC methodological guidelines in developing their inventories, but do not specify that they must be used. While non-Annex I nations submit inventories to the Secretariat, their inventories are not stand-alone documents. Rather, a non-Annex I nation's inventory is a component of its national communication, a report that discusses steps the nation is taking or plans to take to implement the Convention. Non-Annex I nations do not have to use the common reporting format or submit a national inventory report. Moreover, they do not submit an inventory each year; instead, the Parties to the Convention determine the frequency of their submissions. Parties have not agreed on a regular frequency for non-Annex I nations to submit their inventories. According to expert inventory review teams, the 2009 greenhouse gas inventories of seven Annex I nations were generally comparable and of high quality, although some of their emissions estimates have substantial uncertainty. In contrast, we found that the most recent inventories from seven non-Annex I nations, although they met reporting guidelines, were of lower quality and generally not comparable. Finally, experts identified several barriers to improving inventory comparability and quality.
All of the inventories submitted in 2009 by the seven selected Annex I nations were generally comparable and of high quality, according to the most recent inventory reviews conducted by expert review teams under the Convention. The reviews found that six of the seven nations— Australia, Canada, Japan, Russia, the United Kingdom, and the United States—used appropriate methodologies and data, employed reasonable assumptions, and did not systematically either over- or underestimate emissions in their 2009 inventories (covering data from 1990 through 2007). The one exception to this was Germany’s 2009 inventory, which the review team said did not follow guidelines for its agricultural emissions, in part because of its attempt to use newer methods. The change significantly reduced estimated emissions from agriculture, though the sector is a relatively small contributor to Germany’s total emissions. One inventory reviewer familiar with Germany’s 2009 inventory said its overall quality was fairly good. In addition, Germany appears to have addressed the issue of its agricultural emissions in its 2010 inventory submission by returning to its previous methods, which had the effect of increasing its estimates of emissions from agriculture. Experts said that the seven selected inventories were generally comparable, which means they generally used agreed-upon formats and methods. In addition, nine experts we interviewed said they were of high quality and did not have major flaws. These findings show significant improvement in the seven nations’ inventories since our 2003 report. For example, we reported in 2003 that both Germany’s 2001 submission (covering data through 1999) and Japan’s 2000 submission (covering data through 1998) lacked a national inventory report, a critical element that explains the data and methods used to estimate emissions. Nearly all Annex I nations—including Germany and Japan—now routinely submit this report. 
In addition, the review team found Russia’s 2009 inventory showed major improvements. For example, Russia included a full uncertainty analysis for the first time and improved its quality assurance and quality control plan. Since our 2003 report, these 7 selected nations, and 34 other Annex I Parties, have submitted about seven inventories, which were generally on time and more comprehensive than previous inventories (see fig. 3). The inventory review reports noted several potential problems that, while relatively minor, could affect the quality of emissions estimates. For example, the review of the 2009 U.S. inventory noted that assumptions about the carbon content of coal are outdated because they are based on data collected between 1973 and 1989. The effect on emissions estimates is not clear, but the carbon content of the coal burned as fuel may change over time, according to the inventory review report. Any such change would affect emissions, since coal is the fuel for about half of all U.S. electricity generation. The U.S. inventory also used a value from a 1996 agricultural waste management handbook to estimate nitrous oxide emitted from livestock manure. The inventory review noted that livestock productivity, especially for dairy cows, has increased greatly since 1996, which would also increase each animal’s output of nitrous oxide emissions. Using the IPCC’s methodology for calculating emissions from excreted nitrogen, we estimated that this would lead to an underestimate of roughly 4.7 percent of total nitrous oxide emissions and 0.2 percent of total greenhouse gas emissions. Finally, the review of Russia’s 2009 inventory noted that it did not include carbon dioxide emissions from organic forest soils, which the inventory review report said could be significant. 
The inventory reviews and one expert we interviewed attributed many of the potential underestimations to a lack of data or an adequate IPCC-approved methodology and said that nations were generally working to address the issues. Even though the review teams found these seven inventories generally comparable and of high quality, the nations reported substantial uncertainty in many of the emissions estimates in their inventories. The term “uncertainty” denotes a description of the range of values that could be reasonably attributed to a quantity. All of the Annex I nations’ inventories we reviewed contained quantitative estimates of uncertainty. As shown in table 1, six of the seven nations reported uncertainties for their overall estimates between plus or minus 1 and 13 percent, and Russia reported overall uncertainty of about plus or minus 40 percent. That equates to an uncertainty of 800 million metric tons of carbon dioxide equivalent, slightly more than Canada’s total emissions in 2007. Russia’s relatively large uncertainty estimate could stem from several factors, such as less precise national statistics. In addition, Russia generally used aggregated national data rather than data that account for variation within the nation. This would increase uncertainty because aggregated data do not account for important differences that affect emissions, such as different types of technology used in the energy sector. Japan and Australia reported very low uncertainty in 2009. The inventory review report noted that Japan’s estimate was lower than estimates from other nations, but neither the report nor Japan’s inventory provides a full explanation. The review team for Australia said that its uncertainty ranges were generally consistent with typical uncertainty ranges reported for its sectors. 
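The arithmetic behind these uncertainty figures can be illustrated with a short sketch. The per-category emissions and uncertainty percentages below are invented for illustration; the root-sum-of-squares combination mirrors the IPCC's simple error-propagation approach for uncorrelated categories, and the final lines reproduce the conversion in the Russia example (plus or minus 40 percent of roughly 2 billion metric tons is about 800 million metric tons).

```python
from math import sqrt

def overall_uncertainty_pct(categories):
    """Combine per-category uncertainties (assumed uncorrelated) into an
    overall percentage: square root of the sum of squared absolute
    uncertainties, divided by total emissions."""
    total = sum(emissions for emissions, _ in categories)
    absolute = sqrt(sum((emissions * pct / 100.0) ** 2
                        for emissions, pct in categories))
    return 100.0 * absolute / total

# Hypothetical categories: (emissions in Mt CO2e, uncertainty in +/- percent).
categories = [
    (1500.0, 5.0),    # fossil fuel combustion: large source, well understood
    (300.0, 100.0),   # agricultural soils: small source, highly uncertain
    (200.0, 150.0),   # land use: small source, highly uncertain
]
print(f"Overall uncertainty: +/-{overall_uncertainty_pct(categories):.1f}%")

# Converting a percentage uncertainty to absolute terms, as in the
# Russia example above.
total_mt = 2000.0  # ~2 billion metric tons CO2e (inferred from the text)
print(f"+/-40% of {total_mt:,.0f} Mt CO2e is {0.40 * total_mt:,.0f} Mt CO2e")
```

The sketch shows why a small but highly uncertain category can dominate a nation's overall uncertainty even when a much larger source, such as fossil fuel combustion, is known to within a few percent.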
Despite high levels of uncertainty in some instances, the inventory review teams found the seven inventories to be generally of high quality because the teams judge quality based on consistency with guidelines rather than strictly on the precision of the estimates. The uncertainty of emissions estimates also varies among the different sectors of a nation’s economy. For example, uncertainty is relatively low for estimates of carbon dioxide emissions from the combustion of fossil fuels because the data on fuel use are generally accurate and the process that generates emissions is well understood. Uncertainty is much higher for certain categories within agriculture and land-use. For example, some nations report that the uncertainty in their estimates of nitrous oxide emissions from agricultural soils is greater than 100 percent, in some cases much greater. According to a March 2010 report by a National Research Council committee, this results from scientific uncertainty in emission factors. Table 2 shows the contribution of the most important sources of uncertainty in the U.S. inventory. The sources of uncertainty in the other six Annex I nations’ inventories follow a broadly similar pattern: the largest sources of uncertainty are either large sources of emissions—such as fossil fuel combustion and land use—or small but highly uncertain categories—such as agricultural soils. Shortcomings in inventory reporting guidelines may decrease the quality and comparability of emissions estimates for land use, according to two experts we interviewed. For example, the guidelines state that nations should report all emissions from “managed forests,” but they have broad latitude in assigning forested land to this category. 
This choice may have a major effect on emissions; one expert said that it would be possible for some nations with large forested areas, such as Brazil, to offset all their emissions from deforestation by designating large areas of protected forest as managed and taking credit for all of the carbon dioxide absorbed by those forests. To address this potential inconsistency, the National Research Council committee report recommended taking inventory of all land-based emissions and sinks for all lands, not just man-made emissions on managed lands. Others said that designating land as managed forest remains the most practical way to estimate man-made emissions and removals because other methods are not well developed. Inventories from the non-Annex I nations we reviewed met the Convention’s relevant reporting guidelines. All of the seven non-Annex I nations we reviewed—Brazil, China, India, Indonesia, Malaysia, Mexico, and South Korea—had submitted their first inventories. In addition, Mexico submitted its second, third, and fourth inventories, and South Korea submitted its second. Secretariat officials said the other selected nations could submit their second inventories, as part of their national communications, over the next few years. The reporting guidelines call for non-Annex I nations to estimate emissions for 1990 or 1994 in their first submission, and for 2000 in their second submissions, and to include estimates for carbon dioxide, methane, and nitrous oxide in all submissions. We found that all selected non-Annex I nations reported for relevant years and these three gases, but we did not assess whether nations used appropriate methodologies and assumptions to develop these estimates. However, the seven inventories were generally not comparable and were of lower quality than inventories from Annex I nations in four ways: 1. Inventories from select non-Annex I nations were outdated. 
The most recent inventories from selected Annex I nations estimate emissions for 1990-2008. However, except for Mexico and South Korea, the most recently submitted inventories from selected non-Annex I nations are for emissions for 1994. (See figure 4.) 2. Some selected non-Annex I nations' inventories do not estimate emissions of all gases. As shown in figure 4, inventories from China, India, Indonesia, and Malaysia did not include estimates of the emissions of synthetic gases. Independent estimates show that while synthetic gases were only 1 percent of global emissions in 2005, the emissions of synthetic gases increased by 125 percent between 1990 and 2005. Their emissions have also grown substantially in some non-Annex I nations, such as China, which had the largest absolute increase in synthetic gas emissions among all non-Annex I nations between 1990 and 2005, according to information from the International Energy Agency (IEA). 3. Select non-Annex I nations' inventories, to varying degrees, lacked critical elements. We assessed inventories for several elements that, according to reporting guidelines, can improve the quality and transparency of inventories. First, only Brazil and Mexico provided a quantitative analysis of the uncertainty of their estimates. Second, we found that all inventories lacked adequate documentation of methodologies, emission factors, and assumptions and that most lacked descriptions of quality assurance and quality control measures. Third, none of the select nations reported in a comparable format, instead using different formats and levels of aggregation. For example, China estimated some methane emissions from various agricultural subsectors but grouped some of these estimates into only one category. In contrast, South Korea estimated these same emissions but reported them in separate categories. Overall, the lack of documentation and of a common reporting format limited our ability to identify and compare estimates across nations.
Finally, only Mexico included an analysis of its key categories of emissions. 4. National statistics from some selected non-Annex I nations are less reliable. According to three experts we interviewed and the literature, some non-Annex I nations have less reliable national statistical systems than most Annex I nations. These systems are the basis for emissions estimates, and experts noted that the estimates are only as good as the underlying data. For example, researchers estimated that the uncertainty of carbon dioxide emissions from China’s energy sector was as high as 20 percent. In contrast, reported uncertainties in estimates of carbon dioxide emissions from fossil fuel use in many developed nations are less than 5 percent. In addition, the International Energy Agency noted a relatively large gap between its energy statistics and those used in the national inventories of some non-Annex I nations, highlighting a need for better collection of data and reporting of energy statistics by some non-Annex I nations. 5. Emissions from non-Annex I nations grew substantially between 1990 and 2005; the increase was about equal to the combined 2005 annual emissions of Canada, Germany, Japan, and Russia. Recognizing the importance of information from non-Annex I nations, in March 2010, a National Research Council committee recommended that Framework Convention Parties extend regular, rigorous inventory reporting and review to developing nations. 6. Experts we interviewed identified several barriers to improving the comparability and quality of inventories. First, 10 of the 12 experts who provided views about barriers said that a lack of data and scientific knowledge makes some types of emissions difficult to estimate for both Annex I and non-Annex I nations. 
For example, current estimates of emissions related to biological processes, such as those from agriculture and land use, can be uncertain because of limited data. Specifically, nations do not always collect data on livestock nutrition, which can affect methane emissions. In addition, emissions related to some biological processes are difficult to estimate because they are not fully understood or are inherently variable. Emissions related to agriculture, for example, depend on the local climate, topography, soil, and vegetation. In March 2010, a National Research Council committee recommended further scientific research and data collection to reduce the uncertainties in estimates of agriculture, forestry, and land-use emissions. Such emissions are important, contributing about one quarter of total global emissions in 2005, the most recent year for which global data were available. They are particularly important for some non-Annex I nations, where they can be the largest sources of emissions. In Brazil and Indonesia, for example, agriculture and land-use emissions accounted for about 80 percent of total emissions in 2005. Second, 11 experts said that non-Annex I nations have limited incentives to produce better inventories. The current international system encourages Annex I nations with commitments under the Kyoto Protocol to improve their inventories. This is because their ability to participate in the Kyoto Protocol’s flexibility mechanisms—which provide a cost-effective way to reduce emissions—is linked to, among other things, the quality of certain aspects of their inventories. Late submissions, omissions of estimates, or other shortcomings can all affect nations’ eligibility to use these mechanisms. Therefore, low-quality inventories can affect nations’ ability to lower the costs of achieving their emissions targets. 
While four experts we interviewed said that this linkage between inventories and the flexibility mechanisms in the Kyoto Protocol has driven improvements in many Annex I nations’ inventories, incentives for non-Annex I nations are limited. Furthermore, four experts said that some non-Annex I nations may avoid additional international reporting because they see it as a first step toward adopting commitments to limit emissions. In addition, experts and the national communications of selected non-Annex I nations identified several other barriers to improving the quality and comparability of inventories from non-Annex I nations, including: Less stringent reporting guidelines and lack of review. Reporting guidelines differ between Annex I and non-Annex I nations. Non-Annex I nations do not need to annually submit inventories or to report on as many gases, for as many years, with as much detail, or in the same format as Annex I nations. They also do not have to follow all IPCC methodological guidelines, although they are encouraged to do so. Six experts said that this less stringent reporting regime has contributed to the lack of quality and comparability in inventories from non-Annex I nations. In addition, non-Annex I nations have not benefited from the feedback of technical reviews of their inventories, according to one expert. Financial and other resource constraints. Though eight experts generally said that many non-Annex I nations may lack needed financial and other resources, they differed on the magnitude and importance of additional international support. Non-Annex I nations may lack resources to improve data collection efforts, conduct additional research, or establish national inventory offices. The developed nations of Annex I provided the majority of about $80 million that has been approved for the latest set of national communications, which include inventories, from non-Annex I nations. 
However, one expert said that this has not been sufficient to fully support the activities needed. In their national communications, China and India indicated needing funding to, for example, improve data collection. Two experts said that improving non-Annex I nations’ inventories may require significant resources. On the other hand, others said that the funds involved may be relatively small, or that financial constraints may not be significant, at least for major non-Annex I nations. For example, according to a report from a National Research Council committee, significant improvements in inventories from 10 of the largest emitting developing nations could be achieved for about $11 million over 5 years. While experts disagreed about the importance of additional funding, three said that international funding should support capacity development in each nation. They said that more continuous support would improve on the current, project-based method of funding, which encourages nations to assemble ad hoc teams that collect data, write a report, and then disband. Lack of data and nation-specific emission factors. According to four experts and the Convention Secretariat’s summary of constraints identified by non-Annex I nations in their initial national communications, the lack or poor quality of data and a reliance on default emission factors limit the quality of inventories. Most non-Annex I nations identified missing or inadequate data as a major constraint for estimating emissions in at least one sector. For example, Indonesia reported that it did not estimate carbon emissions from soils because the data required were not available. Though inventory guidelines encourage the use of nation-specific emission factors that reflect national circumstances, most non-Annex I nations use default values provided by the IPCC. 
The reliance on default values can increase uncertainties of estimates because national circumstances can differ significantly from the defaults. For example, Denmark’s nation-specific emission factor for methane emissions from sheep is twice as large as the default. Thus, if Denmark had used the default value, it would have underestimated its emissions from sheep by half. Experts said that the process for reviewing inventories from Annex I nations has several notable strengths. They also identified three limitations, which may present challenges in the future. Moreover, we found that although the review process includes steps to help ensure the quality of reviews, there is no independent assessment of the process’s operations. Finally, there is no review process for inventories from non-Annex I nations. Eight of the experts we interviewed said the process for reviewing inventories from Annex I nations has several notable strengths that enable it to generally meet its goals of providing accurate information on the quality of inventories and helping nations improve their inventories. (Figure 6 below depicts the inventory review process.) Experts identified four broad categories of strengths: Rigorous review process. Five experts said the rigorous review process gives them confidence that review teams can identify major problems with inventory estimates. For example, the Secretariat and review teams compare data, emission factors, and estimates from each inventory (1) from year to year, (2) with comparable figures in other inventories, and (3) with data from alternative sources, such as the International Energy Agency (IEA) and the United Nations Food and Agriculture Organization. Reviewers also ensure that methods used to estimate emissions are appropriate and meet accepted guidelines. In addition, IEA officials inform the inventory review process by reviewing energy data in inventories and independently identifying issues for review teams to consider further. 
Qualified and respected reviewers. Three experts we interviewed said that well-qualified and widely respected inventory reviewers give the process credibility. Secretariat officials told us that a relatively small number of people in the world have the expertise to evaluate inventories without further training. Parties nominate reviewers, including leading scientists and analysts, many of whom are also inventory developers in their home nations. Reviewers must take training courses and pass examinations that ensure they understand inventory guidelines and appropriate methodologies before serving on a review team. Two experts said reviewers’ experience and qualifications allow them to assess the strengths and weaknesses in inventories, including whether nations use appropriate methodologies. This is particularly important because some nations use advanced or nation-specific approaches, which can be difficult to assess. Capacity building. Three experts said the inventory review process builds expertise among reviewers from developed and developing nations. Specifically, they said the review process brings inventory specialists together from around the world, where they learn from each other and observe how various nations tackle challenges in compiling their inventories. Two experts said that reviewers return home and can use the knowledge and contacts gained from their review team experiences to improve their national inventories. Constructive feedback. Two experts said that the inventory reviews provide constructive feedback to improve inventories from Annex I nations. This feedback includes identifying both major and minor shortcomings in inventories. Secretariat officials said that review teams, when they identify issues, must also offer recommendations for addressing them. 
For example, reviewers noted Russia’s 2009 use of default assumptions for much of its uncertainty analysis, and recommended that Russia develop values that better match the methods and data used in making the emissions estimates. For these and other reasons, three experts we interviewed said that the review process has helped improve the quality of inventories from Annex I nations. Secretariat officials said that when review teams point out discrepancies or errors, many nations revise and resubmit estimates to correct problems. For example, Australia revised its estimates of carbon dioxide emissions from croplands after a review team pointed out that changes in croplands management affect emissions. Australia’s revisions decreased estimated emissions from croplands in 1990 by 138 percent, meaning the revisions had the effect of moving croplands from an estimated source of greenhouse gas emissions to a sink removing greenhouse gases from the atmosphere. For nations with Kyoto Protocol commitments, review teams may adjust estimates if they are not satisfied with a response to their findings. For example, the team reviewing Greece’s 2006 inventory concluded that estimates in several categories were based on methods, data, and emissions factors that did not adhere to reporting guidelines. The review team was not satisfied with Greece’s response, and recommended six adjustments to Greece’s estimates. These adjustments lowered Greece’s official baseline energy sector emissions by 5 percent, from 82 million to 78 million metric tons of carbon dioxide equivalent. Experts, literature, and several nations identified some limitations of the review process, which may present challenges in the future if, for example, the process is expanded to incorporate non-Annex I nations. First, six experts we interviewed said the process does not independently verify emissions estimates or the quality of the underlying data. 
Review teams primarily ensure the consistency of inventories with accepted standards but do not check underlying activity data, such as the amount of fuel burned. Review teams do compare underlying data with those reported in other sources, but these other sources are not fully independent because they also come from the nations that supply the inventories. Two experts said that more thorough verification might involve comparing estimates to observed measurements or independently constructing estimates from raw data. However, such approaches may be costly and, as a National Research Council committee reported, the other methods currently available do not allow independent verification of estimates. Furthermore, one expert said that the review of emissions estimates from agricultural soils and land-use sectors may be especially limited because of a lack of data and the inherent difficulty in measuring these emissions. The inability to more thoroughly assess inventories may reduce the reliability of review findings. For example, the inventory review process may have overlooked a significant shortcoming in at least one review. Specifically, in 2009, the national audit office of one Annex I nation found that its national inventory estimates may understate actual emissions by about a third because the inventory preparers used questionable statistics. The relevant agencies in that nation generally agreed with the audit office’s recommendations based on its assessment. The review for that inventory, however, did not identify this issue. Second, four experts we interviewed and several nations have expressed concerns about inconsistency across reviews, though the magnitude of this potential problem is unclear. The concerns relate to the potential for review teams to inconsistently apply standards when assessing an inventory. 
Secretariat officials said the process of reviewing inventories involves some degree of subjectivity, since reviewers use professional judgment in applying inventory review guidelines to a specific inventory. As a result, review teams might interpret and apply the guidelines differently across nations or over time. Four experts we spoke with, as well as several nations, have raised such concerns. For example, the European Community reported that some nations have received, on occasion, contradictory recommendations from inventory review teams. Secretariat officials said lead reviewers are ultimately responsible for consistent reviews but that Secretariat staff assist the review teams during the process, and two Secretariat staff read through all draft inventory reports, in part to identify and resolve possible inconsistencies. In addition, lead reviewers develop guidance on consistency issues at annual meetings. The magnitude of this potential problem is unclear, in part because it has not been evaluated by an independent third party. Third, three experts and officials we interviewed said there are not enough well-qualified reviewers to sustain the process. Three experts and Secretariat officials said that they did not know whether this shortage of available experts has affected the overall quality of reviews. The Secretariat has, in the past, reassigned staff and reviewers from work on national communications to the review of inventory reports, and it provides training to all reviewers to increase capacity and retain qualified reviewers. However, Secretariat officials said it may be difficult to sustain the quality of reviews in the future if the inventory review process is expanded to include inventories from non-Annex I nations without receiving additional resources, since this would substantially increase the demands on the review process. 
The review process includes steps to help ensure the quality of reviews, but we found that its quality assurance framework does not independently assess the process. Secretariat officials said that lead reviewers oversee the drafting of review reports, and review officers, lead reviewers, and review teams maintain a review transcript to keep track of potential issues they have identified with inventories, of nations’ responses to those issues, and of their resolution. However, lead reviewers, in the report of their 2009 meeting, expressed concern that these review transcripts are sometimes incomplete and are not always submitted to the Secretariat. In providing information on their experience with the review process and recommendations for improvements, the nations of the European Community suggested in late 2008 that the review process would benefit from establishing clear quality assurance and quality control procedures as well as from an annual analysis of its performance in relation to its objectives. Secretariat officials said they designated a Quality Control Officer who, along with the supervisor of the review process, reads all draft review reports and may identify problems and check underlying information in reports. Furthermore, Secretariat officials said that lead reviewers meet annually to discuss the review process, assess and prepare guidance about specific issues or concerns about the review process, and develop summary papers to report to Parties. Nonetheless, the review process lacks an independent assessment of its operation. We examined several other review processes and found that periodic external assessments by independent entities can provide useful feedback to management and greater assurance that the review processes are working as intended. 
Inventory guidelines call for Annex I nations to carry out quality assurance activities for their own inventories, including a planned system of reviews by personnel not directly involved in the process. Though some United Nations and Framework Convention oversight bodies have the ability to assess the inventory review process, none have done so. The Secretariat has internal auditors, but they have not audited the inventory review process and Secretariat officials said they did not know of any plans to do so. Although the Compliance Committee of the Kyoto Protocol has reviewed aspects of the review process, issuing a report with information on consistency issues, this report was not a systematic review and was not developed by people independent of the review process. As stated earlier, inventories from non-Annex I nations do not undergo formal reviews. The Secretariat compiled a set of reports summarizing inventory information reported by non-Annex I nations, such as inventory estimates, national circumstances, and measures to address climate change. However, Secretariat officials said they had not assessed the consistency of non-Annex I nations’ inventories with accepted guidelines. These officials also said that they did not plan to compile another report covering non-Annex I nations’ second inventories because the Parties have not agreed to this. An expert we interviewed said that the quality of inventories from non-Annex I nations is unknown because their inventories have not been formally reviewed. Two experts said that some non-Annex I nations have resisted increased scrutiny of their inventories because of sovereignty concerns, meaning that nations do not want to disclose potentially sensitive information or data to other political bodies. The growth in greenhouse gas emissions along with lower quality inventories in some non-Annex I nations is likely to increase the pressure for a public review of their inventories in the future. 
Most experts we interviewed said that the inventory system for Annex I and non-Annex I nations is generally sufficient for monitoring compliance with current agreements. However, they said that the system may not be sufficient for monitoring non-Annex I nations’ compliance with future agreements that include commitments for them to reduce emissions. Eleven of the experts we interviewed said the inventory system— inventories and the process for reviewing them—is generally sufficient for monitoring compliance with current agreements, though five raised some concerns. All 11 of the experts who provided their views on the implications of the inventory system expressed confidence that inventories and the Convention’s inventory review process are suitable for monitoring Annex I nations’ compliance with existing commitments to limit emissions. In part, this is because emissions in many Annex I nations primarily relate to energy and industrial activity, which can be more straightforward to estimate and monitor than emissions from land use and agriculture. Nevertheless, five experts raised at least one of two potential challenges facing the current system. First, three said they were cautious until they see how the system performs under the more demanding conditions of submitting and reviewing inventories that will show whether nations have met their binding emission targets under the Kyoto Protocol. When inventories cover years in the Protocol’s commitment period, nations may be more concerned about meeting emissions targets, and review teams may face pressure to avoid negative findings. Second, three experts said that flexibilities in the current inventory system or difficulties in measuring and verifying emissions from some agriculture and land-use segments could create complications for international emissions trading under the Kyoto Protocol. 
Emissions trading under the Kyoto Protocol allows nations with emissions lower than their Kyoto targets to sell excess allowances to nations with emissions exceeding their targets. Though Parties to the Kyoto Protocol developed and agreed to the current system, three experts indicated that ensuring greater comparability of estimates between nations and types of emissions might be useful for emissions trading. For non-Annex I nations, eight experts said that their lower quality inventories and lack of review do not present a current problem since these nations do not have international commitments to limit their emissions. Seven of the experts said that the inventory system is sufficient to support international negotiations. To develop agreements, two experts said, negotiators need information on current and historic emissions from the nations involved. Annex I nations submit this information in their annual emissions inventories, the most recent of which cover emissions from 1990 to 2008. Although emissions estimates in most non-Annex I nations’ inventories are outdated, seven experts said that there are enough independent estimates to provide negotiators with adequate information. State officials said that independent estimates are useful, but official national inventories would be preferable because they can lead to more constructive discussions and can help create capacity in nations to better measure emissions. In international negotiations, State has emphasized the need for better information on emissions from all high-emitting nations, including non-Annex I nations. Different types of commitments would place different demands on the inventory system. Thus, the implications of the state of the inventory system for a future agreement will largely depend on the nature of that agreement. For Annex I nations, eight experts said that future commitments were likely to resemble current commitments and therefore the inventory system is likely to be sufficient. 
However, for non-Annex I nations, if future agreements include commitments to limit emissions, the current system is not sufficient for monitoring their compliance, according to nine experts. This is because non-Annex I nations do not submit inventories frequently, the quality of their inventories varies, and they do not undergo an independent technical review. Additional reporting and review could pose challenges since it could take time for non-Annex I nations to improve their inventories and Secretariat officials said that adding non-Annex I nations to the current inventory review process could strain the capacity of that system. Some types of commitments by non-Annex I nations could be especially difficult to monitor and verify, according to experts. In the nonbinding 2009 Copenhagen Accord, many nations submitted the actions they intended to take to limit their greenhouse gas emissions, with Annex I nations committing to emissions targets for 2020 and non-Annex I nations announcing various actions to reduce emissions. Experts identified several challenges with monitoring the implementation of some of the actions proposed by non-Annex I nations (see table 3). For example, two experts said that monitoring emissions reductions from estimates of future business-as-usual emissions may prove challenging. They said this is because such actions may require Parties to estimate reductions from a highly uncertain projection of emissions that would have otherwise occurred. Parties would also have to develop and agree on guidelines to estimate and review business-as-usual emissions in addition to actual emissions. Similarly, monitoring reductions in the intensity of greenhouse gas emissions—emissions per unit of economic output, or gross domestic product—could pose challenges because of uncertainties in estimates of gross domestic product. 
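The measurement challenge with intensity commitments can be sketched with standard error propagation (an illustrative aside, not a calculation from this report): if emissions $E$ and gross domestic product $G$ are estimated independently, the relative uncertainty of the intensity $I = E/G$ combines the uncertainties of both inputs:

```latex
I = \frac{E}{G},
\qquad
\frac{\delta I}{I} \;\approx\; \sqrt{\left(\frac{\delta E}{E}\right)^{2} + \left(\frac{\delta G}{G}\right)^{2}}
```

For example, a 5 percent uncertainty in emissions combined with a 10 percent uncertainty in GDP yields roughly an 11 percent uncertainty in the intensity estimate, so even a well-measured emissions total inherits the uncertainty of the underlying economic statistics.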
One expert said that these challenges arise because the Parties to the Convention created the current inventory system to monitor compliance and evaluate progress among Annex I nations with national targets. This expert added that Parties to a new agreement may need to supplement the system to support the types of actions under consideration by non-Annex I nations. Eight of the experts we interviewed said that Parties to a future agreement could overcome or mitigate many of the challenges related to inventories. For example, two experts said that Parties could design agreements that rely less on emissions estimates that are inherently uncertain or difficult to verify. For example, quantitative targets could apply only to sectors or gases that are relatively easy to measure and verify, such as carbon dioxide emissions from the burning of fossil fuels. Three experts said that barriers other than the inventory system pose greater challenges to designing and reaching agreements on climate change. For example, nations disagree on the appropriate emissions limits for developed and developing nations. According to three experts, such disagreements were more of an obstacle to a comprehensive agreement in the latest round of negotiations in Copenhagen than were inventory issues. In addition, one expert pointed out that Parties to international agreements generally have limited ability to get other Parties to comply. For example, at least one nation with a binding emissions target under the Kyoto Protocol is unlikely to meet its target based on current inventory estimates and policies, according to this expert. Nations may be reluctant to agree to an international agreement until they have some assurance that other nations will follow through on their commitments. High quality and comparable information on national greenhouse gas emissions is critical to designing and implementing international responses to climate change. 
The nations we reviewed meet their inventory reporting obligations, and review reports indicate this has resulted in generally high quality inventories from the seven highest emitting Annex I nations. However, the current inventory system does not request high quality emissions information from non-Annex I nations, which account for the largest and fastest growing share of global emissions. We found that the inventories from seven selected high emitting non-Annex I nations were generally outdated, not comparable, and of lower quality than inventories from Annex I nations. The existing gap in quality and comparability of inventories across developed and developing nations makes it more difficult to establish and monitor international agreements, since actions by both developed and developing nations will be necessary to address climate change under future international agreements. As a recent National Research Council committee study pointed out, extending regular reporting and review to more nations may require external funding and training, but the resources needed for the largest emitting developing nations to produce better inventories are relatively modest. While our work suggests that the current inventory review process has notable strengths, we identified limitations that may present challenges in the future. For example, some experts and nations have reported concerns about inconsistent reviews and about whether resources will be sufficient in the future. Stresses on the review process are likely to increase as review teams begin to review inventories that cover years in which some nations have binding emissions targets and if inventories from non-Annex I nations are subjected to inventory review under a future agreement. 
The Convention Secretariat has internal processes in place to help ensure quality reviews, but no systematic independent review to assess the merits of concerns about the consistency of reviews or to assess the need for additional qualified reviewers in the future. Addressing these issues could benefit the Secretariat by further enhancing confidence in its processes and ensuring that it has the resources necessary to maintain high quality reviews. We are making two recommendations to the Secretary of State: 1. Recognizing the importance of high quality and comparable data on emissions from Annex I and non-Annex I Parties to the Convention in developing and monitoring international climate change agreements, we recommend that the Secretary of State continue to work with other Parties to the Convention in international negotiations to encourage non-Annex I Parties, especially high-emitting nations, to enhance their inventories, including by reporting in a more timely, comprehensive, and comparable manner, and possibly establishing a process for reviewing their inventories. 2. To provide greater assurance that the review process has an adequate supply of reviewers and provides consistent reviews, we recommend that the Secretary of State, as the U.S. representative to the Framework Convention, work with other Parties to the Convention to explore strengthening the quality assurance framework for the inventory review process. A stronger framework could include, for example, having an independent reviewer periodically assess the consistency of inventory reviews and whether the Secretariat has sufficient resources and inventory reviewers to maintain its ability to perform high quality inventory reviews. We provided State, the Convention Secretariat, and EPA with a draft of this report for review and comment. 
State agreed with our findings and recommendations and said that the department has been working with international partners in negotiations and through bilateral and multilateral partnerships to support and promote improved inventory reporting and review. State’s comments are reproduced in appendix III. The Convention Secretariat provided informal comments and said that it appreciated our findings and conclusions. The Secretariat said that the report provided a comprehensive overview of the existing system for reporting and reviewing inventories under the Convention and the Kyoto Protocol, as well as very useful recommendations on how this system could evolve in the future and steps to be taken to that end. The Secretariat noted our acknowledgement of the strengths of the inventory review process for Annex I nations. In addition, the Secretariat commented on our discussion of the limited availability of statistics against which to compare inventory data, saying that this lack of data does not imply that its review process lacks independent verification and that its review teams rely on available statistics in conducting their reviews. The Secretariat also said that the disparities in inventory quality across Annex I and non-Annex I nations should be viewed in the context of the “common but differentiated responsibilities” of developed and developing nations under the Convention. In addition, EPA and the Convention Secretariat provided technical comments and clarifications, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, Secretary of State, Administrator of EPA, Executive Secretary of the Convention Secretariat, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. 
If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Our review provides information on: (1) the comparability and quality of, and barriers to improving, inventories submitted by developed and developing nations to the United Nations Framework Convention on Climate Change (the Convention); (2) the strengths and limitations of the Convention’s inventory review process; and (3) the views of experts on the implications for agreements to reduce greenhouse gas emissions. To address all of these objectives, we reviewed relevant literature and Convention documents; met with officials from the Environmental Protection Agency (EPA), Department of State (State), the Convention Secretariat, and others to understand inventories, the inventory review process, and international negotiations; and summarized the views of experts on these issues. Specifically, to address the first objective, we selected a nonprobability sample of 14 nations: seven Annex I nations—Australia, Canada, Germany, Japan, Russia, the United Kingdom, and the United States—and seven non-Annex I nations—Brazil, China, India, Indonesia, Malaysia, Mexico, and South Korea—based on the size of their emissions (including emissions from land use, land-use change, and forestry). We selected the largest-emitting Annex I nations. For non-Annex I nations, we selected the largest-emitting nations that had submitted inventories, based on data available at the time. We omitted Myanmar because it did not submit an inventory to the Convention. We also ensured coverage of major variations in the selected nations’ income levels and the sectoral structure of their economies. 
To illustrate this variation, we used the World Bank’s data on per capita income levels, and data from the World Resources Institute and Convention Secretariat on emissions from the energy and industrial processes sectors. The 14 selected nations represented about two-thirds of the world’s greenhouse gas emissions not related to land use and forestry in 2005. Our findings are not generalizable to other nations because the selected nations are not necessarily representative. To assess the comparability and quality of inventories from Annex I nations, we summarized the results of the Convention’s 2009 reviews of inventories from selected Annex I nations, the most recent reviews available. We did not independently assess the validity of data, assumptions, or methodologies underlying the inventories we reviewed. Though we identified some limitations with the inventory review process, we believe that reviews provide reasonable assessments of the comparability and quality of inventories from selected Annex I nations. For non-Annex I nations, we assessed whether the latest inventories from selected nations included estimates for all major greenhouse gases (carbon dioxide, methane, nitrous oxide, hydrofluorocarbons, sulfur hexafluoride, and perfluorocarbons), for all sectors (energy, industrial processes, solvent and other product use, agriculture, land-use change and forestry, and waste), and for various years, and we checked for the inclusion of key inventory characteristics, including descriptions of uncertainty and of quality assurance and quality control measures, adequate documentation to support estimates, a comparable format, and analysis to identify emissions from key sources. Though inventory guidelines do not call for all of these elements from non-Annex I nations, we believe they are indicative of the quality and comparability of inventories. We did not independently assess emissions estimates from non-Annex I nations. 
We used the quality principles agreed to by Parties for Annex I nations—transparency, consistency, comparability, completeness, and accuracy—as the basis of our review of all inventories and in our discussions with experts. We also provide information on the reported uncertainty of emissions estimates, a more objective indicator of their precision, and on the timeliness of inventory submissions. To identify barriers to improving inventories, we reviewed relevant literature, including national communications from the seven selected non-Annex I nations, and summarized the views of our expert group. To address the second objective, we summarized the results of semi-structured interviews with experts and Secretariat officials. We reviewed Convention documentation about the inventory review process, including Compliance Committee and Subsidiary Body for Implementation reports. To address all three objectives, we summarized findings in the literature and the results of semi-structured interviews with experts. First, we identified 285 experts from our review of the literature and from recommendations by U.S. and international government officials and researchers. From this list, we selected 15 experts based on (1) the relevance and extent of their publications, (2) recommendations from others in the inventory field, and (3) the extent to which experts served in the Consultative Group of Experts (a group assembled by the Convention to assist non-Annex I nations in improving their national communications), served as lead reviewers in the Convention’s inventory review process, or were members of the National Research Council’s committee on verifying greenhouse gas emissions. 
Finally, to ensure coverage and a range of perspectives, we selected experts who had knowledge of key sectors, such as agriculture and energy; who came from both Annex I and non-Annex I nations and from key institutions; and who could provide the perspectives both of those involved in the inventory review process and of those not directly involved in preparing or reviewing inventories. Appendix II lists the experts we interviewed, which included agency and international officials, researchers, and members of inventory review teams. We conducted a content analysis to assess experts’ responses and grouped responses into overall themes. The views expressed by experts do not necessarily represent the views of GAO. Not all of the experts provided their views on all issues. We identify the number of experts providing views where relevant. During the course of our review, we interviewed officials, researchers, and members of inventory review teams from State, EPA, and the Department of Energy in Washington, D.C.; the Convention Secretariat’s office in Bonn, Germany; and from various think tanks, nongovernmental organizations, and international organizations. We conducted this performance audit from September 2009 to July 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Michael Hix (Assistant Director), Russell Burnett, Colleen Candrl, Kendall Childers, Quindi Franco, Cindy Gilbert, Jack Hufnagle, Michael Kendix, Thomas Melito, Kim Raheb, Ben Shouse, Jeanette Soares, Kiki Theodoropoulos, Rajneesh Verma, and Loren Yager made key contributions to this report. 
Climate Change: Observations on Options for Selling Emissions Allowances in a Cap-and-Trade Program. GAO-10-377. Washington, D.C.: February 24, 2010. Climate Change Policy: Preliminary Observations on Options for Distributing Emissions Allowances and Revenue under a Cap-and- Trade Program. GAO-09-950T. Washington, D.C.: August 4, 2009. Climate Change Trade Measures: Estimating Industry Effects. GAO-09-875T. Washington, D.C.: July 8, 2009. Climate Change Trade Measures: Considerations for U.S. Policy Makers. GAO-09-724R. Washington, D.C.: July 8, 2009. Climate Change: Observations on the Potential Role of Carbon Offsets in Climate Change Legislation. GAO-09-456T. Washington, D.C.: March 5, 2009. Climate Change Science: High Quality Greenhouse Gas Emissions Data are a Cornerstone of Programs to Address Climate Change. GAO-09-423T. Washington, D.C.: February 24, 2009. International Climate Change Programs: Lessons Learned from the European Union’s Emissions Trading Scheme and the Kyoto Protocol’s Clean Development Mechanism. GAO-09-151. Washington, D.C.: November 18, 2008. Carbon Offsets: The U.S. Voluntary Market is Growing, but Quality Assurance Poses Challenges for Market Participants. GAO-08-1048. Washington, D.C.: August 29, 2008. Climate Change: Expert Opinion on the Economics of Policy Options to Address Climate Change. GAO-08-605. Washington, D.C.: May 9, 2008. International Energy: International Forums Contribute to Energy Cooperation within Constraints. GAO-07-170. Washington, D.C.: December 19, 2006. Climate Change: Selected Nations’ Reports on Greenhouse Gas Emissions Varied in Their Adherence to Standards. GAO-04-98. Washington, D.C.: December 23, 2003. Climate Change: Information on Three Air Pollutants’ Climate Effects and Emissions Trends. GAO-03-25. Washington, D.C.: April 28, 2003. International Environment: Expert’s Observations on Enhancing Compliance With a Climate Change Agreement. GAO/RCED-99-248. Washington, D.C.: August 23, 1999. 
International Environment: Literature on the Effectiveness of International Environmental Agreements. GAO/RCED-99-148. Washington, D.C.: May 1999. Global Warming: Difficulties Assessing Countries’ Progress Stabilizing Emissions of Greenhouse Gases. GAO/RCED-96-188. Washington, D.C.: September 4, 1996.
Nations that are Parties to the United Nations Framework Convention on Climate Change periodically submit inventories estimating their greenhouse gas emissions. The Convention Secretariat runs a review process to evaluate inventories from 41 "Annex I" nations, which are mostly economically developed nations. The 153 "non-Annex I" nations are generally less economically developed and have less stringent inventory reporting guidelines. The Department of State (State) represents the United States in international climate change negotiations. GAO was asked to report on (1) what is known about the comparability and quality of inventories and barriers, if any, to improvement; (2) what is known about the strengths and limits of the inventory review process; and (3) views of experts on implications for current and future international agreements to reduce emissions. GAO analyzed inventory reviews and inventories from the seven highest-emitting Annex I nations and seven of the highest-emitting non-Annex I nations. GAO also selected and interviewed experts. Recent reviews by expert teams convened by the Secretariat found that the 2009 inventories from the selected Annex I nations--Australia, Canada, Germany, Japan, Russia, the United Kingdom, and the United States--were generally comparable and of high quality. For selected non-Annex I nations--Brazil, China, India, Indonesia, Malaysia, Mexico, and South Korea--GAO found most inventories were outdated and of lower comparability and quality. Experts GAO interviewed said data availability, scientific uncertainties, limited incentives, and different guidelines for non-Annex I nations were barriers to improving their inventories. The lack of comparable, high quality inventories from non-Annex I nations is important because they are the largest and fastest growing source of emissions, and information about their emissions is important to efforts to address climate change. There are no inventory reviews for non-Annex I nations. 
Experts said the inventory review process has notable strengths for Annex I nations as well as some limitations. The review process, which aims to ensure nations have accurate information on inventories, is rigorous, involves well-qualified reviewers, and provides feedback to improve inventories, according to experts. Among the limitations experts identified is a lack of independent verification of estimates due to the limited availability of independent statistics against which to compare inventories' data. Also, GAO found that the review process's quality assurance framework does not independently assess concerns about a limited supply of reviewers and inconsistent reviews, which could pose challenges in the future. Experts said Annex I nations' inventories and the inventory review process are generally sufficient for monitoring compliance with current agreements to reduce emissions. For non-Annex I nations, however, experts said the current system may be insufficient for monitoring compliance with future agreements, which may require more reporting. As part of ongoing negotiations to develop a new climate change agreement, State has emphasized the need for better information on emissions from high-emitting non-Annex I nations. While improving the inventory system is important to negotiations, some experts said disagreements about emissions limits for developed and developing nations pose a greater challenge. GAO recommends that the Secretary of State work with other Parties to the Convention to (1) continue encouraging non-Annex I Parties to improve their inventories and (2) strengthen the inventory review process's quality assurance framework. State agreed with GAO's findings and recommendations.
The Department of Agriculture’s Forest Service manages about 192 million acres of land—nearly 9 percent of the nation’s total surface area (about the size of California, Oregon, and Washington State combined) and about 30 percent of all federal lands. In fiscal year 1996, revenue generated from the sale or use of resources and lands within the National Forest System totaled about $0.9 billion. Over $2.0 billion in appropriations and over $0.4 billion in trust funds were available to manage the system’s 155 national forests. The Forest Service’s motto is “caring for the land and serving people.” Laws guiding the management of the national forests require the Forest Service to manage its lands under the principles of multiple use and sustained yield to meet the diverse needs of the American people. The Forest Service is required to plan for six renewable surface uses—outdoor recreation, rangeland, timber, watersheds and water flows, wilderness, and wildlife and fish. In addition, the agency’s guidance and regulations require the Forest Service to consider the production of nonrenewable subsurface resources—such as oil, gas, and hardrock minerals—in its planning. Under the Organic Administration Act of 1897, the national forests are to be established to improve and protect the forests within their boundaries or to secure favorable water flow conditions and provide a continuous supply of timber to citizens. The Multiple-Use Sustained-Yield Act of 1960 added the uses of outdoor recreation, range, watershed, and fish and wildlife. The act also requires the agency to manage its lands to provide high levels of all of these uses to current users while sustaining undiminished the lands’ ability to produce these uses for future generations (the sustained-yield principle). 
Under the National Forest Management Act of 1976 (NFMA) and its implementing regulations, the Forest Service is to (1) recognize wilderness as a use of the forests and (2) maintain the diversity of plant and animal communities (biological diversity). The Forest Service must comply with the requirements of the National Environmental Policy Act of 1969 (NEPA). NEPA and its implementing regulations specify procedures for integrating environmental considerations through environmental analyses and for incorporating public input into the agency’s decision-making process. NEPA requires that a federal agency prepare a detailed environmental impact statement (EIS) for every major federal action that may significantly affect the quality of the human environment. The EIS is designed to ensure that important effects on the environment will not be overlooked or understated before the government makes a commitment to a proposed action. In planning and reaching decisions, the Forest Service must also comply with the requirements of other environmental statutes, including the Endangered Species Act, the Clean Water Act, the Clean Air Act, the Wilderness Act, and the Migratory Bird Treaty Act, as well as other laws, such as the National Historic Preservation Act. The Forest Service is subject to more than 200 laws affecting its activities and programs. Many laws governing national forest management and planning, dating back to the Organic Administration Act of 1897, have implied or stated that economics should be included in managing the national forests. 
The Forest and Rangeland Renewable Resources Planning Act of 1974 (known as RPA) requires the Forest Service to (1) periodically analyze trends in supply and demand and to report on investment opportunities in comprehensive assessments of the nation’s renewable resources conducted every 10 years; (2) discuss investment priorities and provide data for examining cost accountability in programs prepared every 5 years to respond to the trends and opportunities identified in the assessments; (3) use an interdisciplinary approach, including economics, in land and resource management planning; and (4) consider economics and financing in building the transportation system for the National Forest System. NFMA added numerous subsections to RPA. As noted by the Congressional Research Service, under NFMA, the Forest Service is required to include economic factors both in general and in the following specific conditions: (1) when considering various resource management systems, (2) when determining where even-aged timber management is allowed, and (3) when identifying lands not suited for timber production. In addition, RPA requires that the Forest Service prepare an annual report assessing its accomplishments and progress in implementing the RPA program. NFMA requires that the annual report include a comparison of returns to the government with estimated expenditures for a representative sample of timber sales. The Federal Land Policy and Management Act of 1976, as amended, generally requires federal agencies to obtain fair market value for the use of federal lands. This act and the Mineral Leasing Act, as amended, generally require federal agencies to obtain fair market value for the use of federal lands for rights-of-way for oil and gas pipelines, power lines, and communications lines. 
Title V of the Independent Offices Appropriation Act of 1952, as amended, authorizes federal agencies to issue regulations to assess a fair fee for a service or thing of value provided to an identifiable recipient beyond that provided to the general public. The Office of Management and Budget’s (OMB) Circular A-25 implements the Independent Offices Appropriation Act’s fee requirements. The circular classifies charges into two categories—special services and leases or sales. When providing special services, such as reviewing and processing permits or leases, federal agencies are to recover the costs of providing the services, resources, or goods. When the government sells or leases goods, resources, or real property, agencies are to establish user fees to recover the fair market value of the goods, resources, or services provided. Under the provisions of the Independent Offices Appropriation Act and OMB Circular A-25, federal agencies are to obtain fair market value in the absence of specific legislation to the contrary. The Forest Service is prohibited by law from obtaining a fair return for certain goods or recovering costs for certain services. For example, the agency provides recreation through numerous recreation facilities that it manages directly, including about 3,000 campgrounds, over 120,000 miles of hiking trails, and thousands of picnic areas and boating sites. However, according to the Land and Water Conservation Fund Act of 1964 (P.L. 88-578), the Forest Service can charge fees only for the use of (1) boat launching facilities that offer services such as mechanical or hydraulic boat lifts and (2) campgrounds that offer certain amenities such as toilet facilities, drinking water, refuse containers, and tent or trailer spaces. The House Committee on the Budget has an ongoing interest in the Forest Service’s management of the nation’s 155 forests, including efforts by the agency to be more cost-effective and businesslike in its operations. 
To assist the Committee in its deliberations and oversight, the Chairman asked us to identify (1) the lessons that can be learned from efforts by nonfederal land managers to generate revenue and/or become financially self-sufficient from the sale or use of natural resources on their lands and (2) legal and other barriers that may preclude the Forest Service from implementing similar efforts on its lands. As agreed with the Chairman’s office, we limited our review to the efforts of seven judgmentally selected nonfederal land managers located throughout the United States (see fig. 1.1): (1) the approximately 2.9 million acres of trust lands managed by Washington State’s Department of Natural Resources; (2) the 1.6 million-acre Fort Apache Indian Reservation in Arizona, home to the White Mountain Apache tribe; (3) the 125 parks, sites, and natural areas, encompassing over 669,000 acres, managed by the Texas Parks Division of the Texas Parks and Wildlife Department; (4) the 201,000-acre Deseret Land and Livestock ranch located in Utah and owned and managed by the Church of Jesus Christ of Latter-day Saints; (5) The Nature Conservancy’s 55,000-acre Niobrara Valley Preserve in Nebraska; (6) the National Audubon Society’s 27,000-acre Paul J. Rainey Wildlife Sanctuary in Louisiana; and (7) International Paper’s 16,000-acre Southlands Experiment Forest in Georgia. These land managers were selected primarily because they appeared to be (1) generating revenue or making a profit from one or more of the six renewable surface uses that the Forest Service is legislatively mandated to sustain on its lands and/or from nonrenewable subsurface resources that the agency is required to consider in its planning and (2) maintaining the long-term health of the land and resources by emphasizing environmental management and protection. 
To identify the lessons that can be learned from efforts by nonfederal land managers to generate revenue and/or become financially self-sufficient from the sale or use of natural resources on their lands, we interviewed officials and obtained and reviewed relevant documents and data on their (1) revenue-generating programs and activities, (2) missions and goals, (3) degree of financial self-sufficiency, (4) environmental protection and management, and (5) accountability for expenditures and results. The managers’ revenue-generating programs and activities are summarized in appendix I and discussed in more detail in appendixes II through VIII. To identify legal and other barriers that may preclude the Forest Service from implementing similar efforts on its lands, we relied extensively on prior GAO reports and testimonies. In addition, we provided the agency and the Department of Agriculture’s Office of General Counsel with, and received comments on, the approaches and techniques being used by the nonfederal land managers included in our review to generate revenue and/or become financially self-sufficient from the sale or use of natural resources on their lands. We also interviewed, and obtained and reviewed relevant documents and data from, responsible officials in Forest Service headquarters (Washington Office) as well as on selected forests, including the Apache-Sitgreaves in Arizona and the Wasatch-Cache in Utah. We performed our work from October 1996 through January 1998 in accordance with generally accepted government auditing standards. In conducting our work, we did not independently verify or test the reliability of the data provided by the nonfederal land managers or the Forest Service. We provided each of the nonfederal land managers with a draft of the appendix discussing their particular effort and made changes in response to their comments. We then obtained comments on a draft of the entire report from the Forest Service. 
The agency’s comments are presented in appendix IX. Of the seven uses that the Forest Service is legislatively mandated to sustain or consider in its decision-making, the seven nonfederal land managers whose efforts we reviewed are generating revenue from one or more of five—timber, outdoor recreation, wildlife and fish, rangeland, and subsurface resources. While not always attaining financial self-sufficiency, these nonfederal land managers are employing a variety of sometimes innovative approaches and techniques to generate revenue or reduce costs from the sale or use of natural resources on their lands. Rather than applying a one-size-fits-all approach or technique, the managers have (1) usually tailored their efforts to meet either a clear mission to make a profit over time or an incentive to generate revenue for other mission-related goals and objectives and (2) often been delegated the discretion and flexibility to make choices while being held accountable for their expenditures and results. Although most of the nonfederal land managers whose efforts we reviewed are attempting to increase revenue and/or decrease costs from the sale or use of natural resources on their lands, their success in becoming financially self-sufficient has varied and their revenue has not always covered the costs of providing the goods or services. Moreover, many of the managers are also generating revenue from programs and activities not related to natural resources, including a casino, commercial real estate, land sales, contributions, and investments. However, some of the more innovative approaches and techniques being employed by these land managers appeared to increase revenue or decrease costs from the sale or use of natural resources on certain lands or under certain conditions. Timber and related activities generated the most revenue for Washington State’s Department of Natural Resources and for International Paper on its Southlands Experiment Forest. 
Timber also generated most of the revenue that the White Mountain Apache tribe derived from natural resources. Natural gas production was the dominant revenue-generating use on the Audubon Society’s Paul J. Rainey Wildlife Sanctuary, livestock grazing generated the most revenue on The Nature Conservancy’s Niobrara Valley Preserve, livestock grazing and recreational hunting for big-game wildlife species provided virtually all of the revenue on the Church of Jesus Christ of Latter-day Saints’ Deseret Land and Livestock ranch, and recreational entrance and user fees produced the most revenue for the Texas Parks Division. Some of the more innovative approaches and techniques being employed to generate revenue or reduce costs from the sale or use of natural resources are discussed below. Managers from Washington State’s Department of Natural Resources and International Paper’s Southlands Experiment Forest emphasize the production of goods and services on some lands or within certain programs or functions while setting aside other lands, programs, or functions for non-revenue-generating activities, such as conservation, resource protection, and research. For example, the department divides its lands and programs between those that are managed primarily to generate long-term sustainable revenue for its trust beneficiaries and those that are managed primarily to meet regulatory objectives for the protection of resources on public and private lands. The department has also developed several programs to (1) transfer, sell, or exchange trust lands that have a low capability of generating revenue or are more suited for conservation or non-revenue-generating recreation and (2) purchase or otherwise acquire replacement lands capable of generating revenue. 
As a result, between 1981 and 1994, the department transferred, sold, exchanged, purchased, or otherwise acquired 355,000 acres, or 11 percent of its land base, including transferring about 59,000 acres from commodity production to conservation status since 1989. The state legislature provided funds to the department to compensate the trust for the fair market value of the lands transferred to conservation status. In addition, officials from Washington State’s Department of Natural Resources informed us that as opportunities become available, they attempt to optimize short- and long-term income within acceptable levels of risk by shifting to the highest and best land uses in selected geographic areas. For instance, during the past 25 years, the agency has converted more than 34,000 acres of drylands to higher revenue-producing lands by competitively leasing lands for commercial uses and by replacing livestock with more profitable agricultural uses, such as growing wheat and other dryland grains. While International Paper manages Southlands Experiment Forest to generate revenue, the company’s annual budget separates the forest’s research and policy functions from the forest’s revenue-generating timber operations. This separation reflects the company’s recognition that the forest’s research and policy functions sometimes require operational decisions that do not seek to maximize revenue. Managers on the White Mountain Apache tribe’s Fort Apache Indian Reservation, the Church of Jesus Christ of Latter-day Saints’ Deseret Land and Livestock ranch, and the Southlands Experiment Forest were managing game species as a profitable resource. For example, a hunter can pay over $24,000 on the Fort Apache Indian Reservation and up to $8,500 on the Deseret ranch for a trophy bull elk, and the hunts generate more than $850,000 in annual income for the tribe and most of the $340,000 in annual net income from Deseret’s wildlife program. 
About 25 percent of Southlands’ revenue is generated by recreational hunting, a share that, according to International Paper officials, reflects the company’s efforts to generate revenue from growing timber stands. Although grazing revenue varies with such factors as weather conditions and the price of beef cattle, the livestock grazing programs on Deseret and The Nature Conservancy’s Niobrara Valley Preserve contributed significantly to the land units’ financial self-sufficiency. For example, on average, livestock grazing provides at least 80 percent of Niobrara’s total revenue, and, over about the last 10 years, revenue from all activities on the preserve has been sufficient to cover both operating and capital costs, other than the costs to acquire the land, which were paid by the Conservancy. Deseret and Niobrara are two of a small but growing number of ranches that practice what is often referred to as “time-control” or “time-managed” grazing. On Deseret, this management practice involves developing an annual written plan to (1) set the time of year and limit the length of time that cattle are allowed to graze in an area by moving them among fenced pastures rather than allowing them to graze on open rangeland and (2) rest pastures every year by not allowing cattle to graze on them. Trust lands managed by Washington State’s Department of Natural Resources are funded from total revenue and generate considerable net income for the trust beneficiaries, primarily from timber sales and related activities. The department has initiated several efforts to increase net income from its timber program. For example, according to department officials, they have increased timber revenue by identifying and marketing high-value trees, such as those that can be used as utility or transmission poles or as logs for the log home industry (“merchandising” timber). 
Since 1990, this practice has generated about $41 million in additional revenue at a cost of about $2 million in staff salaries. The department is also performing both precommercial and commercial thinning, and, to a lesser extent, pruning and fertilizing timber stands to spur tree growth. In fiscal year 1996, tree sales resulting from commercial thinning generated some $17 million in revenue. In addition, according to officials from Washington State’s Department of Natural Resources, they have (1) increased the efficiency of their timber sale appraisal system by adopting an approach that looks only at prior comparable sales; (2) stopped reimbursing contractors for constructing logging roads, thus reducing the costs to monitor the roads’ construction as well as avoiding reimbursing contractors for inefficient road construction practices; (3) initiated lump-sum bidding procedures in which all timber within a stand is sold, thus lowering the costs of monitoring the buyer’s removal of timber; (4) replaced oral bidding of timber sales with sealed bids to avoid artificially suppressing the highest bid value; and (5) pilot-tested contracting with a company to harvest timber and then having the department, rather than the company, market the logs. According to department officials, this last effort—called contract logging—has increased the department’s return by eliminating the middle man. It also gives the department more control over the timing and environmental impact of logging operations. Since fiscal year 1994, managers of Texas state parks (1) have increased entrance and campground fees, sometimes by 100 percent; (2) are managing retail stores previously contracted to concessionaires and have opened new ones; (3) have installed park-leased soft drink machines; and (4) have increased the number of fee-based interpretative and tourist-oriented programs. 
As a result, park-generated net income—primarily from entrance and campground fees—grew from $14.8 million in fiscal year 1993 to $18.5 million in fiscal year 1995, an increase of 25 percent. Similarly, the White Mountain Apache tribe charges fees for amenity-based recreation on the Fort Apache Indian Reservation, including hiking, camping, boating, river rafting, and snow skiing. In addition, the tribe requires outdoor recreation permits to travel on the reservation’s unpaved roads. As a result, recreational fees provide a relatively stable source of revenue to the tribe. A use that the Forest Service is legislatively mandated to sustain—watersheds and water flows—played an important role in generating revenue from other uses, such as irrigating rangelands; sustaining recreational fishing; and providing boating, river rafting, and other outdoor recreational activities. For example, according to the manager of the Deseret Land and Livestock ranch, the ranch is financially self-sufficient, in part, because it has a substantial water right that predates Utah’s statehood, as well as most other state water rights. The water is used to irrigate pastures that represent less than 4 percent of the ranch’s acreage but provide over 55 percent of the total cattle forage. Recognizing the importance of water to generating revenue from other uses, Washington State’s Department of Natural Resources has obtained, and continues to pursue, water rights from the state’s water and irrigation districts, as well as other surface water and groundwater irrigation rights, and has contracted for water from the federal Columbia Basin Irrigation Project. The department has also developed irrigation infrastructure (drilling wells and laying pipes). 
The water has been used to convert many acres of drylands to irrigated farmlands, grape vineyards, and apple orchards and to significantly increase the earning potential of the department’s agricultural lands within central Washington by replacing some livestock with more profitable agricultural crops. As a result of these and other efforts, revenue from the department’s agricultural program has grown by nearly 200 percent in the last 15 years, according to department officials. Both Washington State’s Department of Natural Resources and the White Mountain Apache tribe have entered into agreements with federal regulatory agencies to reduce costs and to provide more regulatory certainty and predictability to revenue-generating timber and other programs. Specifically, in January 1997, the department signed a habitat conservation plan with two federal regulatory agencies. This plan covers 1.6 million acres, or 76 percent of the department’s 2.1 million acres of forestland. The agreement includes a “no surprise policy” under which the federal government will not ask for more land or mitigation funding from the state even if a species protected by the plan continues to decline. Furthermore, the subsequent listing of a species as endangered or threatened under the Endangered Species Act will not result in additional mitigation requirements. Similarly, the White Mountain Apache tribe has assumed responsibility from federal regulatory agencies for accommodating the objectives of federal environmental laws, especially the Endangered Species Act, on the Fort Apache Indian Reservation. 
In December 1994, the Chairman of the tribe and the Director of the Department of the Interior’s Fish and Wildlife Service signed an innovative statement of relationship between the tribe and the agency that recognizes the tribe as the primary manager of the reservation with the institutional capability to ensure that economic activity does not have an adverse impact on species listed under the Endangered Species Act, as well as on sensitive wildlife and plants. In addition, in June 1997, the Secretary of the Interior signed an order that clarifies the responsibilities of the Department when the implementation of the Endangered Species Act affects federally recognized Indian lands, tribal trust resources, or the exercise of tribal rights. The order contains a provision stating that the United States defers to tribal conservation management plans. Both federal regulatory and tribal officials agree that these agreements will greatly reduce the time and costs associated with accommodating environmental objectives. Other efforts to increase net income through savings included reducing the number of salaried employees by increasing the use of volunteers and prison inmates. For instance, in fiscal years 1994 and 1995, Texas state park managers reduced the number of salaried employees and increased the number of campground volunteers and hosts. They also increased their use of prison inmates to perform routine cleaning, renovation, and improvements at park facilities, as well as other services. Finally, they reduced the number of months worked by seasonal employees. In fiscal year 1995, volunteers donated about 490,000 hours of work valued at $2.6 million, equal to the work of about 238 full-time employees. The estimated value of the inmates’ labor was about $2.4 million over 2 fiscal years, according to Texas Parks Division officials. 
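The volunteer labor figures above can be cross-checked with simple arithmetic; a minimal sketch, assuming a standard 2,080-hour full-time work year (an assumption, since the report does not state the conversion factor it used):

```python
# Cross-check of the Texas state parks volunteer figures reported above.
# Assumption: one full-time-equivalent (FTE) year is roughly 2,080 hours;
# the report does not state the conversion it applied.
volunteer_hours = 490_000      # hours donated in fiscal year 1995
volunteer_value = 2_600_000    # estimated dollar value of that work

value_per_hour = volunteer_value / volunteer_hours  # implied valuation per hour
fte_equivalent = volunteer_hours / 2_080            # roughly 236, close to the report's 238

print(f"${value_per_hour:.2f} per hour, about {fte_equivalent:.0f} full-time employees")
```

The small gap between this estimate (about 236 FTEs) and the report's 238 suggests the agency used a slightly different hours-per-employee figure.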
None of the approaches or techniques for increasing revenue or decreasing costs used by the nonfederal land managers in our review were legislatively mandated or otherwise required. Rather, these approaches and techniques usually resulted because the managers had either a clear mission to make a profit over time or an incentive to generate revenue for other mission-related goals and objectives. Moreover, many of the approaches or techniques seemed to be applicable in only certain geographical areas or under certain conditions, thus requiring that the nonfederal managers be given the discretion and flexibility to make choices while being held accountable for their expenditures and results. The primary goal of private businesses, such as the Deseret Land and Livestock ranch and International Paper, is to make a profit. For example, Deseret ranch is expected not only to be financially self-sufficient but also to earn a 5-percent return on investment on its operations. Armed with this clear mission priority, managers on both the ranch and International Paper’s Southlands Experiment Forest have initiated efforts to increase revenue and decrease costs. Having a clear mission priority to generate long-term sustainable revenue has produced similar results for public agencies. For example, trust lands and programs managed by Washington State’s Department of Natural Resources are funded from total revenue and generate considerable net income for the designated trust beneficiaries. Conversely, lands managed by the department that have been set aside for conservation and non-revenue-generating recreation, as well as programs to protect public resources, are not expected to generate revenue and are supported primarily by legislatively appropriated funds. Businesses and agencies that emphasize making a profit often establish incentives to increase revenue. 
For example, Deseret ranch’s employees have two financial incentive plans—one based on the ranch’s net income and the other based on annual, individual performance goals. Lands and programs managed by Washington State’s Department of Natural Resources to generate long-term sustainable revenue for the designated trust beneficiaries are funded solely from a percentage of the total revenue they generate, thus providing employees with an incentive to maximize revenue. And, when the Texas Parks and Wildlife Department established a financial incentive by returning a portion of any increased revenue or decreased costs to the park where the revenue or savings was generated, state park managers responded by increasing revenue by 25 percent and reducing expenditures for operations by almost 10 percent over 2 years, according to Texas Parks Division officials. The need to generate revenue for other mission-related goals and objectives can also provide an incentive. For example, the Niobrara Valley Preserve can spend money to fulfill The Nature Conservancy’s biological diversity goal only when it raises money. In addition, the Conservancy requires all of its preserves to strive for financial self-sufficiency and allows them to retain most of the revenue that they generate. Since bison and cattle, on average, provide at least 80 percent of the preserve’s total revenue, Niobrara has an incentive to generate revenue from grazing. Virtually all of the nonfederal land managers whose efforts we reviewed have the discretion and flexibility to (1) explore innovative entrepreneurial ideas or conduct research to increase profits and (2) choose where and when to apply the results. One result has been that they have tailored their approaches and techniques for generating revenue or reducing costs to their particular geographical areas or conditions. 
However, this freedom to make choices is often accompanied by oversight by the parent organization, the beneficiaries of the revenue generated, or others to ensure accountability for expenditures and results. For example, the managers of the Deseret Land and Livestock ranch and the Southlands Experiment Forest have the freedom to try innovative approaches and techniques to increase net income. However, the ranch manager is held accountable for his expenditures and results by the church and the forest manager by the company and its stockholders. Washington State’s Department of Natural Resources is held accountable for the expenditures and results of its management of state trust lands by the designated trust beneficiaries, including state school districts, colleges, and universities, as well as other public agencies and charitable institutions within the state. Thus, when the department makes a decision that may reduce current income and return on investment over the short term, it must show these beneficiaries that it has exercised skill and care in protecting trust resources (the “prudent person” doctrine), ensured equal treatment for all generations (the “intergenerational equity” principle), and not foreclosed reasonably foreseeable future sources of income by actions taken today. Each of these principles may reduce current income and return on investment over the short term but may be viewed over the long term as having been the most prudent course. Washington State’s Department of Natural Resources also has two initiatives under way that address accountability. One is the Asset Stewardship Program, which is to look at the current and possible future mix of assets to determine which mix will best generate long-term revenue for the trust beneficiaries. 
As part of this initiative, the department plans to (1) set standards for evaluating the mix of assets on the basis of their profitability, biological diversity, carrying capacity, and overall positioning and (2) develop measurement tools to monitor the assets’ ecological, social, and economic performance. The agency also hopes to develop a longer-term management framework that will give its managers flexibility to respond to future population and other changes that affect the management of state lands and programs. The second effort is the department’s March 1997 long-term Strategic Plan—the “10-Year Direction”—which, among other things, sets out major goals, objectives, and specific strategies to achieve them. According to department officials, the plan (1) will be consistent with a statewide performance budgeting system now being developed and (2) parallels the Asset Stewardship Plan by identifying specific targets for managing various trust assets. Some of the approaches and techniques being employed by the nonfederal land managers whose efforts we reviewed appear to have the potential to increase revenue or decrease costs from the sale or use of natural resources on certain Forest Service lands, or within certain programs and activities, under certain conditions. The Forest Service is, to a limited extent, employing a few of these approaches and techniques, such as performing both precommercial and commercial thinning on some lands suitable for commercial timber harvesting, and is not prohibited by law from using other approaches and techniques, including selling logs and other cut roundwood products. In addition, the agency had reduced staffing from about 46,000 permanent positions in fiscal year 1992 to about 39,400 in fiscal year 1996, or by about 14 percent. However, generating revenue and reducing costs are not mission priorities for the agency, and managers lack both flexibility to make choices and accountability for results. 
The low priority assigned to increasing revenue and decreasing costs results, in part, from the importance or emphasis given to ecological, social, and other values and concerns. Statutory language implies that maximizing revenue should not be the overriding criterion in managing the national forests. Requirements in environmental and planning laws and their judicial interpretations have increasingly required the Forest Service to shift its emphasis from uses that generate revenue to those that do not. In recent years, legislative and administrative decisions have set aside or withdrawn an increasing percentage of Forest Service lands for conservation and, in keeping with the existing legislative framework, the Forest Service is moving away from, rather than toward, financial self-sufficiency. The agency is required to continue providing certain goods and services at less than their fair market value. Finally, certain congressional expectations and revenue-sharing provisions serve as disincentives to either increasing revenue or decreasing costs. When the Congress has provided the Forest Service with the authority to obtain fair market value for goods or recover costs for services, the agency often has not done so. The Forest Service also has not always acted to contain costs, even when requested to do so by the Congress. Underlying these shortcomings is the failure to hold the agency adequately accountable for its performance in increasing revenue or decreasing costs. The Forest Service’s recent strategic plan, which is intended to form the foundation for holding the Forest Service accountable for its performance, contains no goals or performance measures for obtaining fair market value or for reducing or containing costs. “Multiple use means . . . 
that some land will be used for less than all the resources; and harmonious and coordinated management of the various resources, each with the other, without impairment of the productivity of the land, with consideration being given to the relative values of the various resources, and not necessarily the combination of uses that will give the greatest dollar return or the greatest unit output.” (Emphasis added by GAO.) Thus, according to the Congressional Research Service, the Congress expected economic values to affect the management of the national forests but “specifically ruled out maximizing receipts or outputs as the overriding economic criterion.” In addition, the National Forest Management Act of 1976, which provides guidance for forest planning, requires the Secretary of Agriculture to promulgate regulations that “insure that timber will be harvested from National Forest System lands only where . . . the harvesting system to be used is not selected primarily because it will give the greatest dollar return or the greatest unit output of timber. . . .” Thus, according to the Congressional Research Service, the act provides that maximizing returns or volume cannot be the only criterion for determining the harvesting system to be used. Requirements in environmental and planning laws and their judicial interpretations limit the Forest Service’s ability to generate revenue. In particular, section 7 of the Endangered Species Act represents a congressional design to give greater priority to the protection of endangered species than to the primary missions of the Forest Service and of other federal agencies. When proposing a project, such as a timber sale, the Forest Service bears the burden of demonstrating that its actions will not be likely to jeopardize listed species. 
Other laws enacted primarily during the 1960s and 1970s—such as the Clean Water Act, the Clean Air Act, the Migratory Bird Treaty Act, and the National Forest Management Act—and their judicial interpretations and implementing regulations also establish minimum requirements for these components of natural systems. In response to these requirements, the Forest Service has, during the last 10 years, increasingly shifted the emphasis under its broad multiple-use and sustained-yield mandate from revenue-generating uses (primarily producing timber) to uses that do not generate revenue (primarily sustaining wildlife and fish and their habitats). For example, in the states of Washington, Oregon, and California, federal lands, managed primarily by the Forest Service, represent almost half (47.8 percent) of the total lands suitable for commercial timber harvesting. In western Washington State, western Oregon, and northern California, 24.5 million acres of federal land were available for commercial timber harvesting. However, about 7.6 million acres, or 31 percent of the available acreage, have been set aside or withdrawn as habitat for species that live in old-growth forests, including the threatened northern spotted owl, or as riparian reserves to protect watersheds. To protect the forests’ health, only limited timber harvesting and salvage timber sales are allowed in some of these areas. In addition, requirements for maintaining biological diversity under the National Forest Management Act—as well as for meeting standards for air and water quality under the Clean Air and Clean Water acts, respectively—may limit the timing, location, and amount of harvesting that can occur. Moreover, harvests from these lands could be further reduced by plans to protect threatened and endangered salmon. 
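The set-aside share cited above follows directly from the acreage figures in the text; a minimal sketch verifying it:

```python
# Verify the set-aside share reported for western Washington, western Oregon,
# and northern California (acreage figures taken from the text above).
available_acres = 24.5e6        # federal land available for commercial timber harvesting
set_aside_acres = 7.6e6         # withdrawn as old-growth habitat and riparian reserves

share = set_aside_acres / available_acres
print(f"{share:.0%}")  # about 31 percent, matching the report
```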
Requirements in environmental and planning laws have also necessitated the use of more costly and time-consuming timber-harvesting methods. For example, in June 1992, the Forest Service announced plans to reduce the amount of timber harvested by clear-cutting by as much as 70 percent from fiscal year 1988 levels in order to manage the national forests in a more environmentally sensitive manner. This policy change has increased the timber program’s costs, since clear-cutting is a relatively economical method of harvesting. However, according to the Forest Service, these increased costs may be offset to some unknown degree by reductions in the number of administrative and legal challenges to individual timber sales. The Forest Service’s ability to generate revenue is not static and changes on the basis of new information and events, such as the listing of a species as endangered or threatened; the results of analyses, monitoring, and evaluation; and new judicial interpretations. For example, the Forest Service is required by the National Environmental Policy Act to assess activities occurring outside the national forests in deciding which uses to emphasize on its lands. A January 1997 habitat conservation plan between Washington State’s Department of Natural Resources and two federal regulatory agencies and similar agreements—which now cover 18 million acres of state and private land—require that any additional mitigation deemed necessary to protect listed species covered by the plans first be accomplished on federal lands. Therefore, while these agreements are expected to reduce costs and to provide more regulatory certainty and predictability on nonfederal lands, they may increase costs and regulatory uncertainty on Forest Service lands. 
Some Forest Service officials believe that future assessments are likely to show that the national forests are assuming a growing proportion of the responsibility for protecting wildlife and fish and that endangered and threatened species and their habitats are increasingly being concentrated on federal lands. In recent years, legislative and administrative decisions have set aside or withdrawn an increasing percentage of Forest Service lands for conservation. In keeping with the existing legislative framework, the Forest Service’s management approach has increasingly emphasized non-revenue-generating uses over other uses that can and have generated revenue. For example, an increasing percentage of Forest Service lands has been set aside by the Congress or administratively withdrawn for conservation—as wilderness, wild and scenic rivers, national monuments, and recreation. Only limited timber sales and oil and gas leasing—both of which are usually offered in competitive auction—are allowed in some of these areas. In 1964, less than 9 percent (16 million acres) of the national forests’ acreage was managed as wilderness, wild and scenic rivers, and national monuments and for recreation. By 1994, this figure had increased to 26 percent (almost 50 million acres). (See fig. 3.1.) According to the Forest Service, of the 96 million acres within national forests that contain timber suitable for commercial harvesting, 49 million acres, or 51 percent, have been approved for timber harvesting in the agency’s forest plans. Another 35 million acres, or 36 percent, have been approved for other uses, such as wildlife habitat and soil and watershed management, while the remaining 12 million acres, or 13 percent, have been formally withdrawn for other uses, such as wilderness areas. (See fig. 3.2.) Most of the federal acreage that has been set aside for conservation is located in 12 western states. 
In western Washington, western Oregon, and northern California, 11.4 million acres—or 47 percent of the 24.5 million acres of federal land available for commercial timber harvesting—have been set aside or withdrawn for conservation. When the roughly 7.6 million acres in these three states that have been set aside or withdrawn as habitat for old-growth forest species and as riparian reserves are added, 77 percent of the federal lands in the three states that were available for commercial timber harvesting have been set aside or withdrawn, primarily to meet environmental requirements or achieve conservation purposes. Setting aside lands for environmental or conservation purposes has reduced both the volume of timber sold from Forest Service lands and receipts from timber sales. Timber volume and receipts have also been reduced by (1) an increasing knowledge of the importance of naturally functioning systems—such as watersheds, airsheds, soils, and vegetative and animal communities—to the long-term sustainability of other forest uses, including timber production and (2) an increasing recognition that past Forest Service management decisions have led to degraded aquatic habitats, declining populations of some wildlife species, and increased forest health problems. In addition, the thrust of the Forest Service’s timber sales program is changing from primarily supplying commercially valuable timber to the wood-using industry in response to the nation’s demand for wood to using timber sales as a “tool” for achieving land stewardship objectives that require manipulating the existing vegetation. 
To achieve a land stewardship objective—such as promoting the forests’ health, creating desired wildlife habitat, and reducing fuels and abnormally dense undergrowth that have accumulated in many forests and have increased the threat of unnaturally catastrophic fires—often necessitates preparing sales that include a mixture of both low- and high-value material, further reducing receipts from timber sales. Historically, the volume of timber sold from Forest Service lands in western Washington, western Oregon, and northern California constituted from a third to a half of all Forest Service timber sales. However, the volume of timber sold from this region declined from 4.3 billion board feet in 1989 to 0.9 billion board feet in 1994, a decrease of about 80 percent. Nationwide, the volume of timber sold from Forest Service lands decreased from over 11.3 billion board feet in 1988 to 3.4 billion board feet in 1996, a decrease of about 70 percent. (See fig. 3.3.) During this time, timber sales receipts decreased from $1.4 billion to $0.6 billion, or by 57 percent. (See fig. 3.4.) Like the acreage available for timber harvesting, the acreage available for oil and gas leasing has declined. According to a 1997 study by a consortium of oil and gas trade and professional associations, the amount of federal land in eight western states open to oil and gas leasing declined from 114 million acres in 1983 to fewer than 33 million acres in 1997, a drop of more than 60 percent. Of the 82 million acres of Forest Service land included in the study, 71 million acres, or 86 percent, are subject to a restriction that either forbids oil and gas development entirely or imposes stringent requirements on surface occupancy and access, according to the consortium. As the acreage available for commodity uses has decreased, the American public has increased its recreational use of the national forests substantially, according to the Forest Service. 
This demand is expected to increase steadily over the next 50 years, requiring the agency to spend more time and resources on this use. However, the Forest Service is currently prohibited by law from charging fees for the use of most recreational sites and areas and from obtaining fair market value for the use of other areas that it manages directly. The decision not to charge fees for the use of most recreational sites and areas directly managed by the Forest Service reflects a long-standing philosophy of free access to public lands. Other legislative requirements that limit the generation of revenue on Forest Service lands also reflect this philosophy or a desire to promote the economic stability of certain historic commodity uses. As a result, the Forest Service is required to continue to provide certain goods and services at less than their fair market value. The number of visitor days in national forests has grown from about 25 million in 1950 to over 340 million in 1996. (See fig. 3.5.) Compared with timber and minerals, recreation generates substantially less revenue. According to the Forest Service, it collected only about 7 cents per visit in receipts and special use fees in 1993. Among the factors contributing to this low rate of return is that the agency is prohibited by law from charging fees for the use of most recreational sites and areas that it manages directly. In addition, the Omnibus Parks and Public Lands Management Act of 1996 (P.L. 104-333) included a new fee system for ski areas that was developed by the ski industry. As noted in an April 1993 report, this system does not ensure that fees collected from ski areas reflect fair market value. The Forest Service’s inability to obtain a fair return for the recreational opportunities provided on national forests can distort comparisons of revenue and operating costs. 
For example, in a June 1997 report, we compared the operations of a state forest (the Bladen Lakes) and two national forests (the Nantahala and Pisgah) in North Carolina. The state forest generated enough revenue to make it almost financially self-sufficient, while the two national forests generated enough to cover only about 4 percent of their operating costs. Whereas the state forest emphasized the sale of timber and other forest products, the national forests emphasized the provision of non-revenue-generating visitor services. The Forest Service manages half of the nation’s big-game and coldwater fish habitat. However, federal statutes and regulations have narrowly defined the instances in which the Forest Service can charge fees for noncommercial recreational activities, such as hunting and fishing, on its lands, and the agency generally defers to state laws regulating these activities. For example, while a hunter can pay over $24,000 on the Fort Apache Indian Reservation and up to $8,500 on the Deseret Land and Livestock ranch for a trophy bull elk, Forest Service managers on the Apache-Sitgreaves and Wasatch-Cache forests—which abut the reservation and the ranch, respectively—cannot charge individuals for hunting on their lands. Thus, while receipts from trophy bull elk hunts on the 1.8 million acres within the Fort Apache Indian Reservation and the Deseret ranch totaled about $1.2 million a year (about 66 cents an acre), outfitter-guide operations, including big-game hunting, on the 192 million acres of Forest Service lands generate only $2 million a year (about 1 cent per acre). Other legislative requirements—reflecting a philosophy of free access to public lands or a desire to promote the economic stability of certain historic commodity uses—also limit the generation of revenue on Forest Service lands. 
For example, the Mining Law of 1872 was enacted to promote the exploration and development of domestic mineral resources as well as the settlement of the western United States. Under the act’s provisions, the federal government receives no financial compensation for hardrock minerals, such as gold and silver, extracted from Forest Service and other federal lands. In 1990, hardrock minerals worth at least $1.2 billion were extracted from federal lands, while known, economically recoverable reserves of hardrock minerals remaining on federal lands were valued at $64.9 billion. In contrast, the 11 western states that lease state-owned lands for mining purposes impose a royalty on minerals extracted from those lands. The 104th Congress considered, but did not enact, several bills that would have imposed royalties on hardrock minerals extracted from federal lands. A bill to impose royalties on hardrock minerals extracted from federal lands has also been introduced in the 105th Congress. If the Congress were to adopt an 8-percent royalty on gross profits, as proposed in two bills in the 104th Congress, the Congressional Budget Office estimates that the government would receive $184 million in fiscal years 1998-2002.

Similarly, the formula that the Forest Service uses to charge for grazing livestock on its lands keeps fees low to promote the economic stability of western livestock grazing operators with federal permits. In a June 1991 report, we compared the existing grazing fee formula with alternatives that had been jointly developed by the Forest Service and the Bureau of Land Management. We noted that evaluating the soundness of any formula depends on the primary objective to be achieved and that deciding among objectives involves policy trade-offs more than analytical solutions. Nevertheless, we found that the fees were too low to cover the government’s costs of managing the grazing program.
Congressional expectations and revenue-sharing provisions have sometimes served as disincentives to either increasing revenue or decreasing costs from the sale or use of natural resources on Forest Service lands. Two examples are the establishment of annual output targets and the sharing of revenue before the costs of providing the goods or services are deducted. For instance, to prepare and administer timber sales, the Forest Service relies primarily on annual appropriations based on such criteria as the anticipated volume of timber to be offered for sale, and the Forest Service’s performance measures are based on the volumes of timber offered for sale. In addition, congressional expectations for the agency’s timber program are often expressed as timber sale targets. To meet these expectations and targets, the Forest Service may not always recover its costs to prepare and administer the sales.

When the Forest Service is allowed to retain a portion of the revenue it generates, it does so without deducting its costs, which are funded from annual appropriations. By law, states and counties also often share in revenue before deducting the full costs of providing the goods or services. Thus, neither the agency nor the states and counties have an incentive to control costs. For example, from fiscal year 1992 through fiscal year 1994, the Forest Service spent about $1.3 billion to prepare and administer timber sales. During that period, the agency collected nearly $3 billion in timber sales receipts. Instead of being required to return the money it spent to the Treasury, the Forest Service was allowed to retain about $1.7 billion, or 57 percent, in various funds and accounts for specific purposes, such as the reforestation of harvested areas, preparation and administration of salvage timber sales, removal of brush, control of erosion, and building of roads that provide access to the timber sales areas, as provided for by law.
Another $887 million, or 30 percent, was distributed to the states in which the forests are located. The funds can be used by the states to benefit roads and schools in the counties where the receipts were earned. The remaining $437 million was deposited in, or transferred to, the General Fund of the Treasury. (See fig. 3.6.)

Similarly, 50 percent of the total revenue from livestock grazing on national forests and grasslands is returned to the Forest Service to fund various range improvements, such as fences and water developments, and 25 percent is distributed to the states, even though the revenue does not cover the agency’s costs of managing its grazing program. Under the Mineral Leasing Act (30 U.S.C. 181 et seq., as amended), 50 percent of the revenue for federal onshore minerals is distributed to the state in which the production occurred. Another 10 percent is distributed to the General Fund of the Treasury, and the remaining 40 percent goes to a reclamation fund used for the construction of irrigation projects.

However, in 1991, after the passage of Interior’s appropriations bill, states receiving revenue from federal onshore minerals development began paying a portion of the costs to administer the onshore leasing laws—a practice known as “net receipts sharing.” Net receipts sharing became permanent with the passage of the Omnibus Budget Reconciliation Act of 1993, which effectively requires that the federal government recover from the states about 25 percent of the prior year’s federal appropriations allocated to mineral-leasing activities. In fiscal year 1996, 41 states received about $481 million in revenue from the development of federal onshore minerals. The states paid the federal government about $22 million as their portion of the costs to administer the onshore leasing laws. Sharing revenue after deducting these costs may provide a strong incentive for the states to ensure that the costs are contained or reduced.
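The distributions described above are simple percentage splits of gross revenue. As an illustrative sketch only (the function and share names are ours, and the dollar figures are the report's rounded totals, not an official allocation formula), the timber receipts distribution for fiscal years 1992-1994 and the Mineral Leasing Act shares can be checked with a few lines of arithmetic:

```python
# Illustrative arithmetic only; not an official allocation formula.
# Share names and the split() helper are hypothetical, for this sketch.

def split(total, shares):
    """Allocate a total among named percentage shares.

    Whatever is left after the named shares goes to "other"
    (the General Fund of the Treasury, in both examples below).
    """
    alloc = {name: total * pct for name, pct in shares.items()}
    alloc["other"] = total - sum(alloc.values())
    return alloc

# Timber sales receipts, FY 1992-1994 (dollars in billions, per the report):
# the agency retained 57 percent and 30 percent went to the states.
timber = split(3.0, {"retained_by_agency": 0.57, "states": 0.30})

# Mineral Leasing Act shares of each dollar of federal onshore mineral revenue:
# 50 percent to the state of origin, 10 percent to the General Fund,
# 40 percent to the reclamation fund.
minerals = split(1.0, {"state_of_origin": 0.50, "reclamation_fund": 0.40,
                       "general_fund": 0.10})
```

Note that because the report's retained and state totals ($1.7 billion and $887 million) are rounded, the computed shares differ slightly from the reported dollar amounts.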
The Congress has given the Forest Service the authority to obtain fair market value for some goods or to recover costs for some services. However, the agency has not always taken advantage of this authority, as the following examples from our prior work show.

In June 1997, we reported that the sealed bid auction method is significantly and positively related to higher bid premiums on timber sales. However, the Forest Service used oral bids at single-bidder sales rather than sealed bids, resulting in an estimated decrease in timber sales receipts of $56 million from fiscal year 1992 through fiscal year 1996.

In December 1996, we reported that, in many instances, the Forest Service has not obtained fair market fees for commercial activities on the national forests, including resort lodges, marinas, and guide services, or for special noncommercial uses, such as private recreational cabins and special group events. Fees for such activities are the second largest generator of revenue for the agency, after timber sales. The Forest Service’s fee system, which sets fees for most commercial uses other than ski operations, had not been updated for nearly 30 years and generally limited fees to less than 3 percent of a permittee’s gross revenue. In comparison, fees for similar commercial uses of nearby state-held lands averaged 5 to 15 percent of a permittee’s total revenue.

In December 1996, we also reported that although the Forest Service had been authorized to recover the costs incurred in reviewing and processing all types of special-use permit applications since as far back as 1952, it had not done so. On the basis of information provided by the agency, we estimated that in 1994 the costs to review and process special-use permits were about $13 million.

In April 1996, we reported that the Forest Service’s fees for rights-of-way for oil and gas pipelines, power lines, and communications lines frequently did not reflect fair market value.
Agency officials estimated that in many cases—particularly in high-value areas near major cities—the Forest Service may have been charging as little as 10 percent of the fair market value. The Forest Service has been aware for some time of the need to improve its efforts to obtain fair market value for goods or recover costs for services. However, it has studied and restudied issues without reaching closure. For example, in 1987 and 1995, the agency developed draft regulations that, if adopted, would have allowed forest managers to recover the costs incurred in reviewing and processing special-use permit applications. However, the draft regulations were never finalized or published because, according to Forest Service headquarters officials, the staff resources assigned to develop and publish the regulations were diverted to other higher-priority tasks.

The Forest Service has not always acted to contain costs, even when requested to do so by the Congress. Instead, consistent with its tendency to study and restudy issues without reaching closure, the agency has not established a clear sequence or schedule to improve its performance. Studies comparing federal and state costs to manage programs such as timber and leasable minerals have been frustrated by significant differences in legislative and regulatory requirements and guidance, types of lands managed, and funding sources. However, reviews of the Forest Service’s internal processes and procedures, as well as comparisons with the Bureau of Land Management’s operations, have identified opportunities to improve operational efficiency at virtually every organizational level within the Forest Service.
For example, in April 1997 we reported that, according to an internal agency report, inefficiencies within the Forest Service’s decision-making process cost up to $100 million a year at the individual project level alone and that delays in finalizing forest plans, coupled with delays in finalizing agencywide regulations and in reaching decisions for individual projects, can total a decade or longer. The process used by the Forest Service to revise the land management plan for the Tongass National Forest in southeastern Alaska illustrates the results of the agency’s not being held accountable for making timely, orderly, and cost-effective decisions. The Forest Service originally planned to spend 3 years revising the plan. At the end of 3 years, the agency had spent about $4 million. However, the Forest Service spent another 7 years and $9 million revising the plan.

Approved forest plans sometimes do not satisfy the requirements of environmental and planning laws. For example, from October 1992 through June 1996, the Forest Service paid almost $6.5 million in claims for timber sales contracts that were suspended or canceled to protect endangered or threatened species. As of October 1996, the agency had pending claims with potential damages of about $61 million, and it could incur at least an additional $198 million in damages. Some of the contracts were suspended or canceled because the Forest Service had not developed plans that satisfied environmental and planning requirements. Moreover, although the Bureau of Land Management had repeatedly revised its timber sales contract to minimize its liability when it must suspend or cancel a timber sales contract to protect threatened or endangered species, the Forest Service had not.
Since the late 1980s, the Forest Service had been developing new regulations and a new timber sales contract that would limit the government’s liability on canceled timber sales contracts and redistribute the risk between the agency and the purchaser. However, the Forest Service had not finalized either the regulations or the contract, and agency officials believe that additional congressional appropriations may be required to help pay for pending and future claims.

Similarly, the Forest Service could incur significant costs because the Eldorado National Forest in northern California failed to comply with the requirements of planning and environmental laws. Forest officials decided to proceed with a number of timber sales on the basis of cursory, out-of-date environmental assessments that did not adequately analyze the sales’ potential effects on fish, wildlife, plants, cultural resources, and water quality and did not consider significant new information, as required under regulations implementing the National Environmental Policy Act. The contracts that were awarded have since been suspended. As a result, the Forest Service could incur $30 million in potential damages.

Concerned with the escalating costs of the Forest Service’s timber program, the Congress, in fiscal year 1991, asked the agency to develop a multiyear program to reduce the costs of its timber program by not less than 5 percent per year. However, in April 1997, the Forest Service was preparing to undertake the third major examination of its timber program in the last 4 years. Meanwhile, the costs associated with preparing and administering timber sales remain significantly higher than in fiscal year 1991. (See fig. 3.7.) Similarly, in mid-1996, the Forest Service began a study to streamline its commercial and noncommercial recreation special-use permit process. However, similar attempts to improve the process had been made in prior years but had met with little success.
For example, a review by a National Task Force on Special-Use Management, conducted in 1993 and 1994, addressed issues similar to those in the Forest Service’s streamlining effort. The task force identified numerous problems with the program and suggested ways to streamline the permit process and make the program more consistent throughout the agency. But none of the task force’s recommended actions were adopted by the agency.

These and other findings led us to conclude in July 1997 that inefficiency and waste within the Forest Service have cost taxpayers hundreds of millions of dollars and that opportunities for economic gains have been lost through indecision and delay. We noted that past efforts by the Forest Service to change its behavior have not been successful and that decision-making within the agency is broken and in need of repair.

The Forest Service has not obtained fair market value for goods, recovered costs for services, or improved operational efficiency because it has not been held accountable for increasing revenue or decreasing costs. Holding it accountable would require measuring its performance against revenue-generating and cost-reducing goals and objectives. However, the Forest Service’s September 30, 1997, strategic plan, developed to comply with the requirements of the Government Performance and Results Act of 1993 (Results Act), contains no goals or performance measures for obtaining fair market value or for reducing or containing costs. In 1994, we suggested that obtaining a better return for the sale or use of natural resources on federal lands and finding ways to reduce costs should be considered in developing a strategy to reform the Forest Service and that the agency would need to work closely with the Congress to accomplish these objectives.
In our April 1997 report on the Forest Service’s decision-making and our July 1997 testimony on the Forest Service’s implementation of the Results Act, we noted that the act requires every executive department and agency to develop a strategic plan that includes long-term general goals and objectives—or strategic goals—that form the foundation for holding it accountable for its performance. The Department of Agriculture submitted its first strategic plan—which included a strategic plan for the Forest Service—to the Office of Management and Budget and the Congress on September 30, 1997, as the act required.

The Forest Service’s strategic plan contains no goals or performance measures for generating revenue or reducing costs. For instance, the plan’s objective for recreation does not identify a performance measure for obtaining a fair return for commercial and noncommercial recreation special-use permits. In addition, although the plan says that the agency intends to ensure that “taxpayers receive a fair return for the use and sale of wood fiber” from national forests, it does not identify any performance measures that could be used to hold the agency accountable for obtaining a fair return for timber and other forest products.

Holding a public agency accountable for increasing revenue or decreasing costs from the sale or use of natural resources is not without precedent. As noted in chapter 2, Washington State’s Department of Natural Resources is looking at its current and possible future mix of assets to determine which mix will best generate long-term revenue for its trust beneficiaries. As part of this effort, the department plans to (1) set standards for evaluating the mix of assets on the basis of their profitability, biological diversity, carrying capacity, and overall positioning and (2) develop measurement tools to monitor the trust assets’ ecological, social, and economic performance.
Among the Forest Service’s mission priorities, generating revenue and reducing costs rank below both protecting resources and providing goods and services. Efforts by the Forest Service to implement approaches and techniques to increase revenue or decrease costs—similar to many of the approaches and techniques being employed by the nonfederal land managers whose efforts we reviewed—would face a formidable array of statutory, regulatory, and other barriers. These barriers—which limit the Forest Service’s ability to move toward financial self-sufficiency and limit managers’ flexibility to make choices—include (1) language in federal statutes implying that maximizing revenue should not be the overriding criterion in managing the national forests, (2) requirements in environmental and planning laws necessitating a shift in the Forest Service’s management emphasis to uses that do not generate revenue, (3) legislative and administrative decisions setting aside an increasing percentage of Forest Service lands for conservation, (4) requirements that the agency continue to provide certain goods and services at less than their fair market value, and (5) disincentives embedded in laws and congressional expectations.

Although the Forest Service’s ability to generate revenue or recover costs is limited, the Congress has provided the agency with the authority to obtain fair market value for certain goods or recover costs for certain services. However, the Forest Service often has not done so, nor has it always acted to contain costs, even when requested to do so by the Congress. Underlying these shortcomings is the failure to hold the agency adequately accountable for its performance for increasing revenue or decreasing costs.
Revising the strategic plan that it developed to comply with the requirements of the Results Act to include goals and performance measures for obtaining fair market value and for reducing or containing costs would provide the necessary first step for holding the Forest Service accountable for its performance. If the Congress believes that increasing revenue or decreasing costs from the sale or use of natural resources should be mission priorities for the Forest Service, it will need to work with the agency to identify legislative and other changes that are needed to clarify or modify the Congress’s intent and expectations for revenue generation relative to ecological, social, and other values and concerns.

Because the Forest Service has not exercised its authority to obtain fair market value for certain goods and to recover costs for certain services and has not always acted to contain costs, even when requested to do so by the Congress, we recommend that the Secretary of Agriculture direct the Chief of the Forest Service to revise the strategic plan that the agency developed to comply with the requirements of the Results Act to include goals and performance measures for obtaining fair market value for goods, recovering costs for services, and containing expenses as the necessary first step in holding the Forest Service accountable for its performance.

We provided copies of a draft of this report to the Forest Service for its review and comment. The agency’s comments appear in appendix IX. The Forest Service (1) agreed with the report’s conclusions and recommendations, (2) stated that the report fairly presents relevant factors that must be understood when comparing land managers or land management within different time periods, and (3) noted that it has made some progress in increasing revenue and improving financial accountability.
We revised the report to recognize that the agency is employing some of the approaches and techniques used by the nonfederal land managers to increase revenue and has reduced staffing.
Pursuant to a congressional request, GAO reviewed: (1) the efforts by nonfederal land managers to generate revenue or become financially self-sufficient from the sale or use of natural resources on their lands; and (2) legal and other barriers that may preclude the Forest Service from implementing similar efforts on its lands. GAO noted that: (1) the nonfederal land managers whose efforts GAO reviewed--while not always attaining financial self-sufficiency--are using a variety of sometimes innovative approaches and techniques to generate revenue or reduce costs from the sale or use of natural resources on their lands; (2) none of the approaches or techniques are legislatively mandated or otherwise required; (3) rather, the land managers have: (a) usually tailored their efforts to meet either a clear mission to make a profit over time or an incentive to generate revenue for other mission-related goals and objectives; and (b) often been delegated the discretion and flexibility to explore innovative entrepreneurial ideas or conduct research to increase profits and to choose where and when to apply the results while being held accountable for their expenditures and performance; (4) generating revenue and reducing costs are not mission priorities for the Forest Service; (5) in keeping with its existing legislative framework, the agency is moving away from, rather than toward, financial self-sufficiency; (6) increasingly, legislative and administrative decisions--such as setting aside an increasing percentage of Forest Service lands for conservation as wilderness, wild and scenic rivers, and national monuments--and judicial interpretations of statutory requirements have required the Forest Service to shift its emphasis from uses that generate revenue to those that do not; and (7) furthermore: (a) the agency is required to continue providing certain goods and services--such as recreational sites, hardrock minerals, and livestock grazing--at less than their fair market value; and (b) certain congressional expectations and legislative provisions--including those that require sharing revenue before deducting the costs of providing the goods or services--serve as disincentives to either increasing revenue or decreasing costs.
Historically, before the invention of automated fingerprint identification systems, paper fingerprint cards were used by law enforcement agencies to report arrest information to state repositories and to the FBI. The process was time-consuming, given that the local arresting agency mailed the fingerprint cards to the state repository, which mailed the information to the FBI—and, in return, the FBI’s response (based on a search of national records) would be mailed back to the state repository, which would then mail the information to the local arresting agency. Automation offered the potential to reduce submission and processing times from weeks (or longer) to hours. According to the FBI, prior to IAFIS implementation, a 6-month turnaround time for responses from the national level was not unusual—whereas, under IAFIS, for criminal fingerprints submitted electronically, the system can provide a response within 2 hours.

IAFIS is a national, computerized system for storing, comparing, and exchanging fingerprint data in a digital format. As mentioned previously, most fingerprint data stem from arrests made by local and state law enforcement agencies, which take the suspect’s fingerprints manually (using ink and paper cards) or electronically (using Livescan equipment). Then, a copy of the fingerprints is forwarded (by mail or electronically) to the applicable state repository and, in turn, to the FBI for processing in IAFIS, which is the world’s largest biometric database (see fig. 1). In practice, a combination of both manual and electronic methods is used in submitting fingerprints to the FBI. For example, local law enforcement agencies may take fingerprints manually on paper cards and mail them to the state repository, and the state may then convert them to an electronic format before forwarding them to the FBI.
Alternatively, some local law enforcement agencies with Livescan equipment forward fingerprints electronically to state repositories, which—because they do not yet have electronic transmission capability—print out paper copies of the fingerprints and mail them to the FBI.

For both paper and electronic criminal fingerprint submissions, law enforcement agencies can indicate on the submission whether they want the FBI to provide them with the results of searching the fingerprints against the IAFIS database. If the agency does want a response and IAFIS finds a match, the FBI provides the submitting agency with the individual’s FBI identification number, which the agency can use to retrieve the related criminal history record. If no match is found, then the FBI creates a new FBI identification number for the individual and adds the fingerprints to the IAFIS database.

Nationally, there is no standard requirement regarding the types or categories of criminal offenses for which fingerprints must be taken by local and state law enforcement agencies, nor is there any standard time frame requirement (after the arrest) for submitting the fingerprints to state criminal history repositories. However, according to FBI officials, virtually all states require the fingerprinting of persons arrested for serious offenses. Also, according to FBI officials, the time frame requirement for submitting the fingerprints to criminal history repositories varies among the states—generally ranging from a specific number of hours or days to a nonspecific standard such as “promptly” or “without undue delay.” Because complete information is integral to the capability of IAFIS to provide accurate identification and criminal history services, the FBI encourages all law enforcement agencies to submit criminal fingerprints to IAFIS.
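The match-or-enroll response logic described above can be sketched as a small function. This is purely an illustration of the flow as the report describes it; the database, print representation, and ID scheme below are hypothetical stand-ins, not the FBI's actual implementation:

```python
# Hypothetical sketch of the IAFIS match-or-enroll flow described in the text.
# The in-memory "database" and integer ID scheme are illustrative only.
import itertools

_next_id = itertools.count(1)   # stand-in for FBI identification number assignment
database = {}                   # fingerprint -> FBI identification number

def process_submission(fingerprint, wants_response=True):
    """Search the submitted print against the database.

    On a match, the existing FBI identification number is returned (the
    agency can use it to retrieve the criminal history record). On no
    match, a new number is created and the print is enrolled. Agencies
    that did not request a response get nothing back; their submission
    still updates the database.
    """
    if fingerprint in database:
        fbi_number = database[fingerprint]      # match: existing record
    else:
        fbi_number = next(_next_id)             # no match: create new record
        database[fingerprint] = fbi_number
    return fbi_number if wants_response else None
```

A repeat arrest of the same individual thus resolves to the same identification number, which is how a submission can reveal an arrestee's prior record in another jurisdiction.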
Except for arrests related to crimes against children, there is no federal statutory requirement for state criminal history repositories to submit criminal fingerprints to the FBI. However, in accordance with FBI guidance, all states voluntarily submit fingerprints for criterion offenses— that is, any offense punishable by imprisonment for a term exceeding 1 year (generally felonies and serious misdemeanors). FBI policy calls for the submissions to be made through a designated agency (the respective state’s criminal history repository) rather than directly from local agencies to the FBI. Centralized submissions from each state are intended to help ensure that the states’ repositories are complete and that all agencies adhere to technical and quality standards. There are no established time frame criteria or requirements for the submission of fingerprints from the states to the FBI. Ultimately, IAFIS was intended to eliminate the need for contributing law enforcement agencies to prepare and mail paper fingerprint cards to the FBI for processing and thereby improve the speed and accuracy of the fingerprint identification process. That is, the FBI’s goal is to achieve electronic (paperless) processing of all fingerprint data—and to provide a response within 2 hours to users who submit criminal fingerprints electronically. Maximizing the benefits of rapid responses under IAFIS depends largely on how quickly criminal fingerprints are submitted by local and state law enforcement agencies after arrests are made. IAFIS processing of criminal fingerprints is important to local and state law enforcement agencies not only for updating national criminal records databases but also for obtaining an individual’s complete criminal history—and, at times, for obtaining positive identification of arrestees and for immediate warrant notification. It is not unusual for arrested persons to use someone else’s name or an alias and have false identification documents. 
Also, many offenders are extremely mobile, committing crimes in more than one state. According to the FBI, an estimated 31 percent of criminal fingerprints processed by the Bureau involve multistate offenders—that is, offenders who have been arrested in more than one state. The importance of IAFIS to law enforcement agencies is apparent in the high number of requests for information from the system. Overall, law enforcement agencies submitting criminal fingerprints generally want to know the results of searching the fingerprints against IAFIS databases. For the recent 8-month period we studied (October 2002 through May 2003), law enforcement agencies wanted a response from the FBI for 78 percent of the approximately 5.3 million criminal fingerprint sets submitted. With the search results, law enforcement agencies can positively identify an arrestee and obtain an arrestee’s criminal history record. This information can be used by various justice system officials as a basis for making fundamental decisions about detention, charging, bail, and sentencing. For the remaining 22 percent of submissions, law enforcement agencies did not request a response from the FBI; rather, the fingerprints were submitted to update IAFIS databases. The extent to which law enforcement agencies use IAFIS responses to either positively identify arrestees or obtain an arrestee’s criminal history record is unclear. Law enforcement officials in the five states we visited told us that their agencies generally do not use IAFIS for a quick identification response because (1) local or state law enforcement agencies usually can identify the arrested individuals, most of whom are repeat offenders, and (2) all states currently have their own automated fingerprint identification systems or belong to regional automated fingerprint identification systems that can positively identify arrestees. 
Instead, these officials noted that submitting fingerprints is important for updating IAFIS databases so that future inquirers receive complete information. Furthermore, law enforcement officials in the states we visited noted that in those cases where a quick identification response is needed from IAFIS but the arresting agency does not have access to Livescan equipment that can electronically submit fingerprints, the FBI allows agencies to fax fingerprints to the FBI for processing. According to the FBI, these fax requests for rapid fingerprint identification account for less than 1 percent of the total number of fingerprints received. In any event, there are instances where quick identification responses from IAFIS are important. In designing IAFIS, the FBI estimated that the system would “prevent the release of the 10,000 to 30,000 fugitives freed each year because of the extended delays in establishing their true identities and warrant status.” More recently, in response to our inquiry, FBI officials could not confirm this estimate or provide data on the extent to which IAFIS has prevented the inappropriate release of fugitives. However, as examples, the FBI provided us summary information regarding two actual cases where quick identification responses from IAFIS prevented the release of individuals who gave false names when they were arrested and were wanted fugitives from another jurisdiction (see fig. 2). As contrasting examples, the FBI also provided us summary information regarding two actual cases where the arresting agencies released individuals from custody before making fingerprint submissions or receiving the IAFIS responses, which indicated that the released persons had used false names and were wanted fugitives from another jurisdiction (see fig. 3). 
The frequency of such incidents—that is, cases where a local or state law enforcement agency releases an arrestee from custody and subsequently receives an IAFIS identification response showing cross-jurisdictional criminal history and outstanding warrants—is not known. The capability to quickly obtain positive identification of arrestees is becoming increasingly important—not only because many offenders have multistate records but also because of identity theft or identity fraud, which has been characterized by law enforcement as the fastest-growing type of crime in the United States. Furthermore, homeland security concerns add to the importance of quick positive identification capability. For example, in June 2002 congressional testimony, we noted that, in addition to using identity theft or identity fraud to enter the United States illegally and seek job opportunities, some aliens have used fraudulent identification documents in connection with serious crimes, such as narcotics trafficking and terrorism. Also, during our current review, the Chairman of the International Association of Chiefs of Police’s Criminal Justice Information Systems Committee told us that the electronic processing of fingerprint data is the most important component of the criminal justice information system and that the timeliness of submission and how long it takes to enter fingerprints into the automated system is an issue that could have serious consequences. Although obstacles remain, much progress has been made in electronic processing, as discussed in the following section. Local and state law enforcement agencies have made progress toward the FBI’s goal of electronic (paperless) processing of criminal fingerprints in the IAFIS environment, although there is room for substantial improvement.
For the recent 8-month period we studied (October 2002 through May 2003), the overall average submission time for criminal fingerprints was 40 days—whereas prior to the implementation of IAFIS, average submission times were significantly higher (e.g., 118 days in 1997). Also, since the implementation of IAFIS, the number of fingerprints submitted electronically by state agencies as a percentage of total criminal fingerprints received by the FBI has increased annually. And for a large number of criminal fingerprints, local and state law enforcement agencies have demonstrated the ability to make submissions to IAFIS the same day as the date of the arrest. However, for many jurisdictions, delays in submitting fingerprints to IAFIS have been attributable to various factors, including lack of automation, competing priorities and resource constraints, and backlogs of paper fingerprint cards to be processed. In practice, large portions of the lengthy submission times associated with paper fingerprint cards probably represent inactivity, or “holding,” rather than actual “processing.” Nationally, since the implementation of IAFIS in July 1999, the overall timeliness of criminal fingerprint submissions has improved. For the approximately 5.3 million criminal fingerprints entered into IAFIS from October 2002 through May 2003—a total that encompasses both paper fingerprint card and electronic submissions—the average submission time was about 40 days after the date of arrest. In contrast, figure 4 shows that before the implementation of IAFIS, the average number of days from arrest to when the FBI received the fingerprints was about two to three times the current 40-day average. Despite improvements in average submission times since the implementation of IAFIS, some criminal fingerprints continued to reflect large lag times before being submitted.
For example, of the approximately 5.3 million criminal fingerprint submissions entered into IAFIS from October 2002 through May 2003, about 535,000 (or 10 percent) were entered more than 90 days after the date of arrest. And, of these submissions, over one-half were entered into IAFIS more than 150 days after the date of arrest. In commenting on a draft of this report, the Department of Justice noted that while our use of the term “entered into IAFIS” accurately measures the end of local and state processing, it should not be construed to represent the point in time when the fingerprint record was physically entered into the IAFIS database. The department noted that the time intervals presented in this report—computed by the FBI’s Criminal Justice Information Services (CJIS) Division—were measured at the time the fingerprint records were received electronically by IAFIS. The department added that the type of fingerprint submission, priority, workload, and time of day would influence the actual time the records were processed and entered into the IAFIS database. According to FBI officials, the agency has a goal of processing electronic fingerprint submissions and sending a response within 2 hours of receipt. For fiscal year 2002, the FBI reported that it responded to 90.3 percent of the electronic criminal submissions within 2 hours of receipt. Thus, the end of state processing and the actual entry of the fingerprints into the system are within hours of each other in most cases. Since the implementation of IAFIS in July 1999, the number of fingerprints submitted electronically by state agencies as a percentage of total criminal fingerprints received by the FBI has increased annually. As figure 5 shows, for example, 45 percent of criminal fingerprint submissions received by the FBI in 1999 from state central repositories were electronic; whereas, in the first 4 months of 2003, 70 percent of such criminal fingerprint submissions were electronic.
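The lag-time shares cited above follow from simple arithmetic. The short Python sketch below reproduces them using only the report's approximate figures; the script and its variable names are illustrative and are not part of any FBI system.

```python
# Illustrative check of the lag-time shares reported for
# October 2002 through May 2003. Figures are the report's;
# the computation is ours.
total_submissions = 5_300_000   # approx. criminal fingerprints entered into IAFIS
over_90_days = 535_000          # entered more than 90 days after the date of arrest

share_over_90 = over_90_days / total_submissions
print(f"Entered more than 90 days after arrest: {share_over_90:.0%}")

# The report states that over one-half of the >90-day group exceeded
# 150 days, which implies at least:
min_over_150 = over_90_days * 0.5
print(f"Entered more than 150 days after arrest: at least {min_over_150:,.0f}")
```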
According to FBI data as of April 2003, 42 states and the District of Columbia were routinely submitting some portion of their criminal fingerprints to the FBI electronically. Additional states soon may have the capability to submit criminal fingerprints electronically. For instance, two of the five states we visited in summer 2003 (Connecticut and Nevada) had not begun routinely submitting criminal fingerprints to the FBI electronically but expected to do so in the future. Specifically, officials from Nevada said that their state was developing such capability and anticipated that it would be available by the end of 2003. Similarly, officials from Connecticut said that their state was upgrading technology to provide electronic submission capability by the end of 2004. For a large number of criminal fingerprints, local and state law enforcement agencies have demonstrated the ability to make submissions to IAFIS the same day as the date of the arrest. Of the approximately 5.3 million criminal fingerprints entered into IAFIS from October 2002 through May 2003, over 1.5 million (29 percent) were entered on the same day as the date of the arrest. Such same-day submissions are achievable when the entire process is electronic, with law enforcement taking fingerprints using Livescan devices that transmit the fingerprints electronically to the state criminal history repository—which, in turn, transmits the fingerprints electronically to the FBI. Electronic processing allows for the fastest submission of fingerprints to IAFIS and supports the FBI’s goal of paperless processing of criminal fingerprint data. The Atlanta Police Department’s electronic fingerprint process illustrates how quickly fingerprint data can be submitted to IAFIS. The Atlanta Police Department takes criminal fingerprints using Livescan devices and forwards the fingerprint data electronically to the Georgia Crime Information Center. 
After processing the fingerprint data, the Georgia Crime Information Center transmits the fingerprint data via its computer systems directly to IAFIS. According to FBI data for the period January through May 2003, the Atlanta Police Department submitted a total of 7,895 sets of criminal fingerprints. Of this total, 46 percent were entered into IAFIS the same day as the date of the arrest. And 95 percent of the total criminal fingerprint submissions for this period were entered within 1 day after the date of the arrest. As mentioned previously, most criminal fingerprints are not entered into IAFIS the same day as the date of arrest and may reflect time lags of 90 days or more. For many jurisdictions, time lags in submitting fingerprints are attributable to various factors, including a lack of automation, competing priorities and resource constraints, and backlogs of paper fingerprint cards to be processed. Given these circumstances, large portions of lengthy submission times associated with paper fingerprint cards probably represent inactivity, or “holding,” rather than actual “processing.” The most significant factor causing delays in criminal fingerprint submissions is lack of electronic processing capability. Generally, law enforcement agencies that serve large populations have access to technology that allows electronic capture and transmission of criminal fingerprint data. For example, the most recent local law enforcement data collected by BJS (in a July 2000 survey) indicated that a majority of police departments serving populations of 50,000 or more reported they regularly used digital imaging technology for fingerprints, and a majority of sheriffs’ offices serving populations of 100,000 or more reported they regularly used such technology. However, the BJS report also indicated that law enforcement agencies in less populated areas may have to use paper fingerprint cards and manual processes. 
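The Atlanta percentages above imply approximate counts, which the following sketch derives. This is illustrative arithmetic only; the rounding approach is an assumption of ours, since the report gives only percentages.

```python
# Approximate counts implied by the Atlanta Police Department figures
# for January through May 2003 (percentages are the report's; the
# derived counts are rounded estimates).
total_sets = 7_895
same_day = round(total_sets * 0.46)        # entered same day as the arrest
within_one_day = round(total_sets * 0.95)  # entered within 1 day of the arrest

print(f"Same-day entries:       about {same_day:,}")
print(f"Within 1 day of arrest: about {within_one_day:,}")
```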
As a result, the BJS report noted that, overall, only 11 percent of all police departments nationwide and 27 percent of all sheriffs’ offices reported they regularly used digital imaging technology for fingerprints. Also, given competing priorities and resource constraints, local law enforcement agencies may not always see an urgent need to voluntarily submit paper fingerprint cards quickly, particularly if the arrestee is a repeat offender whose identity is already known. A representative of the National District Attorneys Association told us that, given the staff time and other costs involved, law enforcement agencies on a tight budget may not submit fingerprints quickly without a good reason to do so, even though submission would add to the national database. Local law enforcement agencies that use manual processes may hold fingerprint cards until a number are collected and then mail the batch to the state criminal history repository. For example, according to Missouri State Highway Patrol officials, some local agencies mail batches of paper criminal fingerprint cards every other week to the state criminal history repository. Broader perspectives on submission time frames are presented in an August 2003 BJS report. Basing its conclusions on a survey (conducted in January through July 2002) of state criminal history repository administrators, BJS reported wide variances among states regarding submission of paper fingerprint cards. For example, whereas Livescan fingerprint data often were received by repositories within 1 day or less after the arrest (sometimes only hours), one state’s repository reported receiving paper fingerprint cards 7 to 30 days (on average) after the date of arrest, another repository reported receiving cards up to 90 days after arrest, and another reported an average submission time of 169 days.
During our review, FBI officials told us that their data systems cannot track the time from the date of arrest to when fingerprints (either paper or electronic) arrive at the state repositories for processing. Therefore, we could not determine what portion of submission times was attributable to the submitting law enforcement agency versus the state criminal history repository. In its August 2003 report, BJS also indicated that 26 states reported they had backlogs (as of year end 2001) in processing criminal fingerprint cards. Generally, the size and “age” of such backlogs, according to a BJS survey, largely are a function of resources available for processing the fingerprint cards. One state noted, for instance, that because of a lack of funding to pay contract staff responsible for data entry and clerical functions associated with fingerprint card processing, it had a backlog of 7,500 cards in the latter part of 2001, but the backlog was eliminated in June 2002 after state funds were reinstated. Law enforcement officials we contacted also said that their jurisdictions lacked the necessary personnel to quickly process fingerprint submissions. For example, Missouri State Highway Patrol officials said that the agency has had several fingerprint technician positions vacant over the last several years, resulting in a backlog of unprocessed fingerprint cards. Poor-quality fingerprints, inaccurate or incomplete textual information, and other technical aspects of submissions are additional factors that can delay entry of fingerprint data into IAFIS. According to FBI officials, about 5 to 6 percent of criminal fingerprint submissions are initially rejected for these reasons. Local and state law enforcement officials we contacted told us that it generally is not possible to resubmit fingerprints that were rejected for poor quality because the individuals may no longer be in custody. 
However, these officials said they generally resubmit fingerprints that were rejected because of inaccurate or incomplete textual information. FBI officials told us that when significant rejection patterns occur, the FBI works with the submitting law enforcement agencies to address the causes. Finally, as discussed in the following section, the timeliness of criminal fingerprint submissions can be affected by an increasing workload associated with the processing of “civil” fingerprints—that is, fingerprint-based background checks conducted for employment or other noncriminal justice purposes. In recent years, to encourage law enforcement agencies to submit criminal fingerprints electronically to IAFIS, the FBI has provided states with network connections, promoted the benefits of IAFIS at national conferences, and provided states with other technical assistance. In each of the five states we visited, the practices or plans for extending automation capabilities appeared to be based on practical or cost-benefit considerations, such as giving priority to placing Livescan equipment with local law enforcement agencies serving the most populous areas. Also, BJS has provided states with federal grants to help automate criminal fingerprint submissions. According to the local and state officials we contacted, continuation of federal technical and funding assistance is essential for achieving further improvements in the timeliness of criminal fingerprint submissions. In addition, to help mitigate competing workload demands stemming from increasing volumes of fingerprints submitted for civil or noncriminal justice purposes, such as employment background checks, the National Crime Prevention and Privacy Compact Council is considering broadening the authority of private companies to process such fingerprints.
Although the FBI continues to accept paper submissions, the FBI’s goal is to achieve a completely paperless system, with all fingerprints being submitted electronically. In 1998, to help achieve this goal, the FBI provided IAFIS network connections to each state through the CJIS Wide Area Network. These network connections provide each state with a link to support a fully automated fingerprint submission process, including electronic access to IAFIS. To further support the automation of criminal fingerprint submissions, the FBI has participated in various national conferences conducted by organizations such as the International Association for Identification, the National Sheriffs’ Association, and the International Association of Chiefs of Police. The FBI has also hosted two national conferences on IAFIS and has provided technical assistance to various local and state law enforcement agencies through workshops and site visits. Local and state law enforcement officials we contacted expressed a need for these initiatives to continue in the future. For example, Georgia Bureau of Investigation officials said that continued training by the FBI is essential to improve the quality of fingerprints and the timeliness of submissions. Also, Missouri State Highway Patrol officials said that previous FBI technical training has been valuable and that further training is still needed. In the five states we visited, the plans or practices for extending automation capabilities appeared to be based on practical or cost-benefit considerations. Generally, to allocate Livescan equipment, priority placements were made to local law enforcement agencies serving the most populous areas. For example, according to Georgia Bureau of Investigation officials, 88 percent of the state’s felony and serious misdemeanor offense arrests in 2002 occurred within the geographic jurisdictions of agencies that had access to Livescan machines. 
New Mexico Department of Public Safety officials told us that the nine Livescan machines available to law enforcement agencies in New Mexico are used to record fingerprints for about 65 percent of the criminal arrests in the state. Under the National Criminal History Improvement Program (NCHIP)—a grant program administered by BJS and designed to ensure that accurate records are available for use in law enforcement—states can receive funds to improve their ability to electronically provide criminal fingerprints to the FBI. NCHIP funds support a broad range of activities and programs to facilitate the electronic transfer of criminal fingerprints to the FBI, such as (1) ensuring compatibility of state criminal history and arrest records systems with FBI records systems, (2) establishing records management systems to improve the quality and completeness of criminal history and arrest information maintained by the state and provided to the FBI, and (3) providing training and hosting conferences and seminars for local and state criminal justice officials on issues related to improvements in and automation of criminal history and arrest records. According to BJS data for fiscal years 1999 through 2003, 44 states and the District of Columbia received a total of $31 million in NCHIP grants to improve local law enforcement and state criminal history repository access to electronic fingerprint transmission technology and IAFIS (see app. II). For instance, Georgia Bureau of Investigation officials said that the state used NCHIP funding in 1999 to provide smaller law enforcement agencies a cost-effective approach to electronically submit fingerprints. The funds were used, for example, to purchase card-scanning equipment to digitally convert paper fingerprint cards for electronic transmission to the state repository. 
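A rough per-recipient average follows from the NCHIP totals above. The figure below is our illustrative calculation, not a BJS statistic, and masks wide variation in actual award sizes.

```python
# Rough per-recipient average implied by the NCHIP AFIS/Livescan totals
# for fiscal years 1999 through 2003; an illustrative calculation only.
total_grants = 31_000_000
recipients = 44 + 1  # 44 states plus the District of Columbia

avg_award = total_grants / recipients
print(f"Average AFIS/Livescan funding per recipient: ${avg_award:,.0f}")
```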
On the other hand, local and state law enforcement officials we contacted said that fingerprints are not all submitted electronically because states still lack funding to purchase, operate, and maintain the necessary equipment. The officials said that law enforcement agencies generally do not resist the idea of converting to an electronic process but are limited financially in their capabilities to do so. For example, Missouri State Highway Patrol officials said that an obstacle to additional automation has been funding. According to these officials, while NCHIP is making funding available for purchasing Livescan machines, some local law enforcement agencies cannot afford the ongoing network and maintenance costs needed to support an automated system. In commenting on a draft of this report, the BJS Director indicated that NCHIP funds can and have frequently been used by the states for the maintenance of automated fingerprint systems. The Director added that if local agencies are not receiving funds for maintenance, it is probably because the state has not requested NCHIP funds for that purpose or has set its own priorities for which localities will receive such support. Generally, according to FBI officials, there is a continuing need for (1) additional Livescan devices; (2) the upgrade of automated fingerprint identification systems at the state level that are compatible with IAFIS; and (3) research for fingerprint imaging, Livescan, and other automated systems that will ensure interoperability of state and FBI systems. However, the FBI officials noted that given the budget problems that many states are now experiencing and the high cost of Livescan machines, investment in this technology may not be a priority for the states. 
State and FBI officials told us that the timeliness of criminal fingerprint submissions can be slowed by an increasing workload associated with the processing of fingerprint submissions for civil or noncriminal justice purposes, such as employment background checks. The numbers of criminal fingerprint submissions and civil fingerprint submissions to the FBI have increased annually in most years since 1992. As figure 6 shows, during 1996 to 2002, the number of criminal fingerprint submissions was exceeded by the number of civil fingerprint submissions in 5 of the 7 years. For example, in the most recent year (2002), criminal fingerprint submissions totaled 8.4 million, whereas civil fingerprint submissions totaled 9.1 million. The growth in civil fingerprint submissions is partly attributable to, among other factors, federal legislation that encouraged states to enact statutes authorizing fingerprint-based national searches of criminal history records of individuals seeking paid or volunteer positions with organizations serving children, the elderly, or the disabled. More recently, another factor has been homeland security concerns. For instance, because of the relatively unfettered access that taxicabs have to city infrastructure, including the airport, the Atlanta Police Department has begun running fingerprint-based criminal history background checks on all of the city’s approximately 3,500 taxicab drivers. To help mitigate workload demands, some states have begun awarding contracts to private companies to provide civil fingerprinting services. Currently, private companies are involved in the collection of fingerprints but do not have the legal authority to access criminal history information or make fitness determinations for employment. 
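The relative workload implied by figure 6's 2002 totals can be expressed as a share. A minimal sketch, using the rounded totals cited in the text:

```python
# Civil share of the FBI's 2002 fingerprint workload, using the
# rounded totals cited in the text (illustrative arithmetic only).
criminal_2002 = 8.4e6  # criminal fingerprint submissions
civil_2002 = 9.1e6     # civil fingerprint submissions

civil_share = civil_2002 / (criminal_2002 + civil_2002)
print(f"Civil share of 2002 submissions: {civil_share:.0%}")
```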
However, since February 2003, the National Crime Prevention and Privacy Compact Council—the 15-member entity (composed of state and federal officials) that administers the use and exchange of criminal history records for noncriminal justice uses—has been working to develop a rule to provide such authority. That is, the proposed rule would enable state and federal government agencies to contract with private companies to not only collect fingerprints but also have access to criminal history information and make fitness determinations for employment. According to the FBI, the rule is anticipated to be finalized by the middle of calendar year 2004 and will incorporate appropriate guidelines and controls. Local and state law enforcement agencies have made progress toward the FBI’s goal of electronic (paperless) processing of criminal fingerprints in the IAFIS environment. For example, all states have either established or are working to establish interoperability between their state automated fingerprint identification systems and IAFIS that allows for the electronic submission of criminal fingerprints to the FBI. However, there is still room for substantial improvement. Gaps exist in law enforcement agencies’ access to Livescan technology. Given budget and other resource constraints at all levels of government, it may be unrealistic to expect that 100 percent electronic processing and submission eventually will be achieved for all jurisdictions. Smaller law enforcement agencies, for example, may have difficulty justifying the cost of operating or maintaining Livescan equipment and a telecommunications linkage to the state’s central repository. For local agencies without access to Livescan equipment and for state agencies that cannot currently submit fingerprints electronically to IAFIS, the potential may exist for improving the timeliness of processing and submitting paper fingerprint cards. 
The “potential” rests on reducing the time that criminal fingerprints are waiting, or being held for processing, which is likely to be a resource issue. Theoretically, for example, these cards could be processed and mailed forward on a daily basis rather than held for batch processing. Additionally, the processing of noncriminal fingerprints could be handled by contractors, which could free up law enforcement personnel to process criminal fingerprints in a more timely manner. Ultimately, such decisions may involve unique circumstances and, thus, perhaps are best left to agency-by-agency determinations. Overall, the effect of less than universal electronic processing is unclear. In many cases, for instance, a same-day or quick response from the FBI may not be needed. On the other hand, although such instances are not readily quantifiable, there are cases where a local or state law enforcement agency has released an arrestee from custody and subsequently received an IAFIS identification response showing cross-jurisdictional criminal history or outstanding warrants. In the absence of electronic processing, the number of such instances may be partly mitigated by the manual procedure whereby law enforcement can directly fax fingerprints to the FBI. However, the effectiveness of this exception-basis procedure depends largely on officers having sufficient experience to recognize a need for expedited manual processing. Federal technical and funding assistance continues to support ongoing efforts to make additional progress in the automation of fingerprint submissions. The need for positive, fingerprint-based identifications— providing linkages to complete criminal history records—is not likely to diminish in the foreseeable future, given that significant numbers of arrestees have multistate criminal histories, the incidence of identity theft or identity fraud is growing, and homeland security concerns and noncriminal justice demands are increasing. 
On December 9, 2003, we provided a draft of this report for review and comment to the Department of Justice. In its written comments, dated January 7, 2004, Justice said the report was accurate and provided some technical clarifications, which we incorporated in this report where appropriate. Also, one Justice component (BJS) commented that the draft report presented a narrow description of the role of NCHIP in upgrading the ability of states to provide fingerprints electronically to the FBI. Specifically, the BJS Director noted that the allowable uses for NCHIP funds extended far beyond the purchase of Livescan machines and covered the continuum from fingerprint capture through the transmission of the images to the FBI. We added information to the applicable report section to reflect this perspective. Further, the BJS Director commented that much progress has been made in the automation of criminal fingerprint submissions under NCHIP. According to the Director, NCHIP performance measures calculated and tracked as part of the administration and oversight of the program indicate that (1) the number of arresting agencies reporting arrests electronically to the state criminal history repositories has increased significantly, from 493 agencies in 1997 to 2,594 agencies in 2001; (2) arrest information is reaching state criminal history repositories faster, with submission times from the arresting agency to the state agency dropping from an average of 14 days in 1997 to 11 days in 2001; (3) state repositories are processing arrest information faster, with average times to post arrest data into the criminal history record dropping from 32 days in 1995 to 13 days in 2001; and (4) state criminal history backlogs of unprocessed fingerprint cards dropped from an estimated 711,000 in 1997 to an estimated 354,300 in 2001. The Director noted that these statistics are based on data collected for BJS in biennial surveys conducted by SEARCH.
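The BJS-reported performance measures above translate into percentage changes as follows. In this short sketch the underlying figures are BJS's, but the labels and arithmetic are our illustration:

```python
# Percentage changes implied by the BJS-reported NCHIP performance
# measures (underlying figures are BJS's; labels and arithmetic are
# illustrative).
measures = {
    "agencies reporting electronically (1997 to 2001)": (493, 2_594),
    "avg. days, arrest to repository (1997 to 2001)": (14, 11),
    "avg. days to post arrest data (1995 to 2001)": (32, 13),
    "backlogged fingerprint cards (1997 to 2001)": (711_000, 354_300),
}
for label, (before, after) in measures.items():
    change = (after - before) / before
    print(f"{label}: {change:+.0%}")
```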
Because of our reporting time frames, these specific statistics were not included in the data reliability assessments described in appendix I. As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after the date of this report. At that time, we will send copies of this report to interested congressional committees and subcommittees. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report or wish to discuss the matter further, please contact me at (202) 512-8777 or Danny Burton at (214) 777-5600. Other key contributors to this report were Amy Bernstein, Michele Fejfar, Ann H. Finley, Jason Kelly, George Quinn, Deena Richart, and Jason Schwartz. At the request of the Ranking Minority Member, Senate Committee on Appropriations, we addressed the following questions regarding the submission of criminal fingerprints by local and state law enforcement agencies to the Federal Bureau of Investigation (FBI) for processing by the Integrated Automated Fingerprint Identification System (IAFIS): Why is IAFIS processing of criminal fingerprints important to local and state law enforcement agencies? What progress have local and state law enforcement agencies made toward the FBI’s goal of achieving electronic (paperless) fingerprint processing after an arrest has been made, and what factors have influenced this progress? What efforts are being made to improve the timeliness of criminal fingerprint submissions from local and state law enforcement agencies? To address these questions, we visited the FBI’s Criminal Justice Information Services Division (Clarksburg, WV), which manages IAFIS. We interviewed FBI officials and reviewed available statistics, studies, and other relevant information. 
We analyzed FBI data by state on criminal fingerprint submission volumes and times for fingerprints entered into IAFIS from October 2002 through May 2003. Our analysis focused on criminal fingerprint submissions for arrests made since the implementation of IAFIS on July 28, 1999, and covered both automated and manual (paper) submissions. We also obtained the FBI’s reports of criminal fingerprint submission times in 1993, 1995, and 1997, and the total numbers of criminal and civil fingerprints submitted annually during 1992 through 2002. Also, we obtained funding amounts from Bureau of Justice Statistics (BJS) officials regarding the amount of National Criminal History Improvement Program (NCHIP) grant funding awarded in fiscal years 1999 through 2003 to the states and the District of Columbia for use in automating fingerprint processes. Further, we discussed the fingerprint submission issues with representatives of the International Association of Chiefs of Police, the National Sheriffs’ Association, the Major County Sheriffs’ Association, the National District Attorneys Association, and SEARCH (the National Consortium for Justice Information and Statistics). Also, we discussed the fingerprint submission issues with (and analyzed any statistics or other information maintained by) state law enforcement agencies (e.g., state police department and judicial system representatives) in five states—Connecticut, Georgia, Missouri, Nevada, and New Mexico. We selected these states to reflect a range of various factors or considerations—that is, the volume of fingerprint submissions, the “age” of such submissions (i.e., the average amount of time from when the fingerprints were taken to when they were entered into IAFIS), and level of automation in the state’s criminal justice information system, as well as to encompass different geographic areas of the nation. 
Further, in each of the five states, we discussed the fingerprint submission issues with relevant local agencies (e.g., city police department or county sheriff’s office) in at least one local jurisdiction. Generally, for travel cost reasons (among other considerations), the local jurisdictions selected were located in or near the respective state’s capital. To assess the reliability of the FBI’s October 2002 through May 2003 criminal fingerprint submission data, we (1) reviewed existing documentation related to the data sources, (2) electronically tested the data to identify obvious problems with completeness or accuracy, and (3) interviewed knowledgeable agency officials about the data. We determined that the data were sufficiently reliable for the purposes of this report. To assess the reliability of (1) the FBI’s reports of criminal fingerprint submission times in 1993, 1995, and 1997; (2) the total numbers of criminal and civil fingerprints submitted annually during 1992 through 2002; and (3) the percentages of electronic fingerprint submissions, we interviewed knowledgeable agency officials about the data and reviewed existing documentation related to the data sources. To assess the reliability of the results of the BJS surveys of local law enforcement, sheriff’s offices, and state criminal history repository administrators, we reviewed existing documentation related to the data sources. To assess the reliability of the BJS NCHIP grant funding amounts and the FBI estimate of multistate offenders, we interviewed knowledgeable agency officials. We determined that the data were sufficiently reliable for the purposes of this report. 
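The electronic tests described above can be illustrated with a minimal sketch. This is not GAO's actual test suite; the record fields and sample values are hypothetical, but the checks mirror the kinds of completeness and accuracy problems the methodology screens for.

```python
# Illustrative sketch of electronic completeness/accuracy tests on
# hypothetical fingerprint-submission records (fields are invented).
from datetime import date

def check_record(rec):
    """Return a list of problems found in one submission record."""
    problems = []
    # Completeness: every record needs a state, an arrest date, and an entry date.
    for field in ("state", "arrest_date", "entry_date"):
        if rec.get(field) is None:
            problems.append(f"missing {field}")
    # Accuracy: entry into IAFIS cannot precede the arrest.
    if rec.get("arrest_date") and rec.get("entry_date"):
        if rec["entry_date"] < rec["arrest_date"]:
            problems.append("entry date precedes arrest date")
    return problems

records = [
    {"state": "GA", "arrest_date": date(2003, 1, 5), "entry_date": date(2003, 2, 1)},
    {"state": None, "arrest_date": date(2003, 3, 2), "entry_date": date(2003, 3, 1)},
]
flagged = {i: check_record(r) for i, r in enumerate(records) if check_record(r)}
```

Running such screens across the full extract is one way "obvious problems with completeness or accuracy" can be surfaced before data are judged sufficiently reliable.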
Appendix II: National Criminal History Improvement Program Grant Funding for AFIS/Livescan (Fiscal Years 1999-2003) This appendix summarizes Bureau of Justice Statistics (BJS) data regarding National Criminal History Improvement Program (NCHIP) grant funding received by states and the District of Columbia for automated fingerprint identification system (AFIS) and Livescan activities in fiscal years 1999 through 2003 (see table 1). According to BJS, the dollar amounts in table 1 are based on actual amounts awarded and the proposed AFIS/Livescan activities listed in grant applications from the states and the District of Columbia. A BJS official told us that some of the 12 states that received no grant funding for AFIS/Livescan activities during this time period did receive NCHIP funding for such activities in the earlier years of the program, beginning in 1995.
By positively confirming identifications and linking relevant records of arrests and prosecutions, fingerprint analysis provides a basis for making fundamental criminal justice decisions regarding detention, charging, bail, and sentencing. In 1999, the FBI implemented the Integrated Automated Fingerprint Identification System (IAFIS)--a computerized system for storing, comparing, and exchanging fingerprint data in a digital format. The FBI's goal under IAFIS is to ultimately achieve paperless processing and to provide a response within 2 hours to users who submit criminal fingerprints electronically. Maximizing the benefits of rapid responses under IAFIS depends largely on how quickly criminal fingerprints are submitted by local and state law enforcement agencies. Concerns have been raised that, after arrests are made by some local or state law enforcement agencies, periods of up to 6 months may elapse before the criminal fingerprints are submitted for entry into IAFIS. GAO examined (1) the importance of IAFIS processing to local and state law enforcement agencies, (2) the progress these agencies have made toward the goal of paperless fingerprint processing, and (3) efforts being made to improve the timeliness of criminal fingerprint submissions. IAFIS processing of criminal fingerprints is important to local and state law enforcement not only for updating national databases but also for obtaining an individual's criminal history and, at times, for obtaining positive identification of arrestees. For a recent 8-month period (October 2002 through May 2003) that GAO reviewed, law enforcement agencies wanted a response from the FBI for 78 percent of the approximately 5.3 million sets of criminal fingerprints submitted to IAFIS. The extent to which these responses were used to either positively identify arrestees or obtain criminal history records is unknown. 
However, the FBI provided GAO with examples of how IAFIS responses prevented the premature release of individuals who had used false names at arrest and were wanted in other jurisdictions. Law enforcement agencies have made progress toward the FBI's goal of paperless processing of criminal fingerprints, although there is room for substantial improvement. The percentage of criminal fingerprints submitted electronically by state repositories to the FBI increased from 45 percent in 1999 to 70 percent in 2003. Also, for the recent 8-month period GAO reviewed, the overall average submission time for criminal fingerprints was 40 days (an average that encompasses both paper and electronic submissions)--whereas, before IAFIS, average submission times were much higher (e.g., 118 days in 1997). Although much progress has been made, many jurisdictions lack automation and have backlogs of paper fingerprint cards to be processed, in part because of competing priorities and resource constraints. Numerous efforts have been made to help improve the timeliness of criminal fingerprint submissions to IAFIS. To facilitate electronic processing, federal technical and financial assistance has encouraged law enforcement agencies to purchase optical scanning (Livescan) equipment for taking fingerprints and to establish automated systems compatible with FBI standards. GAO noted that the need for quick, fingerprint-based identifications--positively linking individuals to relevant criminal history records--is becoming increasingly important. Reasons for such importance are the mobility of criminals (many of whom have multistate records), the growing incidence of identity theft or identity fraud, the significance of homeland security concerns, and increasing demands stemming from background checks required for employment or other noncriminal justice purposes.
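The size of these gains follows directly from the report's own figures; the short calculation below is purely illustrative arithmetic, not GAO's methodology.

```python
# Figures taken from the report text above.
electronic_1999, electronic_2003 = 45, 70   # percent of submissions sent electronically
avg_days_1997, avg_days_recent = 118, 40    # average submission times, in days

# Electronic share rose by 25 percentage points.
electronic_gain = electronic_2003 - electronic_1999
# Average submission time fell by roughly two-thirds.
time_cut_pct = round((avg_days_1997 - avg_days_recent) / avg_days_1997 * 100)
```

That is, average submission time dropped by about 66 percent between 1997 and the recent 8-month period reviewed.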
The radio frequency spectrum is the part of the natural spectrum of electromagnetic radiation lying between the frequency limits of 3 kilohertz (kHz) and 300 gigahertz (GHz). Not all spectrum has equal value. The most highly valued spectrum generally consists of frequencies between 225 MHz and 3700 MHz, as these frequencies have properties well suited to many important wireless technologies, such as mobile phones, radio, and television broadcasting. According to NTIA, as of September 2012, federal agencies had exclusive access to about 18 percent of these high-value frequencies, and nonfederal users had exclusive licenses to about 33 percent. The remainder of this spectrum is allocated to shared use. However, in many of these shared bands, federal or nonfederal uses may dominate and actual sharing is nominal. NTIA has concluded that, overall, approximately 43 percent of these high-value frequencies are predominantly used by federal operations. Federal agencies use spectrum to help meet a variety of missions, including emergency communications, national defense, land management, and law enforcement. More than 60 federal agencies and departments together hold over 240,000 frequency assignments. Agencies and departments within the Department of Defense have the most assignments, followed by the Federal Aviation Administration, the Department of Justice, the Department of Homeland Security, the Department of the Interior, the Department of Agriculture, the U.S. Coast Guard, the Department of Energy, and the Department of Commerce, respectively. These federal agencies and departments hold 94 percent of all federally assigned spectrum. Nonfederal entities (which include commercial companies and state and local governments) also use spectrum to provide a variety of services.
For example, state and local police departments, fire departments, and other emergency services agencies use spectrum to transmit and receive critical voice and data communications, while commercial entities use spectrum to provide wireless services, including mobile voice and data, paging, broadcast radio and television, and satellite services (see fig. 1). In the United States, responsibility for spectrum management is divided between NTIA and FCC. NTIA and FCC jointly determine the amount of spectrum allocated for federal, nonfederal, and shared use. After this allocation occurs, nonfederal users must follow rules and obtain authorizations from FCC to use specific spectrum frequencies, and federal users must follow rules and obtain frequency assignments from NTIA. For nonfederal users to share federal spectrum, NTIA and FCC are jointly involved in the process: the nonfederal party petitions FCC, and FCC in turn coordinates rulemakings and licenses with NTIA through the Interdepartment Radio Advisory Committee (IRAC). NTIA manages sharing between federal users on a day-to-day basis. If federal users request frequency assignments in exclusive nonfederal or shared bands, that request is coordinated through IRAC with FCC. If sharing is solely between nonfederal users in exclusive nonfederal bands, it is generally governed by FCC rules and does not go through NTIA, unless there could be out-of-band interference. In addition to its spectrum allocation and authorization duties, NTIA serves as the President's principal advisor on telecommunications and information policy and manages federally assigned spectrum, including preparing for, participating in, and implementing the results of international radio conferences, as well as conducting extensive research and technical studies through its research and engineering laboratory, the Institute for Telecommunication Sciences.
NTIA has authority to issue rules and regulations as may be necessary to ensure the effective, efficient, and equitable use of spectrum both nationally and internationally. It also has authority to develop long-range spectrum plans to meet future spectrum requirements for the federal government. Spectrum sharing can be defined as the cooperative use of common spectrum. In this way, multiple users agree to access the same spectrum at different times or locations, as well as negotiate other technical parameters, to avoid adversely interfering with one another. For sharing to occur, users and regulators must negotiate and resolve where (geographic sharing), when (sharing in time), and how (technical parameters) spectrum will be used (see fig. 2). Spectrum sharing also occurs with unlicensed use of spectrum, since it is accessible to anyone using wireless equipment certified by FCC for those frequencies. Equipment such as wireless microphones, baby monitors, and garage door openers typically share spectrum with other services on a non-interference basis using low power levels to avoid interference with higher priority uses. In contrast with most licensed spectrum use, unlicensed spectrum users have no regulatory protection against interference from other licensed or unlicensed users in the band. However, unlicensed use is regulated to ensure that unlicensed devices do not cause undue interference to operations with a higher priority. For example, in the 5 GHz band, wireless fidelity (Wi-Fi) devices share a band with military radar subject to the condition that the Wi-Fi devices are capable of spectrum sensing and dynamic frequency selection; if radar is detected, the unlicensed user must immediately vacate the channel. 
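The vacate-on-detection behavior described above can be sketched in a few lines. This is a greatly simplified illustration of dynamic frequency selection, not the actual FCC rule logic; the channel numbers and the radar-detection test are hypothetical.

```python
# Simplified sketch of dynamic frequency selection (DFS): if radar is
# sensed on the channel in use, the unlicensed device must vacate it
# immediately and move to a radar-free channel (or cease transmitting).

def operate(current, channels, radar_detected):
    """Return the channel to use next; None means cease transmitting."""
    if not radar_detected(current):
        return current                      # no radar sensed: stay put
    for ch in channels:                     # radar sensed: vacate immediately
        if ch != current and not radar_detected(ch):
            return ch
    return None                             # no clear channel available

channels = [36, 40, 52, 56]                 # hypothetical 5 GHz channel numbers
radar_on = {52}                             # radar sensed on channel 52 only
next_channel = operate(52, channels, lambda ch: ch in radar_on)
```

A device on channel 52 would move to the first clear channel, while a device on a radar-free channel would continue operating undisturbed.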
As the federal agency authorized to develop national spectrum policy, NTIA has been directed to conduct several projects focused on reforming governmentwide federal spectrum management and promoting efficiency among federal users of spectrum; however, we reported in 2011 that its efforts in this area had resulted in limited progress toward improved spectrum management. NTIA has authority to, among other things, establish policies concerning the assignment of spectrum to federal agencies, coordinate spectrum use across federal agencies, and promote efficient use of spectrum by federal agencies in a manner that encourages the most beneficial public use. As such, NTIA has a role in ensuring that federally allocated spectrum is used efficiently. According to NTIA's Redbook and agency officials, efficient use includes ensuring that federal agencies' decisions to use spectrum to support government missions are adequately justified, that all viable tradeoffs and options are explored before deciding to use spectrum-dependent technology, and that these tradeoffs are continuously reviewed to determine if the need for spectrum has changed over time. NTIA's primary guidance to federal agencies is the technical guidance in its Redbook on how to manage assigned spectrum. In 2003, the Bush Administration directed NTIA to develop strategic plans, and in March 2008, NTIA issued its report on federal spectrum use, entitled the Federal Strategic Spectrum Plan. While the intent of the Federal Strategic Spectrum Plan was to identify the current and projected spectrum requirements and long-range planning processes for the federal government, we reported in 2011 that the final plan is limited in these areas. For example, the plan does not identify or include quantitative governmentwide data on federal spectrum needs.
Instead, NTIA's plan primarily consists of a compilation of the plans submitted by 15 of the more than 60 agencies that use federal spectrum. Additionally, because the submitted agency plans contained limited information regarding future requirements and technology needs, NTIA concluded that its "long-range assumptions are necessarily also limited." Furthermore, NTIA's plan did not contain key elements and best practices of strategic planning. NTIA's primary spectrum management operations include authorizing federal frequency assignments and certifying spectrum-dependent equipment for federal users; however, these processes are primarily focused on interference mitigation as determined by IRAC and do not focus on ensuring the best use of spectrum across the federal government. In 2011, we found that the process established by federal regulations for review and approval of frequency assignments and system certifications is technical in nature, focusing on ensuring that the new frequency or system an agency wants to use will not interfere with another agency's operations. According to NTIA officials, this focus on day-to-day spectrum activities, such as interference mitigation, is due to the agency's limited resources. This focus, while important, gives limited consideration to the overall best use of federally allocated spectrum. Therefore, NTIA's current processes provide limited assurance that federal spectrum use is evaluated from a governmentwide perspective to ensure that decisions will meet the current and future needs of the agencies, as well as the federal government as a whole. NTIA's data management system is antiquated and lacks transparency and internal controls. In 2011, we reported that NTIA collects all federal spectrum data in the Government Master File (GMF), which according to NTIA officials is an outdated legacy system that was developed primarily to store descriptive data.
These data are not detailed enough to support the current analytical needs of NTIA or other federal users, as the system was not designed to conduct such analyses. NTIA does not generate any data, but maintains agency-reported spectrum data in the GMF, which are collected during the frequency assignment and review processes. NTIA’s processes for collecting and verifying GMF data lack key internal controls, including those focused on data accuracy, integrity, and completeness. Control activities such as data verification and reconciliation are essential for ensuring accountability for government resources and achieving effective and efficient program results. In 2011, we reported that NTIA’s data collection processes lack accuracy controls and do not provide assurance that data are being accurately reported by agencies. Rather, NTIA expects federal agencies to supply accurate and up-to-date data submissions, but it does not provide agencies with specific requirements on how to justify that the agencies’ spectrum assignments will fulfill their mission needs. NTIA is developing a new data management system—the Federal Spectrum Management System (FSMS)—to replace the GMF. According to NTIA officials, the new system will modernize and improve spectrum management processes by applying modern information technology to provide more rapid access to spectrum and make the spectrum management process more effective and efficient. NTIA projects that FSMS will improve existing GMF data quality, but not until 2018. According to NTIA’s FSMS transition plan, at that time data accuracy will improve by over 50 percent. However, in the meantime it is unclear whether important decisions regarding current and future spectrum needs are based on reliable data. 
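The verification and reconciliation controls that the report finds lacking can be pictured with a minimal sketch: compare agency-reported frequency assignments against a master file and flag records that are missing or disagree. All identifiers and frequency values below are invented for illustration; this is not NTIA's actual process.

```python
# Hypothetical reconciliation of agency-reported assignments (MHz)
# against a master-file extract; discrepancies are flagged for follow-up.
agency_reported = {"A1001": 1695.0, "A1002": 3550.0, "A1003": 406.1}
master_file     = {"A1001": 1695.0, "A1002": 3560.0}

# Records the agency reports but the master file lacks.
missing_from_master = sorted(set(agency_reported) - set(master_file))
# Records present in both but with disagreeing values.
mismatched = sorted(k for k in agency_reported
                    if k in master_file and agency_reported[k] != master_file[k])
```

Routinely surfacing such discrepancies is the kind of accuracy control that would give greater assurance that agency-reported data are complete and up to date.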
In response to the government initiatives to make a total of 500 MHz of spectrum available for wireless broadband, in 2010 NTIA (1) identified 115 MHz of federally allocated spectrum to be made available for wireless broadband use within the next 5 years, referred to as the Fast Track Evaluation, and (2) developed an initial plan and timetable for repurposing additional spectrum for broadband, referred to as the 10-Year Plan. Fast Track Evaluation. NTIA and the Policy and Plans Steering Group (PPSG) identified and recommended portions of two frequency bands, totaling 115 MHz of spectrum within the ranges of 1695–1710 MHz and 3550–3650 MHz, to be made available for wireless broadband use. For each of these bands, NTIA reviewed the number of federal frequency assignments within the band, the types of federal operations and functions that the assignments support, and the geographic location of federal use. Since clearing these bands of federal users and relocating incumbent federal users to new bands was not an option in the given time frame, the bands that NTIA recommended be made available will be opened to geographic sharing between incumbent federal users and commercial broadband. 10-Year Plan. By a presidential memorandum, NTIA was directed to collaborate with FCC to make available 500 MHz of spectrum over the next 10 years, suitable for both mobile and fixed wireless broadband use, and to complete, by October 1, 2010, a specific plan and timetable for identifying and making the 500 MHz available for broadband use. NTIA publicly released this plan in November 2010. In total, NTIA and the National Broadband Plan identified 2,264 MHz of spectrum to analyze for possible repurposing, of which 639 MHz is exclusively used by the federal government and will be analyzed by NTIA. Additionally, NTIA will collaborate with FCC to analyze 835 MHz of spectrum that is currently located in bands shared by federal and nonfederal users.
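The Fast Track total cited above follows directly from the two frequency ranges; the check below is simple arithmetic on the report's figures.

```python
# Band edges in MHz, taken from the report text.
fast_track_bands = [(1695, 1710), (3550, 3650)]

# 1695-1710 MHz spans 15 MHz and 3550-3650 MHz spans 100 MHz,
# for the 115 MHz total NTIA recommended.
fast_track_total = sum(high - low for low, high in fast_track_bands)
```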
Furthermore, NTIA has stated that it plans to seek advice and assistance from CSMAC, its federal advisory committee comprised of industry representatives and experts, as it conducts analyses under the 10-Year Plan. (The presidential memorandum directing this work, Unleashing the Wireless Broadband Revolution, is published at 75 Fed. Reg. 38387.) Some of this spectrum had previously been evaluated for reallocation, and in 2001 we reported that adequate information was not then available to fully identify and address the uncertainties and risks of reallocation. Industry stakeholders, including wireless service providers, representatives of an industry association, and a think tank representative we contacted in 2011, expressed concerns over the usefulness of the spectrum identified by NTIA in the Fast Track Evaluation, since most of the spectrum identified (100 of the 115 MHz) is outside the range considered to have the best propagation characteristics for mobile broadband. Overall, there has been limited interest in bands above 3 GHz for mobile broadband use because, according to industry stakeholders, there has been minimal development of mobile broadband in bands above 3 GHz and no foreseeable advances in this area at this time. According to industry representatives, the 1755–1780 MHz band that NTIA considered as part of the Fast Track Evaluation has the best characteristics for mobile broadband use, and it is internationally harmonized for this use. NTIA did not select this band to be made available in the 5-year time frame due to the large number of federal users currently operating there. However, NTIA identified it as the first band to be analyzed under the 10-Year Plan to determine if it can be made available for commercial broadband use.
An industry stakeholder stated that the 1695–1710 MHz band identified by NTIA in the Fast Track Evaluation would be the second-best alternative for wireless broadband if the 1755–1780 MHz band were not made available; however, the 1695–1710 MHz band is not currently used internationally for wireless broadband, which may reduce device manufacturers' incentive to develop technology for these frequencies. While federal spectrum users often share spectrum among themselves, they may have little economic incentive to otherwise use spectrum efficiently, including sharing it with nonfederal users. From an economic perspective, when a consumer pays the market price for a good or service and thus cannot get more of it without additional expense, the consumer has an incentive to get as much value and efficiency out of the good as possible. If no price is attached to a good—which is essentially the case with federal agencies' use of spectrum—the normal market incentive to use the good efficiently may be muted. In the case of federal spectrum users, obtaining new spectrum assignments may be difficult, so an agency may have an incentive to conserve and efficiently use the spectrum it currently holds or shares, but that incentive is likely weaker than if the agency had to pay a market price for all of its spectrum needs. As such, federal spectrum users do not fully face a market incentive to conserve their use of spectrum or use it in an efficient manner. The full market value of the spectrum assigned to federal agencies has not been assessed but, according to one expert, would most likely be in the tens of billions of dollars.
Similarly, many nonfederal users, such as television broadcasters and public safety entities, did not pay for spectrum when it was assigned to them and do not pay the full market price for their continuing use of spectrum, so, like federal agencies, they may not fully have market-based incentives to use spectrum efficiently. While licensed commercial users who purchased spectrum at auction generally have market incentives to use their spectrum holdings efficiently, these users also have incentives that work against sharing spectrum, except in those instances where the incumbent licensee is unlikely to build out its network or offer services to a particular area, such as certain remote, sparsely populated areas. FCC officials, industry stakeholders, and experts told us that these users may prefer not to share their unused spectrum because they are concerned about the potential for interference to degrade service quality for their customers. They also may prefer not to give potential competitors access to spectrum. Industry stakeholders and experts also said that companies seeking spectrum may prefer obtaining exclusive spectrum licenses over sharing spectrum that is licensed to another company or a federal user, given uncertainties about regulatory approvals, interference, and enforcement if interference occurs. There are several barriers that can deter sharing. One such barrier is that federal agencies will not risk mission failure, particularly when there are security and public safety implications. According to the agency officials we contacted, federal agencies will typically not agree to share spectrum if doing so puts their missions at risk. The officials stressed that when missions have security and safety implications, sharing may pose unacceptable risks. For example, the military tests aircraft and trains pilots over test ranges that can stretch hundreds of miles, maintaining constant wireless contact.
The ranges, according to officials, cannot share the communication frequencies because even accidental interference in communications with an aircraft could result in catastrophic mission failure. Further, sharing information about such flights could expose particular pilots and aircraft, or the military's larger mission, to increased risk. According to FCC officials, concerns about risk can drive conservative technical standards that can make sharing impractical. In general, the technical analyses and resulting standards are based on worst-case scenarios, not on assessments of the most likely scenario or a range of scenarios. Moreover, in contrast to FCC's open rulemaking process, there is little opportunity for public input into the standards-setting process. Stakeholders may meet or have discussions with NTIA and the relevant federal agencies, but this occurs without any formal public process. Nor do stakeholders have any effective means of appeal other than asking FCC to reject NTIA's analysis or standards. Another barrier is that spectrum sharing can be costly. Stakeholders told us that sharing federal spectrum can be costly for both the nonfederal and federal users seeking to share, for the following reasons: (1) mitigation of potential interference can be costly in terms of equipment design and operation; (2) users applying to share federal frequencies may find that those frequencies are being used by more than one federal agency or program, and the need to mitigate interference for multiple users can increase the cost of sharing spectrum in that band; and (3) federal users often rely on proven older technology that was designed to use spectrum to meet a specific mission and typically is not conducive to operating as efficiently or flexibly as state-of-the-art technologies might now allow, and limited budgets may prevent them from investing in newer technology that could facilitate easier sharing.
Additionally, we found that spectrum sharing approval and enforcement processes can be lengthy and unpredictable. According to stakeholders, FCC and NTIA processes can cause two main problems when nonfederal users seek to share federal spectrum, or when nonfederal users share with one another. First, the spectrum-sharing approval process between FCC and NTIA can be lengthy and unpredictable, and the risk associated with it can be costly for new entrants. FCC officials told us that its internal processes can potentially last years if a rulemaking is required to accommodate shared use of spectrum. In addition to that time, NTIA officials said that IRAC's evaluation of potential harmful interference could take months. In one example, the Department of Defense, along with other federal agencies and nonfederal entities, currently shares a spectrum band (413–457 MHz) with a nonprofit medical devices provider for use in implant products for veterans. It took approximately 2 years (from 2009 to 2011) for FCC and NTIA to facilitate this arrangement, as FCC required a rulemaking and NTIA required a lengthy evaluation of potential interference. This nonprofit is funded by an endowment and was not dependent on income from the device to sustain itself during this process, but such delays, and the potential for a denial, could discourage for-profit companies from developing and investing in business plans that rely on sharing federal spectrum. Second, stakeholders we interviewed told us that when federal or nonfederal users share spectrum, both parties are concerned that harmful interference may affect their missions or operations if the other party overreaches or does not follow the agreement. They also fear that enforcement actions taken by FCC will happen too slowly to protect their interests or that enforcement outcomes can be unpredictable.
Besides lacking incentives and facing other barriers, users may also have difficulty identifying spectrum suitable for sharing because data on available spectrum are incomplete or inaccurate, and information on some federal spectrum usage is not publicly available. According to NTIA officials, coordinating spectrum sharing requires accurate data on users, frequencies, locations, times, power levels, and equipment, among other things. We recently reported that both FCC's and NTIA's spectrum databases may contain incomplete and inaccurate data. Further, federal agency spectrum managers told us that agencies have not been asked to regularly update their strategic spectrum plans, in which they were required to include an accounting of spectrum use. As mentioned, NTIA is developing a new data system that officials believe will provide more robust data that will enable more accurate analysis of spectrum usage and potential interference, which may in turn identify more sharing opportunities. In addition, recently proposed legislation would require in part that FCC, in consultation with NTIA and the White House Office of Science and Technology Policy, prepare a report for Congress that includes an inventory of each radio spectrum band they manage. The inventory is also to include data on the number of transmitters and receiver terminals in use, if available, as well as other technical parameters—coverage area, receiver performance, location of transmitters, percentage and time of use, and a list of unlicensed devices authorized to operate in the band and a description of their use—that allow for more specific evaluation of how spectrum can be shared. However, experts and federal officials we contacted told us that there may be some limitations to creating such an inventory. For instance, measuring spectrum usage can be difficult because it can only be accomplished on a small scale, and technologies to measure or map widespread spectrum usage are not yet available.
Additionally, FCC and NTIA officials told us that information on some federal spectrum bands may never be made publicly available because of the sensitive and classified nature of some federal spectrum use. We have previously reported that to improve spectrum efficiency among federal agencies, Congress may wish to consider evaluating what mechanisms could be adopted to provide incentives and opportunities for agencies to move toward more efficient use of spectrum, which could free up some spectrum allocated for federal use to be made available for sharing or other purposes. Federal advisors and experts we talked to identified several options that could provide incentives and opportunities for more efficient spectrum use and spectrum sharing by federal and nonfederal users, which include, among others: (1) assessing spectrum usage fees; (2) expanding the availability of spectrum for unlicensed uses; and (3) increasing the federal focus on research and development of technologies that can enable spectrum sharing and improve spectral efficiency. Assessing spectrum usage fees. Several advisory groups and spectrum industry experts, including those we interviewed, have recommended that spectrum fees be assessed based on spectrum usage. As previously mentioned, with the exception of administrative fees for frequency assignments, federal users incur no costs for using spectrum. As such, federal users may have little incentive to share spectrum assigned to them with nonfederal users or identify opportunities to use spectrum more efficiently—except to the extent that sharing or more efficient use helps them achieve their mission requirements. In 2011, the CSMAC Incentives Subcommittee recommended that NTIA and FCC study the implementation of spectrum fees to drive greater efficiency and solicit input from both federal and nonfederal users who might be subject to fees. 
The National Broadband Plan has also recommended that Congress consider granting FCC and NTIA authority to impose spectrum fees on unauctioned spectrum license holders—such as TV broadcasters and public safety entities—as well as government users. Fees may help to free spectrum for new uses, since licensees who use spectrum inefficiently may reduce their holdings or pursue sharing opportunities once they bear the opportunity cost of letting it remain fallow or underused. Further, FCC officials told us that they have proposed spectrum usage fees at various times, including in FCC’s most recent congressional budget submission, and have requested the legislative authority to implement such a program. While noting the benefits, the CSMAC Incentives Subcommittee report mentions specific concerns about the impact of spectrum fees on government users. For instance, some CSMAC members expressed concern that fees do not fit into the federal annual appropriations process and that new appropriations to cover fees are neither realistic nor warranted in the current budget environment. Other members suggested that fees will have no effect because agencies will be assured additional funds for their spectrum needs. Similarly, the National Broadband Plan notes that a different approach to setting fees may be appropriate for different spectrum users, and that a fee system must also avoid disrupting public safety, national defense, and other essential government services that protect human life, safety, and property. To address some of the concerns regarding agency budgets, the recent PCAST report recommended the use of a “spectrum currency” process to promote spectrum efficiency. Rather than using funds to pay for spectrum, federal agencies would each be given an allocation of synthetic currency that they could use to “buy” their spectrum usage rights. Usage fees would be set based on valuations of comparable private sector uses for which the market has already set a price. 
Agencies would then have an incentive to use their assignments more efficiently or share spectrum. In the PCAST proposal, agencies would also be rewarded for making spectrum available to others for sharing, by being reimbursed for their investments in improving spectrum sharing from a proposed Spectrum Efficiency Fund.

Expanding the availability of spectrum for unlicensed use. Unlicensed spectrum use is inherently shared spectrum access, and according to spectrum experts we interviewed and other stakeholders, unlicensed use of spectrum is a valuable complement to licensed spectrum, and more spectrum could be made available for unlicensed use. Spectrum for unlicensed use can be used efficiently and for high-value applications, like Wi-Fi. Increasing the amount of spectrum for unlicensed use may allow more users to share without going through lengthy negotiations and interference mitigations, and also allow for more experimentation and innovation. More recently, FCC has provided unlicensed access to additional spectrum, known as TV “white spaces,” to help address spectrum demands. The white spaces refer to the buffer zones that FCC assigned to television broadcasters to mitigate unwanted interference between adjacent stations. With the more efficient TV transmission capabilities that resulted from the digital television transition, the buffer zones are no longer needed, and FCC approved the previously unused spectrum for unlicensed use. To identify available white space spectrum, devices must access a database, which responds with a list of the frequencies that are available for use at the device’s location. As an example, one local official explained that his city uses TV white space spectrum to provide a network of public Wi-Fi access and public safety surveillance functions.

Increasing the federal focus on research and development of technologies. 
Several technological advances promise to make sharing easier but are still at early stages of development and testing. For example, various spectrum users and experts we contacted mentioned the potential of dynamic spectrum access technology. If made fully operational, dynamic spectrum access technology will be able to sense available frequencies in an area and jump between frequencies to seamlessly continue communication as the user moves geographically and through the spectrum. According to experts and researchers we contacted, progress has been made, but there is no indication of how long it will be before this technology is fully deployable. Such new technologies can obviate or lessen the need for extensive regulatory procedures to enable sharing and can open up new market opportunities for wireless service providers. If a secondary user or sharing entity employs these technologies, the incumbent or primary user would theoretically not experience harmful interference, and the agreements and rulemakings that are currently needed to enable sharing may be streamlined or unnecessary. Although industry participants indicated that extensive testing under realistic conditions is critical to conducting basic research on spectrum-efficient technologies, we found that only a few companies are involved in such research, and they may experience challenges in the testing process. Companies tend to focus technology development on current business objectives as opposed to conducting basic research that may not show an immediate business return. For example, NTIA officials told us that one company that indicated it would participate in NTIA’s dynamic spectrum access testing project reassigned its technologist from the testing effort to a project more closely related to its internal business objectives. Furthermore, some products are too early in the development stage to even be fully tested. 
For example, NTIA officials also said six companies responded to NTIA’s invitation to participate in the previously mentioned dynamic spectrum access testing project. However, only two working devices were received for the testing, and a third device received did not work as intended. Other companies that responded told NTIA that they only had a concept and were not ready to test an actual prototype. Recent federal advisory committee recommendations emphasize the importance of funding and providing incentives for research and development endeavors. For example, to promote research in efficient technologies, PCAST recommended that (1) the Research and Development Wireless Innovation Fund release funds for this purpose and (2) the current Spectrum Relocation Fund be redefined as the Spectrum Efficiency Fund. As discussed, this adjustment would allow federal agencies to be reimbursed for general investments in improving spectrum sharing. Similarly, CSMAC recommended the creation of a Spectrum Innovation Fund. Unlike the Spectrum Relocation Fund, which is strictly limited to the actual costs incurred in relocating federal systems from auctioned spectrum bands, the Spectrum Innovation Fund could also be used for spectrum sharing and other opportunities to enhance spectrum efficiency.

Radio frequency spectrum is a scarce national resource that enables wireless communications services vital to the U.S. economy and to a variety of government functions, yet NTIA has not developed a strategic, governmentwide vision for managing federal use of this valuable resource. NTIA’s spectrum management authority is broad in scope, but NTIA’s focus is on the narrow technical aspects of spectrum management, such as ensuring new frequency assignments will not cause interference to spectrum-dependent devices already in use, rather than on whether new assignments should be approved based on a comprehensive evaluation of federal spectrum use from a governmentwide perspective. 
Lacking an overall strategic vision, NTIA cannot ensure that spectrum is being used efficiently by federal agencies. Furthermore, agencies are not required to submit justifications for their spectrum use and NTIA does not have a mechanism in place to validate and verify the accuracy of spectrum-related data submitted by the federal agencies. This has led to decreased accountability and transparency in how federal spectrum is being used and whether the spectrum-dependent systems the agencies have in place are necessary. Without meaningful data validation requirements, NTIA has limited assurance that the agency-reported data it collects are accurate and complete. In our April 2011 report, we recommended that NTIA (1) develop an updated plan that includes key elements of a strategic plan, as well as information on how spectrum is being used across the federal government, opportunities to increase efficient use of federally allocated spectrum and infrastructure, an assessment of future spectrum needs, and plans to incorporate these needs in the frequency assignment, equipment certification, and review processes; (2) examine the assignment review processes and consider best practices to determine if the current approach for collecting and validating data from federal agencies can be streamlined or improved; and (3) establish internal controls for management oversight of the accuracy and completeness of currently reported agency data. With respect to our first recommendation, NTIA has not developed an updated strategic plan and previously noted that the Presidential Memorandum of June 28, 2010, and the Wireless Innovation Initiative provide significant strategic direction for NTIA and the other federal agencies. In September 2012, NTIA officials told us that NTIA intends to update its strategic plan by October 2013. NTIA concurred with our other two recommendations and is taking action to address them. 
For example, NTIA has proposed approaches to implement new measures to better ensure the accuracy of agency-reported data, and is taking steps to implement internal controls for its data management system in a cost-efficient manner. With respect to spectrum sharing, there are currently insufficient incentives to encourage more sharing, and even if incentives were created, several barriers to sharing will continue. Options to address these issues in turn create new challenges and may require further study.

Chairman Walden, Ranking Member Eshoo, and Members of the Subcommittee, this concludes my prepared statement. I will be happy to respond to any questions you may have at this time.

For further information on this testimony, please contact me at (202) 512-2834, or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Sally Moino and Andrew Von Ah, Assistant Directors; Amy Abramowitz; Colin Fallon; Bert Japikse; Elke Kolodinski; Maria Mercado; Erica Miles; and Hai Tran.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Demand for spectrum is increasing rapidly with the widespread use of wireless broadband devices and services. However, nearly all usable spectrum has been allocated either by NTIA for federal use or by the Federal Communications Commission (FCC) for commercial and nonfederal use. Federal initiatives are under way to identify federal spectrum that could be repurposed or possibly shared by federal users or wireless broadband providers and other nonfederal users. This statement discusses how NTIA manages spectrum to address governmentwide spectrum needs and the steps NTIA has taken to repurpose spectrum for broadband. As part of an ongoing review, the statement also discusses preliminary information on the factors that prevent spectrum sharing and actions that can encourage sharing and efficient spectrum use. This testimony is based on GAO's prior work on federal spectrum management and ongoing work on spectrum sharing. GAO analyzed NTIA processes, policies, and procedures, and interviewed relevant government officials, experts, and industry stakeholders.

The National Telecommunications and Information Administration (NTIA) is responsible for governmentwide federal spectrum management, but GAO reported in 2011 that NTIA’s efforts in this area had been limited. In 2003, the President directed NTIA to develop plans identifying federal and nonfederal spectrum needs, and in 2008, NTIA issued the federal plan. GAO found that the plan did not identify governmentwide spectrum needs, lacked key elements of a strategic plan, and did not conform to best practices for strategic planning. Furthermore, NTIA’s primary spectrum management operations do not focus on governmentwide needs. Instead, NTIA depends on agency self-evaluation of spectrum needs and focuses on mitigating interference among spectrum users, with limited emphasis on overall spectrum management. 
Additionally, NTIA’s data management system is antiquated and lacks internal controls to ensure the accuracy of agency-reported data, making it unclear if reliable data inform decisions about federal spectrum use. NTIA is developing a new data management system, but implementation is years away. Despite these limitations, NTIA has taken steps to identify spectrum that could potentially be made available for broadband use. For example, in 2010 NTIA evaluated various spectrum bands and identified 115 megahertz of spectrum that could be repurposed within the next 5 years. In doing so, NTIA worked with a special steering group consisting of the Assistant Secretaries with spectrum management oversight in agencies that were the major stakeholders in the spectrum bands under consideration. For each of the identified bands, NTIA reviewed the number of federal frequency assignments within the band, the types of federal operations and functions that the assignments support, and the geographic location of federal use. In addition to efforts to repurpose spectrum, industry stakeholders have also suggested that sharing spectrum between federal and nonfederal users be considered to help make spectrum available for broadband. Our ongoing work has identified several barriers that limit sharing. Primarily, many users may lack incentives to share assigned spectrum. Typically, paying the market price for a good or service helps to inform users of the value of the good and provides an incentive for efficient use. But federal agencies pay only a small fee to NTIA for spectrum assignments, and may, in some contexts, have little incentive to conserve or share it. Federal agencies may also have limited budgets to upgrade to more spectrally-efficient equipment that would better enable sharing. Nonfederal users are also reluctant to share spectrum. For instance, license holders may be reluctant because of concerns that spectrum sharing could encourage competition. 
A lack of information on federal spectrum use may limit users’ ability to easily identify spectrum suitable for sharing. GAO’s ongoing work suggests that some actions might provide greater incentives and opportunities for more efficient spectrum use and sharing. These actions could include assessing spectrum usage fees to provide economic incentive for more efficient use and sharing, expanding the availability of unlicensed spectrum, and increasing the federal focus on research and development of technologies that can enable spectrum sharing and improve spectral efficiency. However, all of these actions also involve challenges and may require further study.
As a DHS component, TSA follows the department’s policies and procedures for managing its acquisition programs. DHS has established policies and procedures for acquisition management, test and evaluation, and resource allocation. The department uses these policies and procedures to deliver systems that are intended to close critical capability gaps and enable DHS to execute its missions and achieve its goals. DHS’s policies govern TSA’s acquisition programs and are primarily set forth in DHS Acquisition Management Directive 102-01 (AD 102). DHS acquisition policy establishes that an acquisition decision authority shall review the program through the acquisition life cycle phases. Under this directive, an important aspect of an acquisition decision authority’s review and approval of acquisition programs is to ensure that key acquisition documents are completed, including (1) a life cycle cost estimate, which provides an exhaustive and structured accounting of all resources and associated cost elements required to develop, produce, deploy, and sustain a program, and (2) the acquisition program baseline, which establishes a program’s cost, schedule, and performance parameters. When an acquisition program exceeds cost, schedule, or performance thresholds, it is considered to be in breach. TSA’s acquisition policies, which supplement DHS policies, generally designate roles and responsibilities and identify the procedures that TSA is to use to implement the requirements in DHS policies. For example, a TSA policy designates an official to ensure TSA’s acquisition programs comply with AD 102, including the review and approval of key acquisition program management documents and determining required acquisition documentation for TSA programs. In addition, a TSA policy guide provides the procedures that TSA is to use to meet the acquisition review and reporting requirements defined in AD 102. 
TSA has policies and procedures that address TSARA’s requirements for justifying acquisitions, including a security-related technology acquisition. TSA had implemented most of these procedures prior to TSARA’s enactment because they were required by existing DHS and TSA policy. For acquisition justifications, TSARA provides that before TSA implements any security-related technology acquisition the agency must, in accordance with DHS policies and directives, conduct an analysis to determine whether the acquisition is justified, or whether the benefits exceed the cost of the acquisition. TSA’s policies and procedures address this requirement. One change resulting from TSARA is the requirement that TSA notify Congress at least 30 days preceding contract awards for new security-related technology acquisitions exceeding $30 million, which TSA addressed by developing new procedures. See appendix I for our detailed analysis on the status of TSA’s efforts to implement all TSARA requirements. TSA policies and procedures address TSARA provisions related to justifying acquisitions by requiring the development and approval of specific acquisition documents, including a concept of operations and an analysis of alternatives, prior to the implementation of an acquisition. The concept of operations is to include identifying scenarios of transportation security risk and assessing how the use of the proposed acquisition would help improve transportation security. The analysis of alternatives is to include identifying different security solutions, including technology and non-technology solutions, and an analysis of the operational effectiveness, cost, and benefits of each viable solution. Regarding the requirement that congressional notification be made in advance of obtaining acquisitions of more than $30 million, TSA amended its policies to include the 30-day notification for contracts exceeding $30 million awarded after TSARA’s enactment. 
TSA also developed a template for a notification letter to Congress that is to include a certification by the TSA Administrator. Consistent with TSARA, TSA is to provide 5-day notice for contract awards that exceed $30 million to facilitate a rapid response if there is a known or suspected imminent threat to transportation security. TSA officials stated they will continue to provide 5-day notice for all individual task order or delivery order awards of $1 million or more based on policies in effect prior to TSARA’s enactment. According to TSA officials, TSA has not yet awarded a contract for security-related technology in excess of $30 million since TSARA’s enactment. These officials also said that there have been no acquisitions related to a known or suspected imminent threat to transportation security that would require TSA to immediately notify Congress since TSARA’s enactment.

TSA has policies and procedures in place that address TSARA’s requirements to establish acquisition baselines and review whether acquisitions are meeting these requirements. These policies and procedures were largely established prior to TSARA’s enactment. For example, TSA acquisition policies require that TSA prepare an acquisition program baseline, a risk management plan, and the acquisition program office staffing requirements before obtaining an acquisition. According to TSARA, TSA must report a breach if there is a cost overrun of more than 10 percent, a delay in actual or planned schedule for delivery of more than 180 days, or a failure to meet any performance milestone that directly affects security effectiveness. TSA’s TSARA Implementation Strategy Memorandum addresses TSARA’s requirements for reporting breaches to Congress. Specifically, the memorandum designates the Office of Acquisition as being responsible for implementing TSARA breach requirements and includes procedures that outline the steps TSA should take to notify DHS and Congress about breaches. 
According to TSA’s TSARA Implementation Strategy Memorandum, TSA had existing policies that require breach memorandums and remediation plans when breaches occur. The procedures state that in the event of a breach, TSA will provide a report to Congress that includes the cause and type of breach and a corrective action plan. In addition, TSA officials have briefed acquisition program staff about TSARA’s breach notification requirement changes. Prior to TSARA’s enactment, TSA followed DHS acquisition policies that define breaches against an acquisition program baseline as performance failures, schedule delays, or cost overruns of up to 15 percent, and did not mandate reporting breaches to congressional committees. As required by TSARA, TSA established procedures to notify Congress within 30 days of schedule delays, cost overruns, or performance failures constituting a breach against acquisition program baselines. As of December 2015, TSA reported that it had not experienced such breaches in any existing acquisitions since TSARA’s enactment.

TSA’s policies and procedures address TSARA requirements for managing inventory related to, among other things, (1) using existing units before procuring more equipment; (2) establishing policies and procedures to track the location, use, and quantity of security-related equipment in inventory; and (3) providing for the exception from using just-in-time logistics, a process that involves delivering equipment directly from manufacturers to airports to avoid the need to warehouse equipment. For example, TSA’s Security Equipment Management Manual describes the policies and procedures that require TSA to use equipment in its inventory if, for example, an airport opens a new terminal or TSA recapitalizes security-related technology at the end of its life cycle. Additionally, the current TSA system tracks the location, utilization status, and quantity of security-related equipment in inventory. 
Further, TSA’s policies and procedures describe TSA’s system of internal controls in place prior to TSARA’s enactment to conduct reviews, which require reporting and following up on corrective actions. TSA’s Security Equipment Management Manual provides for two exemptions from just-in-time logistics that are applicable if just-in-time logistics would (1) inhibit planning needed for large-scale equipment delivery to airports or other facilities or (2) reduce TSA’s ability to respond to a terrorist threat.

In accordance with TSARA, TSA must execute its acquisition-related responsibilities in a manner consistent with, and not duplicative of, the FAR and DHS policies and directives. TSA policy documents state that TSA is required to ensure that its policies and directives are in accordance with the FAR and DHS acquisition and inventory policies and procedures. According to TSA’s TSARA Implementation Strategy Memorandum, TSA was able to address this requirement. For example, TSA formed a working group, chaired by TSA Executive Secretariat staff, as part of an effort to ensure that TSA implemented TSARA in a manner that was consistent with the FAR and DHS policies and directives. DHS officials further reported that TSA’s actions toward implementation of TSARA requirements are part of DHS’s Acquisition Review Board process and have not led to any duplication or inconsistency with the FAR or AD 102.

TSA submitted a Strategic Five-Year Technology Investment Plan (the Plan) to Congress that generally addresses TSARA-mandated elements. For example, the Plan that TSA submitted to Congress identifies capability gaps and security-related technology acquisition needs and procedures. Specifically, the Plan describes TSA’s test, evaluation, modeling, and simulation capabilities, and identifies security-related technologies that are at or near the end of their life cycles. 
In addition, the Plan identifies TSA’s efforts to provide the private sector with greater predictability and clarity about TSA’s security-related technology needs and acquisition procedures by sharing testing documents and plans. TSA also took steps to ensure that the Plan adhered to TSARA by (1) consulting with DHS officials and an advisory committee, (2) obtaining approval of the Secretary of Homeland Security prior to publishing the Plan, (3) incorporating private sector input on the Plan, and (4) identifying the nongovernment persons who contributed to writing the Plan. TSARA required TSA to submit a report to congressional committees on TSA’s performance record in meeting its published small business contracting goals during fiscal year 2014. In April 2015, TSA reported for fiscal year 2014 that it fell 1.5 percent short of its small business contracting goal of 23 percent, and 1.6 percent short of its Historically Underutilized Business Zones (HUBZone) program goal of 3 percent of its total contracts. To meet its small business contracting goal, TSA would have had to award an additional $22 million in contracts to small businesses of its $1.5 billion in total contracts. According to TSA officials, small businesses’ limited ability to support security-related technology acquisition and TSA’s existing large scale prime contract awards to large businesses for human resources and information technology are part of the challenges that it faces in meeting its small business goals. TSARA provides that if the small business contracting goals are not met, or if the agency’s performance is below the published DHS small business contracting goals, TSA’s report is to include a list of challenges that contributed to TSA’s performance and an action plan, prepared after consultation with other federal departments and agencies. 
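The dollar figure behind the small business shortfall follows from simple arithmetic; the sketch below uses the rounded values from TSA's report (a 1.5 percentage point shortfall on $1.5 billion in total contracts), so the exact totals may differ slightly.

```python
# Reproduce the reported fiscal year 2014 small business contracting shortfall.
# Inputs are the rounded figures TSA reported, not exact contract totals.
total_contracts = 1.5e9   # total TSA contract dollars, fiscal year 2014
shortfall_pct = 1.5       # percentage points below the 23 percent goal

# Additional dollars TSA would have needed to award to small businesses
additional_needed = total_contracts * shortfall_pct / 100
print(f"${additional_needed / 1e6:.1f} million")  # prints $22.5 million, consistent
                                                  # with the roughly $22 million reported
```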
The report submitted by TSA includes an action plan for integrating small business concerns into acquisition planning procedures and enhancing outreach to disabled-, women-owned, and HUBZone businesses. TSA’s small business officials also said that they attend monthly meetings with officials from other DHS components’ small business units and conduct outreach events with small businesses. To develop the action plan, TSA did not consult with the Secretary of Defense and the heads of federal departments and agencies that met their small business goals, as required by TSARA. However, TSA officials said that they met with the Department of Defense Office of Small Business Programs after they developed the action plan, and the agency plans to fully comply with TSARA’s small business requirements in the future.

DHS and TSA officials reported that to date TSA has not identified any efficiencies, cost savings, or delays from its implementation of TSARA. TSA officials further stated that because many of the current policies and procedures that meet the provisions of the law were in place prior to TSARA’s enactment, TSARA was unlikely to result in major cost savings, efficiencies, or delays. TSA officials reported that they recently developed a mechanism to track the agency’s progress in implementing follow-on actions identified in the Plan, such as ongoing stakeholder engagement, as well as to track progress and identify challenges and best practices in implementing TSARA requirements to help update the Plan.

We provided a draft of this report to DHS for review and comment. DHS did not provide formal comments but provided technical comments from TSA, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretary of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-7141 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In the following tables, we identify the status of Transportation Security Administration (TSA) efforts to address requirements of the Transportation Security Acquisition Reform Act (TSARA). Jennifer A. Grover, (202) 512-7141 or [email protected]. In addition to the contact named above, Glenn Davis (Assistant Director), Nima Patel Edwards (Analyst-in-Charge), David Alexander, Rodney Bacigalupo, Richard Hung, Thomas Lombardi, Luis E. Rodriguez, Tovah Rom, Carley Shinault, and Edith Sohna made key contributions to this report.
Within DHS, TSA is the federal agency with primary responsibility for preventing and defending against terrorist and other threats to domestic transportation systems. From fiscal years 2002 through August 2015, TSA obligated $13.4 billion to acquire security-related technologies such as through the Electronic Baggage Screening Program and the Passenger Screening Program. However, GAO and the DHS Office of Inspector General have reported that TSA did not fully follow DHS policies in deploying Advanced Imaging Technology systems to screen passengers and in estimating costs to screen checked baggage, and faced challenges in managing inventory. Enacted in December 2014, TSARA specifies measures that TSA must take to improve transparency and accountability in acquiring security-related technologies. TSARA contains a provision that GAO report to Congress on TSA's progress in implementing TSARA. This report examines TSA's actions taken toward addressing TSARA. GAO is not fully evaluating the extent to which TSA is implementing the act at this time because TSA has not undertaken an acquisition of security-related technology subject to the requirements of the act since its enactment. Pursuant to TSARA, GAO will report again on TSA's implementation of the act in approximately 3 years. TSA provided technical comments on a draft of this report which GAO incorporated as appropriate. DHS did not provide formal comments. The Transportation Security Administration (TSA) in the Department of Homeland Security (DHS) has policies and procedures that generally address requirements of the December 2014 Transportation Security Acquisition Reform Act (TSARA). Specifically, TSA policy and procedures address TSARA requirements for justifying acquisitions, establishing baselines, managing inventory, and submitting plans, among other requirements. 
Justifying Acquisitions. TSA had taken action toward addressing most TSARA requirements related to justifying acquisitions prior to TSARA's enactment because they were required by existing DHS and TSA acquisition policies. Consistent with TSARA, TSA amended its policies to notify Congress within 30 days of awarding contracts exceeding $30 million for the acquisition of security-related technology. According to agency officials, TSA has not made any such new acquisitions since the enactment of TSARA.

Acquisition Baselines. TSA policies require that it prepare an acquisition program baseline, risk management plan, and staffing requirements before acquiring security-related technology. Consistent with TSARA, TSA established policies to notify Congress within 30 days of making a finding of performance failures, schedule delays, or cost overruns constituting a breach against acquisition program baselines. TSA reported that it had not experienced breaches in any existing acquisitions (i.e., those in place prior to December 2014) since the enactment of TSARA.

Managing Inventory. TSA's policies and procedures address TSARA requirements for using existing units before procuring more equipment; tracking the location, use, and quantity of security-related equipment in inventory; and using just-in-time delivery to avoid warehousing equipment.

Submitting Plans. TSA submitted its Technology Investment Plan and Small Business Report to Congress as required by TSARA. The Technology Investment Plan addresses required elements such as identifying security gaps and security-related technology needs and processes. The Small Business Report includes an action plan for integrating the concerns of small businesses into acquisition processes and increasing outreach to targeted small businesses.

DHS and TSA officials said that TSA has not yet identified any efficiencies, cost savings, or delays from its implementation of TSARA. 
They added that because many of the policies and procedures that meet the act's provisions were in place before TSARA's enactment, the act was unlikely to result in major efficiencies, cost savings, or delays. According to TSA officials, TSA has developed mechanisms to monitor various aspects of TSARA implementation, such as tracking progress in implementing planned technology programs.
In 1996, only 66 percent of U.S. children younger than 18—47 million—were covered by private health insurance. Most private insurance for children is acquired through a parent’s employer. However, in 1993, almost one-fourth of the workforce worked for an employer that did not cover dependents. In addition, even if employers offer coverage, the amount that employees must pay toward family coverage may make health insurance unaffordable. Since the late 1980s, workers’ costs for family coverage have risen sharply. Increases in insurance costs may affect children disproportionately, since the 71 million children younger than 18 represent 27 percent of the U.S. population but 42 percent of the poor. Even if children have insurance, their coverage—and their relationship with their providers—may be disrupted if their parents lose their jobs or change jobs frequently. Public health insurance for children is generally provided through the Medicaid program. Currently, about 15.5 million children younger than 18 (22 percent) are covered through Medicaid. The majority of low-income children in Medicaid (65 percent) have a working parent and, of those, about half have a parent working full time. To remain in Medicaid, families generally have their eligibility redetermined at least every 6 months. If family income or other circumstances change, children may go in and out of the Medicaid program during a year, disrupting their coverage. This can delay needed care, which can have long-term health consequences. Children are uninsured when they have neither public nor private coverage. In 1996, 10.6 million children (14.8 percent) were uninsured, living generally in lower-income working families. Compared with privately insured children, a higher proportion of their parents worked for small employers—the group least likely to offer health insurance.
In 1993, only a quarter of employees in firms with fewer than 10 employees and about half in firms with 10 to 24 employees reported that their employer offered a health insurance plan for workers and their dependents, compared with 89 percent in firms with 1,000 or more employees. Health insurance does not always cover the preventive care, such as immunizations, that children need to develop optimally. Nevertheless, most of the studies we analyzed used many different measures of access and found that insured children were more likely to have access to both preventive and acute or chronic health care. Children who were insured were more likely to be connected to the health care system through a physician. Having a primary care connection made it easier for children to get regular preventive care, acute care when ill, and more complex care as needed. Uninsured and lower-income children were more likely to be hospitalized for conditions that could have been treated through primary care. Most of the studies we reviewed showed that children who had health insurance had better access to preventive and primary health care than uninsured children. (See table 1.) They were more likely to have a primary care provider, which increased their access to both routine and more complex care. Children who had private health insurance were also more likely than children who had no insurance to get medical care from one source, and that source was more likely to be in a physician’s office. In addition, they were more likely to have seen a doctor recently and to have been up to date with their well-child care. A child’s having a usual source of care increases the likelihood he or she will receive preventive or acute health care. One research study based on nationally representative data found that 20 percent of all uninsured children lacked a usual source of care, compared with 7 percent of insured, white, nonpoor children. 
Using regression analysis to isolate the effect of insurance from race, income, and ethnicity, this study found that uninsured children were twice as likely to lack a usual source of care as insured children. Uninsured children were also more likely to lack after-hours care and to spend more time traveling and more time waiting to receive care. Similarly, another study found that 33 percent of uninsured children did not go to a physician’s office for their routine care, compared with 14 percent of insured children (insured privately or through Medicaid). Controlling for factors other than insurance, the study found that uninsured children were more than twice as likely as insured children to get care in places other than a physician’s office. Generally, lower-income children (whether uninsured or receiving public insurance) are less likely to go to a physician’s office for their care. The National Center for Health Statistics (NCHS) found that 94 percent of U.S. children—more than 65 million—had a usual source of care in 1993. Of these, 94 percent of privately insured children, 62 percent of publicly insured children, and 74 percent of uninsured children used a doctor’s office as their usual source of care. Conversely, 5 percent of privately insured children, 30 percent of publicly insured children, and 20 percent of uninsured children used a clinic as their regular source of care. Most experts believe that preschool children need regular visits to physicians to stay current in their immunizations and to be screened for health problems, but researchers found access problems for preschool children. About one-quarter of U.S. 3-year-olds born in 1988 had a gap in their health insurance coverage of at least 1 month, and almost 15 percent had a gap of 7 months or more or had never been covered. 
Preschool children who had gaps in coverage were more likely to have gone to multiple sites for care than children who had continuous insurance coverage, suggesting that the care they received was more likely to be sporadic and fragmented. Just over 40 percent of preschool children went to two or more sites of care (not counting emergency care). However, controlling for other factors affecting access, preschool children who had a gap in coverage of more than 6 months were 74 percent more likely to have gone to more than one site for care. Disruption of insurance coverage seems to be the salient factor because children who had no insurance were no more likely than insured children to have gone to multiple sites of care. Experts have stated that adolescents can benefit from the guidance of a trusted health provider to help them through a period when their bodies are changing and they may be tempted to take risks, such as having unprotected sex or using drugs, alcohol, or tobacco products. Yet uninsured adolescents also have access problems. Researchers found that adolescents who were not insured were less likely to have a usual source of care and regular provider. (See fig. 1.) Better access to primary care is important, because primary care is a gateway to better preventive care and needed specialized services. A number of studies found that uninsured children had fewer health care and dental visits and fewer preventive visits. Compared with the parents of low-income children who had public insurance like Medicaid, parents of uninsured children of all income levels were more likely to defer bringing them into care for financial reasons. Having a primary care provider has been shown to improve care by facilitating the timely receipt of complex care. One study showed that children in Medicaid or who had no insurance were much less likely to have contacted a primary care physician before they came to the hospital with appendicitis. 
Children whose families did not contact a primary care physician before hospital admission were operated on less quickly if they were admitted on weekends and were more likely to have a perforated appendix. Contact with a primary care provider, not insurance status, was the key to differing rates of this complication, but having private insurance did increase the likelihood that a child would have a relationship with a primary care physician. Six studies that controlled for other factors affecting access found that uninsured children were less likely to receive routine checkups, dental care, or any kind of doctor’s visit. Some of them compared routine visits made with the number of visits recommended by the American Academy of Pediatrics (AAP) (see table 2) and found that uninsured children were less likely to meet such standards. For example, one study found that 30 percent of uninsured children were not up to date with well-child care visits, as AAP recommends, compared with 22 percent of insured children. Compared with insured children, and controlling for other factors that affect access, uninsured children were 50-percent more likely not to have made any visits to a physician in the past year and almost twice as likely never to have had routine care. In a local California study, lack of insurance was the strongest predictor that children older than 5 had not seen a dentist in the past year, compared with privately insured children. Uninsured children were less likely to have received care when it was not an emergency. An analysis of the 1980 National Medical Care Utilization and Expenditure Survey, after adjusting for other factors affecting access, found that uninsured children had a 69-percent likelihood that they would use nonemergency ambulatory care during the year, compared with 81 percent for privately insured children. 
The uninsured children who had used health services had made fewer nonemergency ambulatory visits, compared with privately insured children. (See fig. 2.) Similarly, an analysis of a more recent survey also showed that being uninsured was a significant predictor of not using a physician’s services. Several studies found that uninsured children were not getting care for conditions that could be serious. Children who had no insurance had lower rates of treatment for injuries, including serious injuries such as broken bones or cuts requiring stitches, compared with children who had private insurance, and were less likely to get care when sick. Sometimes they received care later, after they had become sicker. Childhood injuries were fairly common, but insurance status affected a child’s chances of being medically treated for an injury. In 1988, children younger than 18 had total injury rates of 16.3 per 100. Serious injuries that resulted in restricted activity, bed days, surgery, hospitalization, or substantial pain represented about half of total injuries. A study that compared injury treatment for insured children (private insurance and Medicaid combined) and uninsured children found that the uninsured were less likely to be brought in for the treatment of injuries. The study’s researchers estimated that for children who had no coverage in 1988, the year of the study, between 20 and 30 percent of total injuries may not have been examined and treated by a health professional. At least 40 percent of serious injuries to uninsured children younger than 11 might not have been examined and treated. These researchers also found that Medicaid-insured children had treatment rates similar to privately insured children, suggesting that public insurance helped ensure that children would receive treatment for injuries. 
Their finding that families with Medicaid coverage sought health care for their children, while families of uninsured children did not, is consistent with findings from the RAND Health Insurance Experiment that families of poor children in cost-sharing plans were less likely to seek care for diagnoses related to trauma or accidents than families of poor children with free care. Uninsured children were less likely to receive treatment for some of the common illnesses of childhood. Uninsured children were about twice as likely to have received no care from a physician for pharyngitis, acute earache, recurrent ear infections, and asthma. (See fig. 3.) These are common conditions—with an incidence rate of 8 to 10 per 100 children—for which medical care is considered necessary. They can also have serious consequences for some children if they are left untreated. For example, pharyngitis, if caused by untreated group A streptococci, can lead to rheumatic fever. Untreated middle-ear infections can lead to long-term hearing loss and sometimes to related speech and language difficulties. Severe asthma can cause respiratory failure and death. Looking at rarer conditions, one study examined severity of illness when privately insured and underinsured children were diagnosed with inflammatory bowel diseases. Inflammatory bowel diseases (Crohn’s disease and ulcerative colitis) can result in absence from school, progressive malnutrition, weight loss, anemia, depression, and fatigue. Early diagnosis can catch these diseases before they have progressed so that they can be treated with less-aggressive therapies. The study’s authors, comparing a limited number of cases of underinsured children who had these rare illnesses with insured children who had the same illnesses, found that children who were underinsured had 2-1/2 times the weight loss of insured children and had waited 8 months longer before diagnosis.
The children’s laboratory results also indicated that they were sicker before diagnosis and were more likely to be anemic. The authors suggested that delay in diagnosis could have occurred for several reasons, such as seeing different physicians at the same clinic or emergency room or not being able to get timely appointments with subspecialists. A lack of appropriate ambulatory care can cause children to be inappropriately hospitalized when they could have been treated as outpatients. Several researchers have studied hospital admissions among adults and children for conditions that can be managed with good ambulatory care. In general, they found that U.S. communities with poor access to ambulatory care—that is, low-income communities with many residents uninsured or enrolled in Medicaid—had higher rates of this kind of hospitalization. In contrast, hospital admissions in Spain for conditions sensitive to ambulatory care did not vary for children living in lower- and higher-income neighborhoods. Lower-income U.S. neighborhoods had higher avoidable hospitalization rates compared with higher-income neighborhoods for both children and adults. Income differences in avoidable hospitalizations dropped for persons 65 years old or older, probably because of their Medicare coverage. Compared with privately insured patients in the same age category, uninsured patients had higher rates of avoidable hospitalization. Medicaid patients had even higher rates. Most of the potentially avoidable hospitalizations for children younger than 15 were for pneumonia or asthma. Communities where people perceived that they had poorer access to medical care had higher rates of hospitalization for chronic diseases. Self-rated access to care was lower in communities that had greater proportions of uninsured residents, Medicaid beneficiaries, and persons without a usual source of care. 
Analysis of cross-national data also suggests that broader access to primary care reduces the number of hospitalizations for conditions sensitive to ambulatory care. Several researchers compared such admissions for children in Spain and several U.S. cities. Although rates of hospital admission were higher in general for children in Spain, rates of hospitalization for conditions sensitive to ambulatory care were lower. In addition, lower-income communities in Spain, unlike those in the United States, did not have higher rates of children’s hospital admissions sensitive to ambulatory care. The authors attributed this difference to Spanish children’s access to universal health care, with each child covered by a responsible primary care provider. Two studies indicated that when children were hospitalized, providers did not give the same type of care to uninsured and privately insured children. Providers may have been unwilling to provide the same intensity of care if the payment source was uncertain or likely to be less than actual charges. One group of researchers found that sick uninsured newborns in California had shorter hospital stays and received less-intensive care while in the hospital than privately insured sick newborns, even though the uninsured newborns and those in Medicaid were sicker. Newborns in Medicaid had lengths of stay and levels of service between those of uninsured and privately insured newborns. Adjusted mean length of stay was 15.2 days for privately insured newborns, 14.2 for Medicaid-covered newborns, and 12.7 for uninsured newborns. Total mean charges were $15,899 for privately insured newborns, $13,858 for Medicaid-covered newborns, and $11,414 for uninsured newborns. Charges per day were also significantly different depending on insurance status. In all, length of stay, total charges, and charges per day were 16-percent, 28-percent, and 10-percent less for uninsured than privately insured newborns.
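The 16-percent and 28-percent figures follow directly from the adjusted means reported above; a quick arithmetic check (the 10-percent charges-per-day figure came from separately adjusted per-day means, so it is not recomputed here):

```python
# Adjusted means for sick newborns by payer, from the California study above.
private_los, uninsured_los = 15.2, 12.7              # mean length of stay, in days
private_charges, uninsured_charges = 15_899, 11_414  # total mean charges, in dollars

def pct_less(insured_value, uninsured_value):
    """Percent by which the uninsured mean falls below the privately insured mean."""
    return (insured_value - uninsured_value) / insured_value * 100

print(f"length of stay: {pct_less(private_los, uninsured_los):.0f} percent less")         # 16
print(f"total charges: {pct_less(private_charges, uninsured_charges):.0f} percent less")  # 28
```

Note that these are relative gaps against the privately insured baseline, the same convention the study uses.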
Another group of researchers found that uninsured children and adults were generally sicker when admitted to the hospital, received less care given their condition on admission, and had higher mortality than privately insured children and adults. For children between ages 1 and 17, uninsured black males and white females rated significantly higher on a risk-adjusted mortality index, indicating that they were sicker on admission. The differences for uninsured black females and white males were not significant. Weekend admission, another indicator of being sicker on admission, was more likely for all uninsured children except black males. For the entire sample of all ages, uninsured people had shorter lengths of stay for conditions over which physicians had more discretion, and they had a lower probability of getting selected procedures that were either costly or more likely to be done at the physician’s discretion. The researchers cautioned that their adjustment for health risk might be imperfect. Nevertheless, they concluded that insurance coverage affects resource use for a broad spectrum of clinical problems, particularly elective and discretionary services. Many children have a chronic condition—one study estimated that 31 percent of children younger than 18 in 1988 had one or more chronic conditions. NCHS estimated that about 15 percent of children who had chronic conditions had special health care conditions that were disabling because they missed school, stayed in bed, limited their activities, or experienced pain or discomfort often. Many children who have chronic conditions are uninsured. In 1988, 21.1 percent of poor children and 9.7 percent of nonpoor children who had chronic conditions were uninsured. About 13 percent of children who had chronic conditions and special health care needs were uninsured—with low-income, Hispanic, and nonsuburban children more likely to be uninsured.
Having a regular source of care ensures continuity of care and professional monitoring of disease symptoms. Only a few studies looked at children who had chronic conditions and those who had special health care needs, and fewer controlled for factors that influence access other than insurance. However, these few studies found differences in access to care by insurance status. (See table 3.) For example, poor children who had chronic conditions but no insurance were more than twice as likely as similar insured children to lack a usual source of routine care or sick care. (See fig. 4.) Adjusting for severity of illness and other factors, they had only 2.3 physician contacts per year, compared with 3.7 for similar but insured children. An analysis that went even further to separate insurance status from other factors that could affect children’s access to care found that children who had chronic conditions and special health care needs were more than twice as likely to be hospitalized if they had public or private insurance as if they were uninsured, adjusting for differences in need for hospitalization based on their conditions. Many health plans do not cover a number of preventive, primary, and developmental health services needed by some or all children. Private policies differ in whether they cover well-child, dental, and vision care. In 1996, KPMG Peat Marwick reported that 57 percent of the indemnity health plans used by firms with 200 to more than 5,000 workers covered well-child care, compared with 96 percent of health maintenance organizations (HMO) and 73 percent of preferred provider organization (PPO) plans. Dental caries are a common problem for children, while poor vision can lead to problems in learning. Nevertheless, only about half or less of the private plans surveyed covered dental or vision care.
Medicaid’s child health benefit package, the Early and Periodic Screening, Diagnosis, and Treatment (EPSDT) program, requires coverage of well-child care, including dental, hearing, and vision care. Other publicly funded programs, such as the Florida HealthyKids Program and New York’s Child Health Plus Program, have not covered dental care; HealthyKids covered vision and hearing care, but Child Health Plus did not. Children who have chronic conditions and special health care needs may have particular difficulties because the services and supplies they need may not be covered by their insurance. For example, coverage for speech or physical therapy to help with developmental delays is often limited or explicitly excluded from private health insurance policies. In contrast, Medicaid’s EPSDT program covers a wide variety of developmental services. Some children are insured but with “bare-bones” policies that provide minimal coverage except for catastrophic costs. Such children, if eligible for Medicaid, could get coverage for services not covered by their private insurance. However, Title XXI—the new child health insurance program—was designed to be restricted to uninsured children, so that low-income children with coverage, even if it were only catastrophic coverage, would not be considered eligible. Florida HealthyKids and New York’s Child Health Plus, two state-based plans whose benefits have been grandfathered into Title XXI, have in the past covered insured children if their health insurance was not comparable in scope to the state-based coverage. Some experts have argued that special pediatric standards should be developed that recognize children’s specific needs, such as their need for health services to ensure optimal development. They have argued that such services should be considered medically necessary and should be covered by private health insurance. Medicaid’s standard of medical necessity is more global than that of private plans.
However, families in Medicaid have sometimes had difficulty finding mainstream providers willing to accept them as patients, which limits their ability to secure covered benefits for their children. Because providing uninsured children with publicly funded insurance improves their access to preventive and acute health services, families of newly covered children are more likely to report that their children’s health needs are being met. Children are more likely to be up to date with recommended preventive care and are more likely to see a physician. Two different researchers estimated that the expansion of publicly funded insurance in the United States and Canada decreased child mortality, in association with either more physicians’ visits or more prenatal care. NCHS reported that uninsured children were about three times as likely to have an unmet health need as children with publicly funded insurance (generally Medicaid). (See fig. 5.) Dental care was the most common unmet need for all children—but uninsured children were more than three times as likely not to receive needed dental care as children who had publicly funded insurance. Almost 16 percent of uninsured children were reported as needing but not receiving dental care. Parents of uninsured children reported delaying getting care for their children because of its cost almost five times as often as parents of children who had publicly funded insurance. One local study in Los Angeles found that inner-city Latino parents were also most likely to report that they deferred health care for their toddlers for financial reasons when they were uninsured, compared with others who had Medicaid or private coverage. A number of studies estimated the effect that providing publicly funded insurance, such as Medicaid, had on lessening the gap between uninsured and insured children.
One research team examined the effect of expanding Medicaid coverage to children and found decreases over time in the probability that children would go without at least one ambulatory care visit in a year. Making a child eligible for Medicaid lowered the child’s estimated probability of going without a visit by 13 percent. Hospitalizations also rose by an estimated 14 percent—but the estimated probability of making visits to physicians’ offices increased even more than making visits to other sites, suggesting to the authors that expanding Medicaid coverage increased ambulatory care. These authors also looked at the effects of Medicaid expansion on child health as measured by decreases in child mortality. They estimated that the 15-percent rise in the number of children eligible for Medicaid between 1984 and 1992 decreased child mortality by 5 percent. A similar study that looked at the effect of providing national health insurance in Canada found a statistically significant increase in early prenatal care and a significant decrease in infant mortality. Another study of children’s rates of preventive and illness-related primary care visits found that, adjusting for other factors such as race and perceived health status, the predicted probability of making either a preventive or illness-related visit increased if children were covered by public or private insurance, compared with being uninsured. For example, for uninsured children younger than 6 in single-parent families headed by mothers, the predicted probability of making a preventive visit was more than 40-percent greater if the children were covered by public or private insurance, and it was almost 100-percent greater for children aged 6 to 17. Many children miss recommended preventive visits, but uninsured children fare worse than insured children. 
Short and Lefkowitz found that in 1987, only 49 percent of uninsured preschool children had made any well-child visits, compared with 65 percent of insured children, and only 32 percent of uninsured preschool children had made the recommended number of visits, compared with 48 percent of insured children. They found that, adjusting for other factors, private insurance status was only marginally significant in predicting well-child visits, which they attributed to the variation among private plans in covering well-child care. However, they estimated that for low-income children who would otherwise be uninsured, a full year of Medicaid coverage increased the probability of making any well-child visits by 17 percentage points, and compliance with AAP’s guidelines for well-child visits would increase by 13 percentage points. (See table 2 for AAP guidelines.)

Getting appropriate health care when it is needed can be difficult for children. Parents and guardians usually make the decision to seek care for them. Having health insurance and having a regular source of health care facilitate a family’s use of health services, but some families experience systemic, financial, and personal barriers to care. Systemic barriers can include a lack of primary care providers readily available in the neighborhood, physicians’ missing opportunities to provide vaccinations during health care visits, and physicians’ refusing to accept certain patients. Financial barriers, apart from lack of insurance, can include lack of funds to make copayments or pay for uncovered services. Personal barriers can include parents’ lack of knowledge that care is needed and language differences between parents and providers. Similarly, discrimination and poor treatment by health care workers can discourage the use of health care services.
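These results mix two conventions for expressing gaps: percentage-point differences (the 17-point and 13-point Medicaid estimates) and relative percent differences (the "40-percent greater" predicted probabilities). A minimal arithmetic sketch of the distinction, using the Short and Lefkowitz well-child visit rates above as inputs:

```python
# Shares of preschool children with any well-child visit in 1987 (Short and Lefkowitz).
uninsured_rate = 0.49  # uninsured preschool children
insured_rate = 0.65    # insured preschool children

# Percentage-point difference: the simple gap between the two rates.
point_gap = (insured_rate - uninsured_rate) * 100  # 16 points

# Relative percent difference: the same gap measured against the uninsured baseline.
relative_gap = (insured_rate - uninsured_rate) / uninsured_rate * 100  # ~33 percent

print(f"{point_gap:.0f} percentage points, or {relative_gap:.0f} percent greater")
```

The same underlying gap reads as 16 points or roughly 33 percent depending on the convention, which is why the Medicaid estimates are stated explicitly in percentage points.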
Uninsured children and children in Medicaid are also likely to face systemic, financial, or personal barriers that limit their access to care, beyond their lack of insurance. Compared with privately insured children, uninsured children and those in Medicaid are more likely to have lower family income, to be members of a minority group, to have parents with lower educational attainment, or to live with only one parent—characteristics associated with lower use of health services. As a result, experts in health issues have concluded that while insurance plays a critical role in getting children access to health care, encouraging their appropriate use of health care encompasses multiple strategies. These include making insurance coverage more continuous in order to foster children’s relationships with providers, maintaining a better organized system of primary care in settings that ease access for parents and that have good links to more complex care, enhancing systems in which primary care providers can track and prompt preventive visits and immunizations, and aiming outreach and educational programs at parents.

Research has clearly demonstrated that having health insurance makes a difference for children. Children who have no insurance—even those who are sick or chronically ill or have special health care needs—get less health care than children who have insurance. Many studies have shown that increasing children’s coverage increases their access to care, particularly primary care. Without appropriate access to primary care, children are more likely to suffer unnecessarily from illness. But having health insurance is no guarantee that children will get appropriate, high-quality care. Some children live in families that do not understand the need for preventive care or do not know how to seek high-quality care. Some live in neighborhoods that have few health care providers, where they have to travel farther and wait longer to get care.
Some live in families in which most of the members do not speak English or defer getting care because they have had difficulty getting care previously. Some children have health insurance that does not cover some of the services that they need most—such as dental care or physical therapy for the developmentally disabled. Some children have health insurance whose deductibles and cost-sharing are unaffordable. Such barriers can reduce the likelihood that even insured children will get the care they need. Overcoming these kinds of barriers would require that children be more continuously covered by health insurance so that they could develop long-term relationships with primary care providers. Having a stable source of insurance can help families use the health system for their children optimally over time. Beyond that, children have needs for specific developmental and preventive care that differ in some ways from those of adults. For insurance to work for children, the services they need must be both covered and affordable. Overcoming nonfinancial barriers might require outreach and education for families so that they can learn how better to use preventive and primary health care for their children. In addition, making high-quality primary health services convenient for families in local communities might facilitate children’s access to appropriate care. We asked experts on access to health insurance and children’s health care to review a draft of this report, and we incorporated their comments and suggestions where appropriate. We will make copies of this report available on request. Please contact me at (202) 512-7114 if you or your staff have any questions. This report was prepared by Michael Gutowski, Jonathan Ratner, Sheila Avruch, and Sarah Lamb. Aday, LuAnn. “Health Insurance and Utilization of Medical Care for Chronically Ill Children With Special Health Care Needs.” Advance Data, No. 215. Hyattsville, Md.: National Center for Health Statistics, 1992. 
Aday, LuAnn, and others. “Health Insurance and Utilization of Medical Care for Children with Special Health Care Needs.” Medical Care, Vol. 31, No. 11 (1993), pp. 1013-26. “American Academy of Pediatrics Committee on Child Health Financing: Principles of Child Health Care Financing.” Pediatrics, Vol. 91, No. 2 (1993), pp. 506-7. Bashshur, R.L., R.K. Homan, and D.G. Smith. “Beyond the Uninsured: Problems in Access to Care.” Medical Care, Vol. 32, No. 5 (1994), pp. 409-19. Behrman, R.E., and C.S. Larson. “Health Care for Pregnant Women and Young Children.” American Journal of Diseases of Children, Vol. 145, No. 5 (1991), pp. 572-74. Berk, Marc L., Claudia L. Schur, and Joel C. Cantor. “Ability to Obtain Health Care: Recent Estimates From the Robert Wood Johnson Foundation National Access to Care Survey.” Health Affairs, Vol. 14, No. 3 (1995), pp. 140-46. Billings, J., G. Anderson, and L. Newman. “Recent Findings on Preventable Hospitalizations.” Health Affairs, Vol. 15 (fall 1996), pp. 239-49. Billings, J., and Nina Teicholz. “Uninsured Patients in District of Columbia Hospitals.” Health Affairs, Vol. 9, No. 4 (1990), pp. 158-65. Billings, J., and others. “Impact of Socioeconomic Status on Hospital Use in New York City.” Health Affairs, Vol. 12, No. 1 (1993), pp. 162-73. Bindman, Andrew B., and others. “Preventable Hospitalizations and Access to Health Care.” Journal of the American Medical Association, Vol. 274, No. 4 (1995), pp. 305-11. Bindman, Andrew B., and others. “Primary Care and Receipt of Preventive Services.” Journal of General Internal Medicine, Vol. 11, No. 5 (1996), pp. 269-76. Bloom, Barbara. “Health Insurance and Medical Care: Health of Our Nation’s Children, United States, 1988.” Advance Data, from Vital and Health Statistics for the National Center for Health Statistics, No. 188 (Oct. 1990), pp. 1-8. Bograd, Harvey, and others. “Extending Health Maintenance Organization Insurance to the Uninsured.” Journal of the American Medical Association, Vol. 
277, No. 13 (1997), pp. 1067-72. Braveman, P., and others. “Differences in Hospital Resource Allocation Among Sick Newborns According to Insurance Coverage.” Journal of the American Medical Association, Vol. 266, No. 23 (1991), pp. 3300-8. Bronstein, J.M., and others. “Access to Neonatal Intensive Care for Low-Birthweight Infants: The Role of Maternal Characteristics.” American Journal of Public Health, Vol. 85, No. 3 (1995), pp. 357-61. Brook, R.H., and others. “Quality of Ambulatory Care: Epidemiology and Comparison by Insurance Status and Income.” Medical Care, Vol. 28, No. 5 (1990), pp. 392-433. Butler, John A., Sara Rosenbaum, and Judith S. Palfrey. “Ensuring Access to Health Care for Children with Disabilities.” New England Journal of Medicine, Vol. 317, No. 4 (1987), pp. 162-65. Casanova, Carmen, and Barbara Starfield. “Hospitalizations of Children and Access to Primary Care: A Cross-National Comparison.” International Journal of Health Services, Vol. 25, No. 2 (1995), pp. 283-94. Chande, V.T., and J.M. Kinnane. “Role of the Primary Care Provider in Expediting Care of Children With Acute Appendicitis.” Archives of Pediatric and Adolescent Medicine, Vol. 150, No. 7 (1996), pp. 703-6. Chaulk, C.P. “Preventive Health Care in Six Countries: Models for Reform?” Health Care Finance Review, Vol. 15, No. 4 (1994), pp. 7-19. Cunningham, Peter J., and Beth A. Hahn. “The Changing American Family: Implications for Children’s Health Insurance Coverage and the Use of Ambulatory Care Services.” The Future of Children: Critical Health Issues of Children and Youth, Vol. 4, No. 3 (1994), pp. 24-42. Currie, Janet. “Socio-Economic Status and Child Health: Does Public Health Insurance Narrow the Gap?” Scandinavian Journal of Economics, Vol. 97, No. 4 (1995), pp. 603-20. Currie, Janet, and Jonathan Gruber. “Health Insurance Eligibility, Utilization of Medical Care, and Child Health.” Quarterly Journal of Economics, Vol. 111, No. 2 (1996), pp. 431-66. 
Currie, Janet, and Jonathan Gruber. “Saving Babies: The Efficacy and Cost of Recent Changes in the Medicaid Eligibility of Pregnant Women.” Journal of Political Economy, Vol. 104, No. 6 (1996), pp. 1263-96. Currie, Janet, and Duncan Thomas. “Medical Care for Children: Public Insurance, Private Insurance, Racial Differences in Utilization.” Journal of Human Resources, Vol. 30, No. 1 (1995), pp. 135-62. Davis, Karen. “Inequality and Access to Health Care.” Milbank Quarterly, Vol. 69, No. 2 (1991), pp. 253-73. Escarce, Jose J., and others. “Racial Differences in the Elderly’s Use of Medical Procedures and Diagnostic Tests.” American Journal of Public Health, Vol. 83, No. 7 (1993), pp. 948-54. Ettner, Susan Louise. “The Timing of Preventive Services for Women and Children: The Effect of Having a Usual Source of Care.” American Journal of Public Health, Vol. 86, No. 12 (1996), pp. 1748-54. Gans, J.E., M.A. McManus, and P.W. Newacheck. Adolescent Health Care: Use, Cost, and Problems of Access, AMA Profiles of Adolescent Health Series, Vol. 2. N.p.: 1991. Glied, Sherry, and others. “Children’s Access to Mental Health Care: Does Insurance Matter?” Health Affairs, Vol. 16, No. 1 (1997), pp. 167-74. Goodman, David C., and others. “Why Are Children Hospitalized? The Role of Non-Clinical Factors in Pediatric Hospitalizations.” Pediatrics, Vol. 93, No. 6 (1994), pp. 896-902. Guralnik, Jack, and others. “Annotation: Race, Ethnicity, and Health Outcomes—Unraveling the Mediating Role of Socioeconomic Status.” American Journal of Public Health, Vol. 87, No. 5 (1997), pp. 728-29. Hadley, Jack, Earl Steinberg, and Judith Feder. “Comparison of Uninsured and Privately Insured Hospital Patients: Condition on Admission, Resource Use, and Outcome.” Journal of the American Medical Association, Vol. 265, No. 3 (1991), pp. 374-79. Hafner-Eaton, C. 
“Physician Utilization Disparities Between the Uninsured and Insured: Comparisons of the Chronically Ill, Acutely Ill, and Well Nonelderly Populations.” Journal of the American Medical Association, Vol. 269, No. 6 (1993), pp. 787-92. Halfon, N., and P. Newacheck. “Childhood Asthma and Poverty: Differential Impacts and Utilization of Health Services.” Pediatrics, Vol. 91, No. 1 (1993), pp. 56-61. Halfon, N., and others. “Routine Emergency Department Use for Sick Care by Children in the United States.” Pediatrics, Vol. 98, No. 1 (1996), pp. 28-34. Halfon, N., and others. “Medicaid Enrollment and Health Services Access by Latino Children in Inner-city Los Angeles.” Journal of the American Medical Association, Vol. 277, No. 8 (1997), pp. 636-41. Hanratty, Maria J. “Canadian National Health Insurance and Infant Health.” American Economic Review, Vol. 86, No. 1 (1996), pp. 276-84. Hellstedt, L.F. “Insurability Issues Facing the Adolescent and Adult With Congenital Heart Disease.” Nursing Clinics of North America, Vol. 29, No. 2 (1994), pp. 331-43. Himmelstein, D.U., and S. Woolhandler. “Care Denied: U.S. Residents Who Are Unable to Obtain Needed Medical Services.” American Journal of Public Health, Vol. 85, No. 3 (1995), pp. 341-44. Holl, J.L., and others. “Profile of Uninsured Children in the United States.” Archives of Pediatric and Adolescent Medicine, Vol. 149 (April 1995), pp. 398-406. Institute of Medicine. Paying Attention to Children in a Changing Health Care System. Washington, D.C.: National Academy Press, 1996. Kogan, Michael D., and others. “The Effect of Gaps in Health Insurance on Continuity of a Regular Source of Care Among Preschool-aged Children in the United States.” Journal of the American Medical Association, Vol. 274, No. 18 (1995), pp. 1429-35. Kohrman, A.F. “Financial Access to Care Does Not Guarantee Better Care for Children.” Pediatrics, Vol. 93, No. 3 (1994), pp. 506-8. Lehmann, C.U., J. Barr, and P.J. Kelly. 
“Emergency Department Utilization by Adolescents.” Journal of Adolescent Health Care, Vol. 15, No. 6 (1994), pp. 485-90. Lewit, Eugene M., and Alan C. Monheit. “Expenditures on Health Care for Children and Pregnant Women.” National Bureau of Economic Research Working Paper, Boston, Massachusetts, 1992. Liebman, J., and others. “Pennsylvania’s Medically Uninsured Population: Findings from a Statewide Survey.” Journal of Health and Social Policy, Vol. 3, No. 2 (1991), pp. 71-89. Lieu, T.A., P.W. Newacheck, and M.A. McManus. “Race, Ethnicity, and Access to Ambulatory Care Among U.S. Adolescents.” American Journal of Public Health, Vol. 83, No. 7 (1993), pp. 960-65. Lieu, T.A., and others. “Health Insurance and Preventive Care Sources of Children at Public Immunization Clinics.” Pediatrics, Vol. 93, No. 3 (1994), pp. 373-78. Lozano, P., F. Connell, and T. Koepsell. “Use of Health Services by African-American Children With Asthma on Medicaid.” Journal of the American Medical Association, Vol. 274, No. 6 (1995), pp. 469-73. McManus, Margaret A., and Paul Newacheck. “Health Insurance Differentials Among Minority Children with Chronic Conditions and the Role of Federal Agencies and Private Foundations in Improving Financial Access.” Pediatrics, Vol. 91, No. 5 (1993), pp. 1040-47. Mark, T., and C. Mueller. “Access to Care in HMOs and Traditional Insurance Plans.” Health Affairs, Vol. 15, No. 4 (1996), pp. 81-87. Marquis, M.S., and S.H. Long. “The Uninsured Access Gap: Narrowing the Estimates.” Inquiry, Vol. 31, No. 4 (1994), pp. 405-14. Marquis, M.S., and S.H. Long. “Reconsidering the Effect of Medicaid on Health Care Services Use.” Health Services Research, Vol. 30, No. 6 (1996), pp. 791-808. Martz, E.W. “Medical Care for the Un(der)insured.” Delaware Medical Journal, Vol. 62, No. 6 (1991), pp. 1076-77. Moffitt, Robert A., and Eric P. Slade. “Health Care Coverage for Children Who Are on and off Welfare.” The Future of Children: Welfare to Work, Vol. 7, No. 1 (1997), pp. 87-98. 
Monheit, Alan C., and Peter J. Cunningham. “Children Without Health Insurance.” The Future of Children: U.S. Health Care for Children, Vol. 2, No. 2 (1992), pp. 154-70. Morgan, David R., and James T. LaPlant. “The Spending-Service Connection: The Case of Health Care.” Policy Studies Journal, Vol. 24, No. 2 (1996), pp. 215-29. Newacheck, P.W. “Improving Access to Health Care for Children, Youth, and Pregnant Women.” Pediatrics, Vol. 86, No. 4 (1990), pp. 626-35. Newacheck, P.W. “Characteristics of Children with High and Low Usage of Physician Services.” Medical Care, Vol. 30, No. 1 (1992), pp. 30-42. Newacheck, P.W. “Poverty and Childhood Chronic Illness.” Archives of Pediatric and Adolescent Medicine, Vol. 148, No. 11 (1994), pp. 1143-49. Newacheck, P.W., D.C. Hughes, and M. Cisternas. “Children and Health Insurance: An Overview of Recent Trends.” Health Affairs, Vol. 14, No. 1 (spring 1995), pp. 244-54. Newacheck, P.W., D.C. Hughes, and J.J. Stoddard. “Children’s Access to Primary Care: Differences by Race, Income, and Insurance Status.” Pediatrics, Vol. 97, No. 1 (1996), pp. 26-32. Newacheck, P.W., and M.A. McManus. “Health Care Expenditure Patterns for Adolescents.” Journal of Adolescent Health Care, Vol. 11, No. 2 (1990), pp. 133-40. Newacheck, P.W., J.J. Stoddard, and M. McManus. “Ethnocultural Variations in the Prevalence and Impact of Childhood Chronic Conditions.” Pediatrics, Vol. 91, No. 5: Part 2 (1993), pp. 1031-39. Newacheck, P.W., and others. “Children’s Access to Health Care: The Role of Social and Economic Factors.” In Health Care for Children: What’s Right, What’s Wrong, What’s Next, ed. by R. E. Stein. New York: United Hospital Fund of New York, 1997. Office of Technology Assessment. Does Health Insurance Make a Difference? Background paper, OTA-BP-H-99. Washington, D.C.: U.S. Government Printing Office, 1992. Overpeck, Mary D., and Jonathan B. Kotch. “The Effect of U.S. 
Children’s Access to Care on Medical Attention for Injuries.” American Journal of Public Health, Vol. 85, No. 3 (1995), pp. 402-4. Pappas, G., and others. “Potentially Avoidable Hospitalizations: Inequities in Rates between U.S. Socioeconomic Groups.” American Journal of Public Health, Vol. 87, No. 5 (1997), pp. 811-16. Paulin, Geoffrey D., and Elizabeth M. Dietz. “Health Insurance Coverage for Families With Children.” Monthly Labor Review, Vol. 118, No. 8 (1995), pp. 13-23. Perrin, James, Bernard Guyer, and Jean M. Lawrence. “Health Care Services for Children and Adolescents.” The Future of Children: U.S. Health Care For Children, Vol. 2, No. 2 (1992), pp. 58-77. Perrin, James M., and others. “Health Care Reform and the Special Needs of Children.” Pediatrics, Vol. 93, No. 3 (1994), pp. 504-6. Pollack, Ron, and others. Unmet Needs: The Large Differences in Health Care Between Uninsured and Insured Children, Special Report. Washington, D.C.: Families U.S.A., June 1997. Potterfield, Tyler. “Children’s Access to Health Coverage: The Upside-down House.” Clinical Pediatrics, Vol. 32, No. 10 (1993), p. 591. Rice, D.P. “Ethics and Equity in U.S. Health Care: The Data.” International Journal of Health Services, Vol. 21, No. 4 (1991), pp. 637-51. Rosenbach, Margo L. “The Impact of Medicaid on Physician Use by Low-Income Children.” American Journal of Public Health, Vol. 79, No. 9 (1989), pp. 1220-26. Saver, B.G., and N. Peterfreund. “Insurance, Income, and Access to Ambulatory Care in King County, Washington.” American Journal of Public Health, Vol. 83, No. 11 (1993), pp. 1583-88. Short, Pamela Farley, and Doris C. Lefkowitz. “Encouraging Preventive Services for Low-Income Children: The Effect of Expanding Medicaid.” Medical Care, Vol. 30, No. 9 (1992), pp. 766-80. Simpson, Gloria, and others. “Access to Health Care Part 1: Children.” Vital and Health Statistics, Series 10, No. 196. Hyattsville, Md.: U.S. Department of Health and Human Services, 1997. 
Smith, M.W., and others. “How Economic Demand Influences Access to Medical Care for Rural Hispanic Children.” Medical Care, Vol. 34, No. 11 (1996), pp. 1135-48. Spillman, Brenda C. “The Impact of Being Uninsured on Utilization of Basic Health Care Services.” Inquiry, Vol. 29 (winter 1992), pp. 457-66. Spivak, W., R. Sockolow, and A. Rigas. “The Relationship Between Insurance Class and Severity of Presentation of Inflammatory Bowel Disease in Children.” American Journal of Gastroenterology, Vol. 90, No. 6 (1995), pp. 982-87. Starfield, Barbara. “Primary Care and Health: A Cross-National Comparison.” Journal of the American Medical Association, Vol. 266, No. 16 (1991), pp. 2268-71. Stewart, A.L., and others. “Primary Care and Patient Perceptions of Access to Care.” Journal of Family Practice, Vol. 44, No. 2 (1997), pp. 177-85. Stoddard, Jeffrey J., Robert F. St. Peter, and Paul W. Newacheck. “Health Insurance Status and Ambulatory Care for Children.” New England Journal of Medicine, Vol. 330, No. 20 (1994), pp. 1421-25. Strain, J.E. “Agenda for Change in the U.S. Child Health Care System.” Health Matrix, Vol. 4, No. 1 (1994), pp. 107-18. Strickland, W.J., and C.M. Hanson. “Coping with the Cost of Prescription Drugs.” Journal of Health Care for the Poor and Underserved, Vol. 7, No. 1 (1996), pp. 50-62. Susser, Mervyn. “Race, Health, and Health Services.” American Journal of Public Health, Vol. 83, No. 7 (1993), pp. 939-41. Wehr, E., and E.J. Jameson. “Beyond Benefits: The Importance of a Pediatric Standard in Private Insurance Contracts to Ensuring Health Care Access for Children.” The Future of Children: Critical Health Issues for Children and Youth, Vol. 4, No. 3 (1994), pp. 115-33. Weissman, J.S., and A.M. Epstein. “Rates of Avoidable Hospitalization by Insurance Status in Massachusetts and Maryland.” Journal of the American Medical Association, Vol. 268, No. 17 (1992), pp. 2388-90. Weissman, J.S., and others. 
“Delayed Access to Health Care: Risk Factors, Reasons, and Consequences.” Annals of Internal Medicine, Vol. 114, No. 4 (1991), pp. 325-31. Wood, D., and others. “Access to Infant Immunizations for Poor, Inner-City Families: What Is the Impact of Managed Care?” Journal of Health Care for the Poor and Underserved, Vol. 5, No. 2 (1994), pp. 112-23. Wood, David, and others. “Factors Related to Immunization Status Among Inner-City Latino and African-American Preschoolers.” Pediatrics, Vol. 96, No. 2 (1995), pp. 295-301. Wood, David, and others. “Vaccination Levels in Los Angeles Public Health Centers: The Contribution of Missed Opportunities to Vaccinate and Other Factors.” American Journal of Public Health, Vol. 85, No. 6 (1995), pp. 850-53. Wood, D.L. “Access to Medical Care for Children and Adolescents in the United States.” Pediatrics, Vol. 86, No. 5 (1990), pp. 666-73. Uninsured Children and Immigration, 1995 (GAO/HEHS-97-126R, May 27, 1997). Health Insurance for Children: Declines in Employment-Based Coverage Leave Millions Uninsured; State and Private Programs Offer New Approaches (GAO/T-HEHS-97-105, Apr. 8, 1997). Employment-Based Health Insurance: Costs Increase and Family Coverage Decreases (GAO/HEHS-97-35, Feb. 24, 1997). Children’s Health Insurance, 1995 (GAO/HEHS-97-68R, Feb. 19, 1997). Children’s Health Insurance Programs, 1996 (GAO/HEHS-97-40R, Dec. 3, 1996). Private Health Insurance: Millions Relying on Individual Market Face Cost and Coverage Trade-Offs (GAO/HEHS-97-8, Nov. 25, 1996). Medicaid and Uninsured Children, 1994 (GAO/HEHS-96-174R, July 9, 1996). Health Insurance for Children: Private Insurance Coverage Continues to Deteriorate (GAO/HEHS-96-129, June 17, 1996). Health Insurance for Children: State and Private Programs Create New Strategies to Insure Children (GAO/HEHS-96-35, Jan. 18, 1996). Medicaid and Children’s Insurance (GAO/HEHS-96-50R, Oct. 20, 1995). 
Health Insurance for Children: Many Remain Uninsured Despite Medicaid Expansion (GAO/HEHS-95-175, July 19, 1995). Medicaid: Experience With State Waivers to Promote Cost Control and Access to Care (GAO/HEHS-95-115, Mar. 23, 1995). Uninsured and Children on Medicaid (GAO/HEHS-95-83R, Feb. 14, 1995). Employer-Based Health Insurance: High Costs, Wide Variation Threaten System (GAO/HRD-92-125, Sept. 22, 1992).
Pursuant to a congressional request, GAO reported on the relationship between health insurance and health care access, focusing on: (1) what effect health insurance has on children's access to health care; (2) whether expanding publicly funded insurance improves their access; and (3) barriers besides lack of insurance that might deter children from getting health care. GAO noted that: (1) health insurance increased children's access to health care services in almost all the studies GAO analyzed; (2) most of the evaluations showed that insured children were more likely to have preventive and primary care than uninsured children; (3) insured children were also more likely to have a relationship with a primary care physician and to receive required preventive services, like well-child checkups, than uninsured children; (4) differences in access between insured and uninsured children held true even for children who had chronic conditions and special health care needs; (5) when ill, insured children were more likely to receive a physician's care for their health problems, such as asthma or acute earache; (6) in contrast, lack of insurance can inhibit parents from trying to get health care for their children and can lead providers to offer less-intensive services when families seek care; (7) several studies found evidence that low-income and uninsured children were more likely to be hospitalized for conditions that could have been managed with appropriate outpatient care; (8) two studies found that uninsured children sometimes received less-intensive hospital care than insured children; (9) while health insurance benefits differed and some excluded coverage for some basic health care needs, increasing the number of insured children increased the likelihood that more children would receive care; (10) although health insurance can considerably increase access, it does not guarantee entry into the health care system; (11) low family income and education levels, limited 
availability of neighborhood primary health care facilities, lack of transportation, and language differences are among the barriers to obtaining and appropriately using health care services; (12) both children who have no health insurance and those who have Medicaid coverage are more likely than privately insured children to face such barriers; and (13) to ensure access to high-quality care, public health and clinical experts recommend that children have a stable source of health insurance benefits that cover their health care needs, a relationship with a primary care provider that helps them obtain more complex care as needed, primary care facilities that are conveniently situated, and outreach and education for their families.
During the 1990s, employers—including small businesses—reported increasing difficulty finding, hiring, training, and retaining employees with the appropriate sets of skills. This problem is due in part to that decade’s unprecedented economic growth and the resulting record-low unemployment levels and has been compounded by the widespread and increasing use of advanced technologies in nearly all sectors of the U.S. economy. The U.S. Department of Labor reported that employment increased by nearly 21 million in the 1990s, with the service sector—which includes skilled jobs in health and legal services—showing the greatest growth. Significant growth also occurred in high-skill occupations, such as some types of manufacturing and automobile repair, that now use computer technology in their work processes. Additionally, employers report that many new job market entrants, especially youth, are not equipped with the basic skills and job experience they need to succeed in the present environment. Many have deficits in important employment-readiness knowledge and skills, such as self-reliance, work ethic, teamwork, and communication. Labor projects that the number of young adults, ages 16 to 24, will increase to 25.2 million, or 16.3 percent of the civilian labor force, by 2008. While all businesses face current and future workforce development challenges, small businesses confront additional barriers—both economic and informational—to meeting their workforce needs. Small business employers typically have fewer economic resources and staff to devote to identifying, hiring, training, and retaining employees. These employers may be discouraged from participating in some federal or state workforce development programs because they do not have the staff capacity to manage administrative procedures. 
Additionally, because they may not be able to dedicate staff to training and personnel matters, small businesses often have more difficulty than larger employers obtaining information to help them identify and address their workforce development needs. Finally, both large and small businesses may hesitate to invest resources in training an employee who could use the newly acquired skills to secure a better-paying job elsewhere. However, the impact of this “free riding” might be greater on a small business. According to the Small Business Administration, the approximately 25 million small businesses in the United States provide 67 percent of workers with their first job or initial on-the-job training in basic skills and hire a larger proportion of younger workers. Meeting the nation’s workforce needs—including those of small business and youth—has been the focus of study and activity by a variety of organizations at both the national and local levels. In many communities, programs and services linking businesses to potential employees or offering training to incumbent workers are available through entities such as job centers and community colleges. However, these services may be fragmented among several organizations, making it difficult for small businesses to identify and obtain the range of services they need to solve their workforce problems. Federal legislation, such as the School-to-Work Opportunities Act of 1994 (STWOA) and the Workforce Investment Act of 1998 (WIA), has encouraged communities to create systems that address the education and training of young adults and the workforce needs of business in a modern, competitive economy. WIA calls for a strong role for the private sector, with local business-led boards focusing on planning, policy development, and oversight of the local workforce investment system. 
Recent studies and reports have also pointed to cooperation and coordination among public and private organizations as a promising way to address community workforce needs. A report by the National Center on Education and the Economy notes that recent global and national economic trends point to the need for local workforce systems that will provide employers and workers with the support they need for economic success. Additionally, a report by the Center on Wisconsin Strategy (COWS) at the University of Wisconsin states that workforce development strategies increasingly depend on partnerships between businesses and workers, among firms in specific industries, and between the public and private sectors. Finally, a policy statement by the Committee for Economic Development suggests that an intermediary organization can play an important role in helping establish and maintain community partnerships. An intermediary is an entity established by community organizations to act as a focal point, linking businesses with educational institutions, community-based organizations, and other local associations in a network—or consortium—to address mutual goals. Intermediaries broker or provide workforce development services and manage ongoing relations among consortium members. Businesses may access consortium services and activities directly through the intermediary or through their association with another consortium organization. A community workforce consortium with an intermediary organization could include many of the organizations shown in figure 1, such as chambers of commerce, community colleges, school districts, community-based organizations, business and trade associations, and unions. 
In the four communities we reviewed—Austin, Texas; Cedar Rapids, Iowa; Charlotte, North Carolina; and Milwaukee, Wisconsin—workforce development consortia had been established in response to local businesses’ needs and spearheaded by key community organizations, such as the chamber of commerce or local community college. In some cases, these needs had been identified and examined as part of a formal study of local workforce conditions and possible economic challenges. In others, consensus on community workforce needs was reached by consortia organization officials. The consortia were based primarily on cooperative relationships among community organizations rather than on formal agreements. Consortium membership varied by individual community but often included school districts, business and trade organizations, labor unions, and community-based service organizations, such as the YWCA and family services agencies. Consortia organization officials also served as members of the local workforce investment boards required under the Workforce Investment Act. Additionally, consortia in Austin, Cedar Rapids, and Milwaukee had created an intermediary organization to facilitate the coordination and cooperation of workforce development activities among consortium members and to act as a broker of information and services. Funding for the consortia organizations was typically a “patchwork” of public and private sources. However, all the consortia we visited received substantial financial support from private businesses and corporations or from private not-for-profit organizations. For example, the Capital Area Training Foundation (CATF) in Austin reported receiving fees from businesses participating on advisory councils, the City of Austin, and Travis County. The Workplace Learning Connection in Cedar Rapids reported receiving funding from area corporations, Kirkwood Community College, and service fees paid by participating school districts. 
Federal financial support also played an important role in the consortia we reviewed. Both CATF and The Workplace Learning Connection reported receiving federal School-to-Work funding, and the Wisconsin Regional Training Partnership (WRTP) in Milwaukee received a grant from the U.S. Department of Labor to train low-income workers for jobs in higher-paying fields such as construction, data networking, and manufacturing. See table 1. While consortia varied according to individual community needs and resources, Charlotte and Milwaukee were examples of the evolution and organization of community workforce efforts. Charlotte, North Carolina—In 1998, a group of Charlotte business leaders, working with the Charlotte Chamber of Commerce, initiated a study—Advantage Carolina. The goal of the study was to determine how Charlotte could capitalize on its economic advantages to ensure continued prosperity for the region. Additionally, it explored the strengths and weaknesses of the local economy and how to maximize what was viewed as the tradition of public and private teamwork. Local business, government, and nonprofit organization leaders helped guide the effort. The study—updated in 2000 and 2001—identified the area’s primary economic challenges and several initiatives to address them. One initiative—the workforce development continuum—specifically addressed the challenge of building a competitive, promotable, and sustainable workforce. Specific objectives of this initiative included conducting research on workforce needs and trends, building a Web site for job seekers and employers, and building collaboration among higher education institutions, Charlotte-Mecklenburg Schools, and industry. To implement study initiatives, the Charlotte Chamber of Commerce and Central Piedmont Community College (CPCC) assumed important workforce development leadership roles that helped foster a community consortium. 
CPCC provided contract and custom training for local businesses, a variety of technical and trade curricula, and several initiatives aimed at training and employing the disadvantaged. As an outgrowth of the Advantage Carolina study, the community college conducted a survey of local employers to determine current and future workforce needs. The Chamber of Commerce has also conducted workshops to address the specific workforce hiring needs of small businesses and, according to a school official, has worked closely with the school district to identify businesses to participate in work-based learning activities, such as job shadowing and internships. Other consortium participants included the Charlotte-Mecklenburg Schools, the Workforce Development Board, and business and trade organizations. According to officials, the Chamber of Commerce and its activities were funded through member dues and participation fees. Advantage Carolina initiatives were funded with a combination of public and private funds. Milwaukee, Wisconsin—In the early 1990s, business, government, and labor leaders in Milwaukee reached consensus about the need to preserve the area’s manufacturing industry and keep jobs in the area that pay enough to support families. The leaders determined that by working together they could help sustain industry and help ensure that existing workers could advance in a career track and young people could move into entry level jobs. The leaders convened a series of meetings with the Center on Wisconsin Strategy at the University of Wisconsin to discuss the idea and, in 1992, the center brokered an agreement between the parties to form a steering committee to guide the creation of the Wisconsin Regional Training Partnership (WRTP). The WRTP received funding in 1997 from a local nonprofit organization—the Milwaukee Jobs Initiative—to improve the economic prospects of central city families by linking them with training and jobs. 
WRTP worked with local community-based organizations that provided pools of potential employees for businesses with jobs to fill. WRTP recently expanded to provide workforce development services to additional business sectors, including construction, health care, hospitality, technology, and transportation. The Milwaukee Area Technical College worked under contract with WRTP to provide pre-employment and job training for program participants. Activities were funded by numerous sources, including the Annie E. Casey Foundation, the U.S. Departments of Labor and Health and Human Services, the Milwaukee Foundation, Milwaukee County, the City of Milwaukee, and other Milwaukee-area philanthropic and corporate sponsors. Small businesses can seek solutions to their workforce problems by linking with a consortium of community organizations that help them address both their current and future workforce development issues. Small businesses in the communities we visited participated in these consortia by joining member organizations or engaging in their activities. Consortia activities to help businesses meet current workforce needs centered on finding and hiring new employees as well as on training existing employees. Activities to address future workforce needs focused on creating career pathways for potential workers—particularly youth. Small businesses that link with a community consortium—either directly through an intermediary or through another consortium organization—can benefit from consortium services that address their current workforce development needs and problems. Current needs include both hiring new employees and training existing employees. To meet these needs, businesses must identify, recruit, and hire workers to fill current job vacancies. Additionally, businesses must maintain and upgrade the skills of the existing workforce to stay current with changes in technology and allow for future growth. 
Consortia can be instrumental in helping businesses connect with prospective employees who are equipped with the appropriate job skills and, in some cases, the pre-employment skills and social supports they need to be successful jobholders. In the consortia we reviewed, we found a variety of activities that were designed to meet businesses’ immediate employment needs, including job fairs to provide a venue for businesses and prospective employees to come together, initiatives with community-based organizations that targeted the disadvantaged, and WIA one-stop job centers. Specific examples of what we found include the following: In Milwaukee, the WRTP—a consortium intermediary organization—worked with community-based organizations such as the YWCA and the Milwaukee Housing Authority to link businesses with pools of potential employees. The prospective employees were offered employment based on their current qualifications or their completion of the requisite training classes in specific job skills and pre-employment skills such as communication and goal setting. WRTP also worked with the community organizations to help prospective employees secure job retention services such as day care and transportation. Central Piedmont Community College, one of Charlotte’s consortium leaders, sponsored Pathways to Employment—a 12- to 14-week welfare-to-work program that provided academic, social, and job-specific training to prepare welfare recipients to enter the workforce as skilled employees. Pathways to Employment linked CPCC with the local Department of Social Services, community businesses, and other organizations to move participants from welfare to work. Pathways prepared students for employment in five curriculum areas: customer service representative, medical office administration, medical reimbursement specialist, hospital unit coordinator, and office information systems specialist. These curriculum areas were developed based on community workforce needs. 
Pathways developed partnerships with local employers to assist students in attaining employment after graduation. Businesses participating in the program agreed to consider program graduates for employment. The Capital of Texas Eastview Workforce Center—one of three WIA workforce board job centers in Austin and a consortium member—was located on a campus of Austin Community College and across the street from a low-income housing facility. One of the goals of the center was to help both large and small businesses in the community find employees who are ready to work and have the appropriate job skills. The Center sponsored a job fair each Thursday where businesses could talk with prospective employees. According to a center official, most of the businesses that used the center have fewer than 100 employees. The consortium linked with Huston-Tillotson College, a local historically black 4-year institution that provided computer training on site. Daily classes were also offered in job search skills, including resume writing and interviewing techniques. Consortia we reviewed also offered a wide range of activities to meet the training needs for the existing workers of small businesses. According to consortia organization officials, training provided these incumbent workers with the skills to keep current with evolving technology, revised laws and regulations, safety standards, and job processes. Training can help companies retain and sustain their current workforce and provide opportunities for potential business expansion and growth, as well as foster employee advancement. Businesses often looked to consortium members such as technical and community colleges to provide training for incumbent workers. However, other consortia organizations may also provide training for the existing workforce. 
Incumbent worker training opportunities in the consortia we reviewed included: In Cedar Rapids, the Chamber of Commerce, partnering with Kirkwood Community College, worked with local businesses to address their workforce training needs. This project received funding from Iowa’s Accelerated Career Education initiative, which has allocated funds for community colleges to develop accelerated training programs to meet the needs of industry. Recent activities in Cedar Rapids focused on three industry sectors—manufacturing, information technology, and press operators. Small businesses’ incumbent workers could receive training in a variety of areas including upgrading computer skills, workplace communication and conflict resolution, and advanced training for new generations of equipment. Specific job-skills courses included blueprint reading, industrial math, and electrical/mechanical technician training. In Austin, the Community Technology and Training Centers—sponsored by consortium intermediary Capital Area Training Foundation, and located at two local high schools in low-income neighborhoods—were open to participants at no cost during non-school hours. They offered a range of computer classes from basic skills courses to advanced software training, but with an emphasis on business skills. The centers also provided free Internet access and career guidance services. According to an official, small businesses participated by sending employees for training and some sponsored internships through the center. In Milwaukee, WRTP was originally established to help the manufacturing industry upgrade worker skills in response to changing technology. This consortium organization continued to address incumbent worker skills in the manufacturing sector and expanded to include additional industry sectors. WRTP assisted businesses in developing education and training programs. 
For example, according to a union official, WRTP worked with both the union and management of a local foundry to provide the mostly Spanish-speaking workers with English as a second language and math training to help them communicate and work more effectively and qualify for higher-skill jobs. WRTP’s menu of employer services for incumbent workers also included providing technical assistance with the implementation of work-based learning and mentoring systems, development of worker training programs such as on-site learning centers and apprenticeship programs, and the development of innovative strategies for reducing absenteeism and turnover. Small businesses and consortium officials in the communities we visited said that they believed participation in career pathway activities—particularly for youth and young adults—was an important way for them to ensure that businesses will have a skilled workforce available in the future. Career pathway activities offered by consortia organizations were intended to expand students’ employment horizons by exposing them to the wide variety of future career opportunities available to them. Some consortia career pathway activities prepared students for employment by providing work-based learning opportunities, such as summer internships at a job site. Others, like apprenticeships, provided longer-term training in a specific trade or technical field. In the communities we visited, business participation in consortium activities to introduce middle and high school students to career opportunities included short-term interactions between business officials and students, such as a business representative speaking to an automotive repair class or taking students on a tour of a manufacturing plant. Additionally, consortia offered opportunities for businesses to participate in more extensive work-based learning experiences like providing internships and part-time jobs. 
Specific examples include the following: In Austin and Cedar Rapids, the consortia intermediary organizations—CATF and The Workplace Learning Connection—sponsored internships, job shadowing, and industry tours to increase middle and high school students’ awareness of the connection between academic studies and their future career opportunities. Speakers’ bureaus in several industry sectors also connected professionals to students in the classroom. The intermediaries also worked with employers to provide opportunities for high school teachers to participate in job-site activities, such as summer internships or the teacher at work program, that expanded their understanding of the business world and provided practical experience and relevant information they could incorporate into their curricula. These intermediary activities were supported, in part, with school-to-work funding. Consortium member Charlotte Mecklenburg School District offered a range of work-based learning opportunities that included internships to explore career areas, classroom-related job experience with businesses and community agencies, and job shadowing for students to observe business professionals and learn about work environments in their fields of interest. The district also partnered with local businesses to sponsor summer internships for students enrolled in its Finance and Travel and Tourism career academies. In addition, a school district official reported that the district would open a new technical high school in 2002. The school will teach curricula based on the six key industry sectors identified in the Advantage Carolina report from the Charlotte Chamber of Commerce. The technical school will have business partners that will provide technical expertise as well as some of the faculty. Longer-term training programs offered by consortium organizations, such as apprenticeships and cooperative (co-op) education programs, also created future career pathways. 
Apprenticeship and co-op programs provided businesses with the opportunity to train future employees in the skills needed for a specific technical or trade career. Apprenticeships are usually several years in duration and apprentices work part- or full-time and attend classes part time. In co-op programs, students may alternate periods of time working full time with attending class full time. These programs typically linked businesses with local technical and community colleges and sometimes secondary schools. Businesses worked with the community colleges in developing and updating the curricula, teaching classes, and providing training on the job-site. Businesses could also participate in apprenticeship programs through their trade associations. Programs in the locations we visited included: In Charlotte, six manufacturing companies have partnered with Central Piedmont Community College to develop Apprenticeship 2000. Apprentices typically began working part-time for the company in their senior year of high school, and were employed full-time upon graduation while taking coursework at CPCC. The apprentices were paid for all work and daytime classroom hours as well as tuition and fees. After completion of the 4-year program, students received an associate of applied science degree from the community college and a journeyman’s certificate from the North Carolina Department of Labor. The college also sponsored a co-op program leading to a 2-year associate degree in automotive repair, according to an official. Students spent the first 8 weeks in classroom training and the remainder of the semester working at a car dealership. In Milwaukee, automobile dealerships have participated in consortium activities with Milwaukee Public Schools through a youth apprenticeship program for high school juniors and seniors. During the 2-year program, students studied auto mechanics at school and worked part-time at a dealership during the school year and full-time during the summer. 
Each student was assigned to work with and be mentored by a master mechanic. The curriculum was provided and the program certified by the National Automotive Technician Foundation, which represents all major automobile manufacturers. In Cedar Rapids, representatives from construction trade unions active in the local consortium said they sponsored apprenticeship programs and worked with Kirkwood Community College to provide the educational component while the trade unions provided the on-the-job training. Union officials reported that they targeted the apprenticeship programs to young adults—over 21—because they generally are more mature and have some work experience. In addition, according to officials, many of the job-sites can be hazardous and challenging, and younger workers tend not to be as careful or attentive to their work. We found that consortia organizations shared important principles and related best practices that they believe are essential in implementing and sustaining workforce development activities. Consortia officials we interviewed identified four key principles common to all of the communities we visited: (1) activities are focused primarily on businesses’ workforce needs and are structured around key industry sectors represented in their community; (2) consortium organizations provide leadership and maintain on-going, positive working relationships with their partners; (3) workforce development activities are accessible by both businesses and prospective employees; and (4) consortium organizations create ways to make participation in activities more attractive to small businesses. All of the locations we visited had identified key industry sectors in their communities and had organized their workforce development efforts to target local businesses’ needs in those sectors. 
Several consortia officials told us that organizing by industry sectors is an effective and efficient approach because businesses in the same sectors often have similar workforce issues and can work together to resolve them. Table 2 shows the targeted business and industry sectors in each community consortium we reviewed. In Cedar Rapids and Charlotte, the sector focus grew out of studies done on community economic issues. Both studies identified important local industry sectors and the workforce needs of each sector. According to officials, many small businesses were represented in each sector, especially in the manufacturing, construction, and automotive sectors. Kirkwood Community College in Cedar Rapids initiated a study—Skills 2000—to determine local workforce needs. The study surveyed 33 major area businesses representing five key industry sectors: manufacturing, information technology, health care, agriculture and biotechnology, and general services. Each industry sector identified a mismatch between the skills they wanted in employees and the skills in the available workforce. Kirkwood, working with other consortium members, used this study to develop specific training programs based upon the needs of local businesses in those sectors. One example is the Press Consortium Training Program where 13 small printing companies joined together with Kirkwood to address their training needs in an effort to remain competitive and current with new technologies. They developed and implemented six 10-week training modules for incumbent press assistants and press operators to receive training in press operations, essential skills, sales, and customer service. Project members reported that they have shifted the focus from competing with each other for qualified employees to working together to promote the printing industry. 
In Charlotte, the Advantage Carolina study, done in 1998, identified six key industry sectors that consortium organizations used to identify local workforce development issues. Three sectors represented already existing industry clusters: financial services, transportation and distribution services, and manufacturing. Three represented emerging industries: innovative technology, professional services, and travel and entertainment. Together they accounted for 60 percent of Charlotte’s employment growth between 1980 and 1999. Representatives from each of the sectors reported the critical issues associated with each sector and identified strategies to address them. Workforce development and training was a theme common to all six sectors, and several consortium efforts addressed the workforce needs identified by business. For example, the chamber of commerce’s Information Technology Collaborative initiative—implemented in response to their Advantage Carolina study—addressed the need for skilled workers by developing information technology certification programs and by linking students with businesses. The Capital Area Training Foundation—Austin’s intermediary—convened seven industry-led steering committees that collaborated with educators and employers to develop workforce solutions for the key industry sectors in the community. Targeted industries included semiconductor manufacturing, construction, finance, hospitality, information technology, automotive technology, and health care. The steering committees were responsible for engaging employers in designing career pathways; sponsoring work-based learning experiences for students and teachers; and linking employers directly with schools and post-secondary institutions. One example of the steering committees was the Building Industry Construction Alliance, which, according to an official, included about 75 businesses and several other consortium organizations. 
The Alliance also worked with local high schools and educational institutions to develop career pathways for the construction trades. In Milwaukee, the industry sector approach to workforce development began when WRTP established a manufacturing steering committee to assist employers and unions in the manufacturing sector in improving employment security for current employees and career opportunities for community residents. This committee played the important role of monitoring the health of local manufacturing businesses to help guide workforce development activities in their communities. WRTP has since expanded to include other industry sectors such as construction, hospitality, technology, transportation, and health care, and currently works with over 100 member businesses and unions. Consortia we reviewed were loose alliances of organizations, but had established firm consensus on both community problems and goals among consortia members. Key consortia organizations provided leadership and developed close working relationships with other member organizations in an effort to implement and sustain workforce development activities in their communities. Officials at some locations cited leadership as a vital component of the operations of workforce consortia. Additionally, some consortia officials we spoke with said that close coordination and communication among organizations was critical in meeting local workforce needs. Consortia efforts to encourage strong leadership and promote positive working relationships included: In Austin, several consortium organizations including Austin Community College, the Capital Area Training Foundation, the Workforce Development Board, the Tech-Prep Consortium, and the Capital Area Education and Careers Partnership co-located their offices in Austin Community College’s Highland Business Center. 
College officials told us that the centralization of these organizations took place under the leadership of the president of the community college, who believed that having consortium members in the same location would promote coordination and better serve the organizations and the community. Consortium officials reported that co-location also facilitates regular communication and the scheduling of meetings and fosters a feeling of collegiality in working across organizations that often have different, but complementary, missions. Consortium organization officials in Charlotte reported that the Charlotte Chamber of Commerce had taken the lead in workforce development by convening all of the key consortium organizations and facilitating regular communication among these members. According to one official, the chamber also recognized the need to form a business and education collaborative infrastructure to direct the management of pertinent education issues. The chamber worked closely with local business representatives and public officials to establish 17 key initiatives—including the Information Technology Collaborative and the Workforce Development Continuum—that grew out of the Advantage Carolina study. The chamber also encouraged consortium organizations to participate on multiple boards and committees and partner with other consortium members on specific activities. For example, chamber officials told us that the Director of Workforce and Professional Development at the chamber had a seat on the local WIA workforce development board. In addition, the chamber sponsored a Small Business Round Table every other month where organizations serving small businesses met to discuss what they were doing and to coordinate dates of activities and events. 
Organizations included the Small Business Technical Development Center at Central Piedmont Community College, the Small Business Administration, City of Charlotte, Mecklenburg County, and the Metrolina Entrepreneur Council, an organization of small companies—many of them technology based. In Milwaukee, officials from area unions told us that the leadership of the Wisconsin Regional Training Partnership has helped build close working relationships between union and management that are critical to maintaining and sustaining workforce development activities. The officials said the WRTP had credibility with the unions from the beginning because nearly all of the WRTP staff had union experience and that WRTP provided the link between the unions, educational institutions, and other consortium organizations for workforce development activities. At each business working with WRTP on workforce development, there was a union co-chair of the activities. At the regional and local level there were union representatives on all of the committees that implement WRTP goals and initiatives. Consortium officials said that the relationship—built by the union, employers, and WRTP—was now established and would continue even if the economy changes and the labor market weakens. Workforce development activities that are convenient and easily accessible help engage small businesses and increase awareness of employment opportunities for prospective workers. Small businesses and consortia officials alike emphasized the importance of easy access for small business owners and potential employees. Consortia organizations offered multiple doorways into workforce development activities for small businesses through member organizations or intermediary outreach. 
Consortia used strategies such as providing outreach services to local small businesses to inform them of opportunities in workforce development activities and helping prospective employees overcome potential barriers to employment such as finding childcare services. Additionally, since businesses differ in the amount of time and resources they have available to devote to workforce development, consortia offered a range of participation options to make workforce development activities accessible to all businesses. Examples of how consortia we visited provided easy access to activities included: Kirkwood Training Services, a division of continuing education at Kirkwood Community College in Cedar Rapids, had program directors that Kirkwood officials told us were considered the “sales and marketing team” and facilitated business participation. Officials explained that the program directors met with both small and large businesses and developed customized training as well as industry sector training programs. The program directors called on local businesses to discuss training needs and training opportunities. Kirkwood Training Services contracted with over 150 businesses a year to provide customized training services. Small businesses could choose from several different options to access workforce development activities that met their workforce training needs. For example, Kirkwood Community College developed industry sector training programs where like businesses pooled resources for training. In one of them, eight call center businesses combined efforts with Kirkwood to create an 11-week customer call center program. Training included work skills, telephone skills, and etiquette and customer service skills. The customer call center program enabled all eight industry partners to have access to a pool of qualified potential employees and share the training costs. 
Additionally, Kirkwood Training Services offered computer-based training at its training center, including computer training modules and instructional software to provide online skill-specific training in selected fields such as information technology, safety, and workplace basics. Officials reported that small businesses could use the center for employee training, and the center was open in the early mornings and in the evenings as well as during the workday to provide easy access. WRTP officials in Milwaukee recognized that difficult family and financial circumstances could present serious access problems to employment for many prospective employees. To address these issues, WRTP officials told us that they worked with community-based organizations to help low- income clients identify and overcome barriers to employment such as difficulty in finding childcare programs or the necessary transportation that could otherwise prevent them from succeeding in the workplace. WRTP also helped prospective employees develop back-up plans so that if, for example, a daycare provider gets sick, employees have other organizations or people that will help. Industry liaison staff at the intermediaries in Austin and Cedar Rapids facilitated businesses’ access to workforce development activities by offering a variety of ways to participate. Additionally, liaison staff helped businesses decide which activities best suit their workforce needs. These ranged from a single speaking engagement at a local school to providing on-the-job training to a student intern. The intermediaries also offered high school students an assortment of work-based learning opportunities ranging from company tours to apprenticeships. In Cedar Rapids, liaison staff at the intermediary, The Workplace Learning Connection, stressed the importance of workforce development activities being easy for everyone to use—businesses, schools, and students—because if they were not, participation would suffer. 
The Charlotte-Mecklenburg School District reported employing four people who provided outreach to local businesses to engage them in school-to-career programs in area high schools. These staff informed businesses about how to become involved in community workforce development activities and how this participation might benefit them financially. High school students were connected with the businesses in work-based learning activities such as job shadowing, internships, and apprenticeships. School district officials reported that small businesses chose their level of involvement based on their resources and workforce needs by working closely with the school district staff. Consortia officials told us that incentives make participating in workforce development activities more appealing to small businesses. Officials said that they engage businesses in workforce development activities by pointing out a number of specific benefits. Some noted that providing businesses with skilled workers to meet their workforce needs could be a significant enough incentive to participate. Other incentives in the locations we visited included the possibility of longer-term benefits such as building the future workforce by connecting high school students with local businesses. Additionally, in some consortia, financial incentives attracted businesses to become involved in workforce development activities. In Cedar Rapids, Kirkwood Community College provided financial incentives for businesses to participate in workforce development activities through two state jobs training programs that lower training costs. Kirkwood officials said that businesses learn about these programs through direct mail marketing, seminars, outreach from staff at Kirkwood Training Services, and also by word-of-mouth. The first jobs training program, the Iowa Industrial New Jobs Training Program, was created in 1983 to provide an economic incentive to new or expanding industries in Iowa. 
Eligible companies that were creating new positions or new jobs could receive state funding for training administered through their local community college. The community college district in which a qualifying business is located initially pays the costs of the training program—financing it through the sale of job training certificates (bonds). The community college is repaid over a 10-year period by the business diverting 1.5 or 3 percent of the state payroll tax it withholds on the employees’ wages for the newly created jobs. Property tax revenues, resulting from capital improvements, might also be used for repayments. Repayment of the certificates does not involve additional taxes to the businesses. The training certificate amount a business receives depends on the number of jobs it creates and the wages it pays for those positions. The second jobs program, the Iowa Jobs Training Program, was created to help Iowa businesses fund customized training for current employees. Community colleges assisted businesses with the development of training programs that were funded by cash awards through the Iowa Department of Economic Development. The maximum amount of funding was $25,000 for each project, and businesses could receive a maximum of $50,000 over 3 years. Eligible applicants included businesses engaged in manufacturing, processing, assembling products, warehousing, wholesaling, or conducting research and development. Reimbursable services included skill assessment, adult basic education, and the cost of training services, materials, and professional services. In Milwaukee, officials at WRTP reported they were able to provide services to individual businesses at no out-of-pocket cost using grant money received from a variety of sources, including the Annie E. Casey Foundation and the U.S. Department of Labor. 
WRTP services included technical assistance with the transition to new technologies and work processes, expansion of worker education and training programs, improvement of work-based learning and mentoring systems, adoption of innovative strategies for reducing absenteeism and turnover, and development of cost-effective alternatives to temporary employment agencies. Employers contributed in-kind support by providing equipment, materials, and job shadowing and mentoring opportunities. One business official we spoke with acknowledged that he saw a significant cost incentive to using WRTP's free employment services. He said that by working with WRTP, he would save the approximately $3,000 per employee he would otherwise spend working with a temporary employment agency to identify potential employees. Consortium organization officials in Charlotte told us that businesses benefit from participation in activities such as speaking engagements and teaching courses because they can have the first opportunity to recruit potential new employees. For example, the Information Technology program at Central Piedmont Community College was a five- to six-level certificate program that asked business members to speak or teach. The officials reported that the program had about 100 part-time instructors, half of whom were from small businesses. Additionally, a CPCC official reported that the college provided training assistance, at no cost, to North Carolina businesses that create new full-time manufacturing and customer service positions. Two programs, New and Expanding Business Industry and Focused Industrial Training, provided customized training services that included pre-employment training, on-site instruction, and materials. A third state-funded program, In-plant Training, also assisted businesses, at no cost, in providing employees with in-service training in basic job skills. Outcome studies of workforce development activities at the sites we visited were limited in scope.
We found that consortia organizations in some locations reported collecting data to monitor the number of participants in activities such as job fairs and internships, employee placements following completion of training programs, and employment retention and advancement from local workforce development initiatives. However, officials said that consortia organizations did not have systems in place to evaluate the overall effectiveness of workforce development activities in their communities. Outcome information for specific activities on participation, employee placement, and employment and retention rates included the following: In Milwaukee, WRTP's Manufacturing Jobs Connection project targets central city residents, many of whom have limited work experience and less than a high school education, according to a program official. The project reported that, since its inception in 1997, 202 participants had completed the program's customized training and nearly all were placed in manufacturing jobs. Fifty-seven of the placements were with small businesses of 100 or fewer employees, earning an average wage of $10.75 per hour. As of January 2000, the employees' job retention rate was 68 percent at 3 months, 63 percent at 6 months, and 48 percent at 12 months. According to an official, WRTP has placed more than 1,000 employees in jobs since 1997, over 600 of them in manufacturing, with most jobs paying at least $10.00 per hour plus health care, pension, tuition reimbursement, and other benefits. The project also reported an increase in program participants' annual earnings from about $9,000 to $23,000 in the first year on the job. In Austin, CATF, the consortium intermediary, reported placing over 2,000 high school students in summer internships in 2000 as well as 1,350 middle school students in job shadowing activities during the 2001 school year.
The annual Greater Austin @ Work High School Career Fair sponsored by CATF attracted 2,600 students from 25 high schools and over 170 employers, colleges and universities, and community-based organizations in 2000. Additionally, CATF's Construction Gateway Program, a five-week job-training program for the construction trades, has graduated 504 trainees in the past 6 years, many of them incarcerated youth. Of the participants who graduated between 1994 and 1999, 259 were subsequently employed in the construction field. Program officials were planning to use a workforce commission database to survey the program's graduates to determine their work progress since graduation. In Charlotte, Central Piedmont Community College had several programs that tracked participation and retention rates. For example, the Pathways to Employment program had 70 to 80 participants per semester, most of whom received Temporary Assistance for Needy Families (TANF). Pathways was a short-term training program designed to prepare participants to enter the workforce as skilled employees in areas including customer service, heating and air conditioning, medical office administration, and early childhood development. Program officials reported an 80 percent job placement rate and a 75 to 80 percent retention rate after 3 months. In Apprenticeship 2000, a 4-year program in which participants work and attend community college classes, officials reported there were 45 apprentices participating at the time of our review. Outcome information was not formally collected; however, a representative from one company told us that only 2 of the 28 apprentices it had sponsored have dropped out since the program's inception. In Cedar Rapids, The Workplace Learning Connection tracked participation rates for all work-based learning activities. Reported levels of student participation in these activities increased from fiscal year 1999 through fiscal year 2000, as shown in table 3.
Additionally, the intermediary reported that 37 students who had served as unpaid interns with local businesses in fiscal year 2000 were later hired as paid part-time employees. The Workplace Learning Connection officials told us that they also measured the success of activities by the schools' continuing use of their services, since the schools pay a fee for service. At the time of our review, the intermediary had memoranda of understanding with 29 of the 33 school districts in its region. Kirkwood Community College in Cedar Rapids reported that it, along with 268 participating businesses, has generated about 19,000 new jobs through the Iowa Industrial New Jobs Training Program since 1983. Partnerships among private sector groups, local governments, and public agencies continue to emerge as an important force in addressing local workforce development problems. These partnerships, often prompted by adverse or changing local economic conditions, have strong business leadership, focus, and financial support. They address the disconnect between a community's employers, especially small businesses, and workforce development services designed to identify and prepare entry-level workers, upgrade the skills of existing workers, and create career pathways for young adults. Within each community we visited, the capacity to address these workforce development needs was present. What had been missing was a consistent and stable mechanism to link businesses to the employment and training resources they needed. While the consortia followed similar paths in their approach, each location addressed its problems by mobilizing the unique strengths of its locale and adapting to the special circumstances of the community. Often found in these communities, and partnering with the consortia, were current and past federal initiatives: local WIA boards and partnerships established under the School-to-Work Program.
The infrastructure created by federal initiatives like these, which can support new service entities operating in harmony with existing service systems, appears to be a promising way of promoting broad national goals while providing the local discretion necessary to create solutions that fit local problems. We provided officials at the Department of Labor and consortia officials from Austin, Cedar Rapids, Charlotte, and Milwaukee an opportunity to comment on a draft of this report. All reviewing officials generally agreed with the contents of the report, and some provided clarifications and technical comments that we incorporated where appropriate. We performed our review from September 2000 to July 2001 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly release its contents earlier, we will make no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Honorable Elaine L. Chao, Secretary of Labor, appropriate congressional committees, and other interested parties. We will also make copies available to others on request. Please contact me on (202) 512-7215 if you or your staff have any questions about this report. Key contributors to this report are listed in appendix I. In addition to those named above, the following individuals made important contributions to this report: Lisa A. Lusk, Scott R. McNabb, Dianne L. Whitman-Miner, Howard J. Wial, James P. Wright, Jonathan H. Barker, and Richard P. Burkard.
Small businesses often have serious difficulty finding skilled employees, upgrading the skills of their existing employees, and identifying strategies to meet future workforce needs. The Workforce Investment Act of 1998 seeks to address these workforce development issues and give federal training programs a greater employer focus. Although these problems are common throughout the country, small businesses in some areas have joined with business and trade organizations, community colleges, and other public and private groups to form workforce development networks--often referred to as workforce consortia. This new approach offers small businesses access to various workforce development activities in which they might otherwise be unable to participate. Limited information exists on the outcome of workforce consortia at the sites GAO reviewed. There were no systematic efforts to evaluate overall consortium effectiveness, but there were isolated attempts to monitor participation rates and assess the impact of specific activities on job retention and future earnings.
Ginnie Mae operates as a unit of HUD, and its administrative, staffing, and budgetary decisions are coordinated with HUD. Ginnie Mae is organized into five offices and relies on contractors for many aspects of its work. Contracted functions include certifying new MBS, administering payments to investors, collecting data from issuers, analyzing risk, servicing defaulted loans, conducting internal control reviews and issuer compliance reviews, and managing information systems. Ginnie Mae staff responsibilities include policy and management functions and oversight of contractors. We discuss Ginnie Mae's organization, staffing, and budget in greater detail later in this report. Ginnie Mae guarantees the performance of MBS, which are obligations of the issuers that are backed by mortgages insured or guaranteed by federal agencies, such as FHA, PIH, VA, or RHS. Ginnie Mae provides an explicit federal guarantee (the full faith and credit of the United States) on these MBS, but it does not issue the MBS or originate the underlying mortgages. Rather, it relies on approved financial institutions (issuers) to pool, or securitize, the eligible mortgages and issue Ginnie Mae-guaranteed MBS. The issuers can service the MBS themselves or hire a third party to transmit the monthly principal and interest payments to investors. Ginnie Mae's explicit guarantee can lower the cost of borrowing for issuers, which allows them to offer lower interest rates to mortgage borrowers. Issuers can obtain these mortgages by originating the loans or purchasing the loans from another institution. See figure 1 for an overview of Ginnie Mae securitization. Ginnie Mae's guarantee is limited to the risk that issuers cannot make the required monthly principal and interest payments to investors.
While other federal agencies already insure or guarantee the mortgages that back Ginnie Mae-guaranteed MBS, the private-sector issuers of these MBS are responsible for ensuring that investors that purchase these MBS receive monthly payments on time and in full, even if the borrower makes a late payment or defaults. Ginnie Mae issuers are responsible for making these advance payments to investors using their own funds and for recovering any losses from the federal agencies that insured or guaranteed the mortgages. If an issuer cannot ensure the timely payment of principal and interest to investors, Ginnie Mae defaults the issuer, acquires the servicing of the loans, and uses its own funds to manage the portfolio and make any necessary advances to investors. Ginnie Mae charges issuers a monthly guarantee fee, which varies depending on the product, for guaranteeing timely payment. Issuers also pay a commitment fee to Ginnie Mae each time they request authority (commitment authority) to pool mortgages into Ginnie Mae-guaranteed MBS. Investors in Ginnie Mae-guaranteed MBS face the risk that a mortgage will be removed from the MBS pool prematurely—either due to borrower default or prepayment of a loan—which reduces the amount of interest earned on the security. However, investors do not face credit risk—the possibility of loss from unpaid mortgages—because Ginnie Mae guarantees timely payment of principal and interest. Ginnie Mae has several different products. Its original MBS program, Ginnie Mae I, requires that all pools contain similar types of mortgages (such as single-family or multifamily) with similar maturities and the same interest rates. The Ginnie Mae II MBS program, which was introduced in 1983, permits pools to contain loans with differing characteristics. For example, the underlying mortgages can have varying interest rates and a pool can be created using adjustable-rate mortgages. 
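The issuer's pass-through obligation described above reduces to simple arithmetic: investors must receive the full scheduled principal and interest each month, and whatever delinquent borrowers fail to pay, the issuer advances from its own funds. The figures below are invented for illustration.

```python
# Simplified sketch (invented figures) of the issuer advance obligation:
# investors receive the full scheduled payment; the issuer covers any
# shortfall caused by late-paying or defaulted borrowers.

def issuer_advance(scheduled_payment, collected_from_borrowers):
    """Amount the issuer must cover out of its own funds this month."""
    return max(0.0, scheduled_payment - collected_from_borrowers)

# A pool owes investors $1,000,000 this month, but borrowers paid only $940,000:
advance = issuer_advance(1_000_000, 940_000)   # issuer advances $60,000
```

If the issuer itself cannot make this advance, the shortfall becomes Ginnie Mae's obligation, which is the counterparty risk discussed later in this report.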
Ginnie Mae’s Multiclass Securities Program, introduced in 1994, offers different types of structured products, including Real Estate Mortgage Investment Conduits (REMIC) and Ginnie Mae Platinum Securities. REMICs tailor the prepayment and interest rate risks associated with MBS to investors with varying investment goals. These products direct principal and interest payments from underlying MBS to classes, or tranches, with different principal balances, terms of maturity, interest rates, and other characteristics. Platinum Securities allow investors to aggregate MBS with relatively small remaining principal balances and similar characteristics into new, more liquid securities. The MBS aggregated into these structured products retain Ginnie Mae’s full faith and credit guarantee. In addition, Ginnie Mae guarantees the timely payment of principal and interest on the structured products and charges an additional fee to the financial institutions that create them. Ginnie Mae also requires that these institutions contractually agree to reimburse any costs Ginnie Mae may incur to guarantee these products. Ginnie Mae defines its mission as expanding affordable housing by linking capital markets to the nation’s housing markets. Ginnie Mae does this by serving as the dominant secondary market vehicle for government-insured or -guaranteed mortgage loan programs. Ginnie Mae’s guarantee benefits lenders, borrowers, and investors in a number of ways. First, the guarantee benefits lenders by increasing the liquidity of mortgage loans, which may lower the cost of raising funds and allow lenders to transfer the interest-rate risk of a mortgage to investors. Second, the guarantee benefits borrowers by lowering the cost of raising funds for lenders, which helps lower interest rates on mortgage loans. Finally, Ginnie Mae’s guarantee provides investors with a fixed-income security that has the same credit quality as a U.S. Treasury bond. 
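The direction of principal and interest payments to REMIC classes, or tranches, described above can be sketched as a simple sequential-pay structure: each period's principal retires the senior-most class first, while interest accrues to every class on its own remaining balance. All class names, balances, and coupons below are invented for illustration; actual REMIC structures vary.

```python
# Hypothetical sequential-pay sketch (invented balances and coupons):
# principal pays down the senior-most class first; interest accrues to each
# class on its own remaining balance at its own annual rate.

def distribute(balances, annual_rates, principal_received):
    """Return (monthly_interest_per_class, new_balances) for one period."""
    interest = [bal * rate / 12 for bal, rate in zip(balances, annual_rates)]
    new_balances = []
    remaining = principal_received
    for bal in balances:
        paydown = min(bal, remaining)  # senior classes absorb principal first
        remaining -= paydown
        new_balances.append(bal - paydown)
    return interest, new_balances

opening = [50_000_000, 30_000_000, 20_000_000]  # classes A, B, C
coupons = [0.035, 0.040, 0.045]
interest, closing = distribute(opening, coupons, 60_000_000)
# Class A is retired in full; the remaining $10 million pays down class B.
```

The senior class in such a structure receives principal soonest and so carries the least prepayment uncertainty, which is how these products tailor prepayment risk to investors with differing goals.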
Ginnie Mae relies on its fee revenues rather than appropriations from the general fund to pay for its operations and cover costs related to issuer defaults. However, the amount of MBS Ginnie Mae can guarantee each year is capped by its commitment authority level in HUD's appropriation. For 2010 and 2011, Ginnie Mae was authorized each year to guarantee up to $500 billion in MBS. Ginnie Mae guarantees the timely payment of principal and interest on MBS. For budgetary purposes, Ginnie Mae and other federal agencies estimate the net lifetime costs (credit subsidy costs) of their guarantee programs and include the costs to the federal government in their annual budgets. For Ginnie Mae, credit subsidy costs represent the net present value of expected cash flows over the life of the securities it guarantees, excluding administrative costs. Cash inflows consist primarily of guarantee fees charged to MBS issuers, and cash outflows consist primarily of advance payments of principal and interest on delinquent mortgages underlying MBS from defaulted issuers. When estimated cash inflows exceed expected cash outflows, a program is said to have a negative credit subsidy rate. When the opposite happens, a program is said to have a positive credit subsidy rate and therefore requires appropriations to cover the estimated subsidy cost of new business. Historically, Ginnie Mae has estimated that its guarantee program would have a negative credit subsidy rate and, as a result, generate budgetary receipts for the federal government. These receipts have resulted in substantial balances in a reserve account, which is used to help cover unanticipated increases in those costs, for example, increases due to higher-than-expected issuer defaults or fraud.
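The credit subsidy calculation described above can be sketched in a few lines: discount the expected lifetime cash flows and express the net present value as a rate per dollar guaranteed. This is a minimal sketch with invented figures, not OMB's actual credit reform methodology.

```python
# Minimal sketch of the credit subsidy idea (invented figures, not OMB's
# methodology): NPV of expected lifetime cash flows per dollar guaranteed.
# Inflows are guarantee fees; outflows are expected default-related advances.

def credit_subsidy_rate(inflows, outflows, discount, guaranteed):
    """Negative result means the program is expected to generate net receipts."""
    npv = sum((out - inn) / (1 + discount) ** t
              for t, (inn, out) in enumerate(zip(inflows, outflows), start=1))
    return npv / guaranteed

fees = [6.0] * 10      # $ millions of guarantee fees per year
losses = [0.5] * 10    # $ millions of expected default-related outflows per year
rate = credit_subsidy_rate(fees, losses, 0.03, guaranteed=1_000)
# rate < 0 here: inflows exceed outflows, i.e., a negative credit subsidy rate
```

Under these invented assumptions the rate comes out negative, which mirrors the report's point that Ginnie Mae's program has historically generated budgetary receipts rather than requiring appropriations.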
According to Inside Mortgage Finance data, from calendar year 2007 to 2010 Ginnie Mae's share of the MBS market increased from nearly 5 percent to 25 percent as the total size of the secondary mortgage market declined and the role of private-label MBS issuers declined substantially. The size of the MBS market decreased from $2.16 trillion in new MBS in calendar year 2005 to $1.57 trillion in calendar year 2010, a decline of about 27 percent (see fig. 2). The overall market decline was driven by the housing downturn and increased defaults and foreclosures, which led mortgage lenders to tighten their underwriting standards and make fewer loans. Also, private-label MBS issuers faced a sharp decline in eligible loans and investor demand. As the demand for FHA and other federally insured or guaranteed mortgages grew during this time, financial institutions increased their issuance of Ginnie Mae-guaranteed MBS to finance these federally insured or guaranteed loans. As Ginnie Mae's market share increased, the number of Ginnie Mae issuers generally stayed the same, although their numbers declined from 2007 to 2008 and increased in 2009 and 2010 (see fig. 3). Moreover, for the first three quarters of 2011, 371 financial institutions participated in the Ginnie Mae-guaranteed MBS program. While most were mortgage banks, the issuers with the largest Ginnie Mae-guaranteed MBS portfolios were commercial banks. As of June 30, 2011, three commercial banks accounted for nearly two-thirds of the dollar amount of outstanding Ginnie Mae-guaranteed MBS. According to Ginnie Mae data, concentration among issuers generally has remained the same. More specifically, in 2005, 20 issuers accounted for 92 percent of Ginnie Mae single-family MBS issuance; in 2010, 26 issuers accounted for 94 percent of single-family MBS issuance.
According to Ginnie Mae data, as Ginnie Mae's share of the secondary mortgage market increased, the volume of Ginnie Mae-guaranteed MBS outstanding increased from $412 billion in 2005 to more than $1 trillion in 2010 (see fig. 4). Concurrently, new guarantees of Ginnie Mae-guaranteed MBS increased from about $89.3 billion to nearly $413 billion. To accommodate the securitization of an increasing volume of federally insured and guaranteed mortgages, Congress increased the statutory cap on Ginnie Mae's commitment authority from $200 billion to $500 billion over the same period. The increases in annual volume were due to increases in the volume of mortgages insured by FHA or guaranteed by VA, PIH, or RHS that were pooled into Ginnie Mae-guaranteed MBS (see fig. 5). Of the agencies, FHA accounted for most of the increases in annual volume. FHA-insured loans pooled into Ginnie Mae-guaranteed MBS increased from $63.8 billion in 2005 to $330.2 billion in 2010 and, more recently, totaled $182 billion during the first three quarters of 2011. Furthermore, in 2010, nearly all single-family mortgages insured by FHA or guaranteed by VA were pooled into Ginnie Mae-guaranteed MBS. In addition to issuing guaranteed MBS from loans for single-family homes, Ginnie Mae issuers increasingly produced MBS backed by other mortgage products, such as multifamily loans and reverse mortgages on single-family homes (see fig. 6). More specifically, the volume of reverse mortgages backing Ginnie Mae-guaranteed MBS increased significantly starting in 2009, when Ginnie Mae instituted its reverse mortgage securities program, which was the main securitization program available for FHA reverse mortgage loans during this time. During the first three quarters of 2011, financial institutions issued more than $8 billion in Ginnie Mae-guaranteed MBS backed by reverse mortgages. The volume of structured products backed by Ginnie Mae-guaranteed MBS also increased as the total volume of MBS increased after 2005.
For instance, the volume of REMICs issued by financial institutions approved to issue Ginnie Mae structured products increased in 2009 and 2010 (see fig. 7). During the first three quarters of 2011, financial institutions issued $102 billion in REMICs, $27 billion in Platinum Securities, and $670 million in Callable Trusts. Ginnie Mae's fee revenues also increased from these products, from $20.7 million in 2005 to $63.4 million in 2010. As of June 30, 2011, Ginnie Mae had received $45.2 million in fee revenues from structured products for 2011. Fees from these products represent a small but growing share of annual revenue for Ginnie Mae (from 2.6 percent in 2005 to 6.3 percent in 2010). Operational risk is the risk of loss resulting from inadequate or failed internal processes, people, and systems or from external events. We and others, including HUD's OIG, have identified limited staff, substantial reliance on contractors, and the need for modernized information systems as operational risks that Ginnie Mae may face. Ginnie Mae also faces counterparty risk when an issuer fails or defaults, which would require the agency to service the underlying loans and ensure that investors receive monthly principal and interest payments. Ginnie Mae has taken a number of steps to address both types of risks. A complete listing of Ginnie Mae's planned changes to address operational and counterparty risk can be found in appendix II. To help mitigate operational risk, Ginnie Mae has developed strategies to address staffing gaps, realigned its organizational structure, conducted risk assessments on its contracting, and started to improve outdated information systems. Although Ginnie Mae's market share and volume of MBS have increased in recent years, its (noncontractor) staff levels have been relatively constant during this time despite requests for increased staffing authority.
For example, in 2004, when Ginnie Mae's MBS market share was 7 percent, HUD conducted a Resource Estimation and Allocation Process (REAP) study, which suggested that Ginnie Mae's staff be increased from 70 to 76 full-time equivalent (FTE) positions. However, Ginnie Mae officials told us that its authorized staff levels were not increased to the levels suggested in the REAP study until 2010, when the agency was given authority for 78 FTEs. Between 2005 and 2009, Ginnie Mae's authorized staff level fluctuated between 67 and 72.2 FTEs. Moreover, its actual staff levels trailed its authorized staff levels. Table 1 illustrates the number of requested, authorized, and actual FTEs from 2005 to 2010. Most recently, Ginnie Mae's internal control reviews for 2009 and 2010 identified a control deficiency due to employee vacancies. In 2009, the report found multiple vacancies in positions relevant to internal controls, such as an internal control manager and monitoring analysts. The report also found that the vacancies caused employee workloads to increase, which could negatively affect performance. In 2010, the report stated that while key senior-level positions had been filled, vacancies had brought actual FTE levels below the level recommended in the 2004 REAP study, mainly in the Office of Mortgage-Backed Securities. In 2011, the reviews had no findings related to employee vacancies. As part of a broad effort to address and mitigate its operational risks related to staffing levels, Ginnie Mae has incorporated some principles consistent with our internal control and management tool. Internal control and human capital guidance states that agencies should develop strategies tailored to address gaps in the number and deployment of staff, evaluate their organizational structure, and make changes based on changing conditions.
Consistent with this guidance, Ginnie Mae has identified skill gaps in staff resources, developed a plan to hire additional staff, and made changes to its organizational structure. In 2010, Ginnie Mae officials presented HUD senior management with a staffing justification that identified skill gaps in its current staffing. Ginnie Mae officials reported needing 160 staff to develop or enhance policies, procedures, and related systems to properly manage risks and bring some contracted services in-house, such as project management. The staffing justification stated that Ginnie Mae did not have sufficient or dedicated staff to mitigate certain risks internally. To identify these gaps in staffing, Ginnie Mae created a matrix that identified certain roles that were not fully staffed. For example, the matrix identified that Ginnie Mae needed:
• dedicated staff to design, develop, and leverage risk-related analytic tools to reduce dependency on recommendations of contractors to manage Ginnie Mae's risk;
• dedicated staff to develop exit and replacement strategies for critical,
• dedicated staff to manage and oversee operational risks;
• dedicated staff to establish and manage loss reserves and portfolio
• sufficient staff to develop and maintain systems manuals used by employees and Ginnie Mae issuers and servicers.
In 2011, Ginnie Mae received approval to support a staffing level of 108 FTEs. Ginnie Mae had developed a plan to hire additional staff in two phases. For the first phase, Ginnie Mae focused on staffing 25 priority positions: 9 in the Office of Mortgage-Backed Securities, 5 in the Office of Finance, 4 assisting the Chief Risk Officer, 2 in the Office of Capital Markets, 3 in the Office of Management Operations, 1 in the Office of Program Operations, and 1 in the Office of the President and Executive Vice President. The President's 2012 budget request included $30 million for additional administrative expenses, including hiring up to 249 FTEs.
According to Ginnie Mae officials, the increase would allow the agency to implement its second phase of hiring and increase its staffing levels. However, Ginnie Mae officials explained that in July 2011 they reassessed and revised the budget request after determining that the requested $30 million would be sufficient to hire only 137 FTEs. According to Ginnie Mae officials, additional flexibility provided in the budget request will enable Ginnie Mae to strengthen risk management and oversight, move in-house some functions performed by contractors, and provide flexibility for future needs. More specifically, if Ginnie Mae does not receive the authority requested in its revised 2012 request, officials told us the agency would be forced to spread its limited resources across its many risk management efforts and would have little capacity to conduct preventative analysis, leaving Ginnie Mae to rely on a more reactive approach. Ginnie Mae initially proposed realigning its organizational structure to support increased staffing levels in November 2010 and amended its proposal in March 2011 based on comments received from HUD senior management. Ginnie Mae proposed the revisions to create a new office and add divisions under an existing office so that new staff could be more effectively integrated into the agency. For example:
• The proposed structure created an Office of Enterprise Risk to be headed by the Chief Risk Officer. The Chief Risk Officer position and a Risk Committee were created in 2008 in response to a 2007 HUD OIG report identifying a potential conflict of interest between Ginnie Mae's issuer approval and issuer monitoring functions.
• The proposed structure added two divisions in the Office of Program Operations, which manages day-to-day functions for Ginnie Mae's MBS and structured product programs. The Project and Data Management Division will oversee and direct initiatives across Ginnie Mae, such as the implementation of new disclosure information.
The Operations Division will focus on managing operations, such as pooling loans and creating securities, and will direct Ginnie Mae's contractors, who maintain and operate a large part of Ginnie Mae's securitization process. Figure 8 illustrates the proposed reorganization. As of August 2011, officials had received HUD approval to implement the new structure, had notified Congress and HUD's union, and were awaiting their responses to begin implementation. Between 2005 and 2010, as Ginnie Mae's volume and issuer activity increased and staff levels remained largely the same, the agency increasingly relied on contractors. In 2005, we reported that in 2004 approximately 81 percent of Ginnie Mae's activities were contracted out and concluded that ensuring the agency had sufficient staff capabilities to plan, monitor, and manage its contracts was essential. According to Federal Procurement Data System-Next Generation data, from 2005 through 2010, Ginnie Mae obligated approximately $599 million on contracts. As shown in figure 9, while the amount of obligations had been increasing since 2005, obligations increased significantly in 2009 and 2010. Contract obligations in 2010 were more than 14 times the obligations in 2005 due, in part, to increases in volume and market share, expenses related to servicing nonperforming loans in defaulted issuers' portfolios, and the need to use contracts to implement planned improvements to technology systems. Further, the number of active contracts and orders increased from 18 in 2005 to 37 in 2010. According to Ginnie Mae officials, they have contracted out many functions because the agency has the flexibility to use agency revenues to procure contractors. That is, Ginnie Mae statutorily has more flexibility to spend funds for contracting expenses because such expenses can be funded from agency revenues without annual appropriations. To pay for staff, Ginnie Mae must seek annual appropriations that have to be approved by HUD, OMB, and Congress.
As a result, Ginnie Mae has relied on contractors to develop and operate information technology systems, manage and dispose of acquired mortgage portfolios, and conduct monitoring reviews of issuers. According to Ginnie Mae officials, throughout its history the agency has operated with a business model that includes a small staff that is largely supported by contractors because of the difficulty in securing annual appropriations and not being able to use agency revenues to pay for staff. Officials explained they have not conducted a formal benefit-cost assessment of using contractors but believe such a heavy reliance on contractors may not be cost-effective. Ginnie Mae depends on contractors to provide a variety of services, including those related to guaranteeing MBS, such as collecting data from issuers and processing monthly principal and interest payments to investors. In addition, Ginnie Mae relies on several contractors to take over the servicing responsibilities on pooled loans when issuers default. Table 2 illustrates some core functions at Ginnie Mae performed by contractors and the total amounts obligated from 2005 to 2010. Ginnie Mae has used its own staff and third-party assessments of contracts to oversee its contractors but plans to provide additional staff resources to supplement the third-party assessments. According to HUD’s contractor monitoring guide and handbook on procurement policies and procedures, a Government Technical Representative (GTR) should be assigned to oversee and monitor the contractor’s performance. For example, the guidance requires that GTRs monitor the contract for timeliness and review invoices for accuracy. Since 1993, Ginnie Mae has relied on third-party contractors to conduct Contract Assessment Reviews (CAR) in accordance with procedures developed by Ginnie Mae. 
In general, the CARs guidance outlines that the third-party contractor should focus on determining whether the contractors complied with the terms of their contracts, conducted appropriate billing, and maintained adequate internal controls to minimize risk to Ginnie Mae. The CAR reports also provide information on any potential risks to Ginnie Mae based on other completed audits and reviews. These reviews are to be conducted on contracts that have expended more than $1 million. Ginnie Mae officials explained they had plans to supplement these reviews in 2011 by hiring additional Ginnie Mae staff to conduct on-site reviews and oversight concurrent with, and independently of, the third-party contractors. However, due to changes to its budget, implementation of this plan has been put on hold until 2012 or 2013. Officials explained that in previous years staffing limitations meant that issues identified in one review were not addressed until the following review. In some instances, there might be a significant time lag between reviews; one review might cover a 15-month period while another would cover a 9-month time frame. Ginnie Mae officials explained the timing of the reviews often depended on the time needed to procure the contractors rather than on a set schedule. Our nonprobability sample of 33 CAR reports from 2005 to 2010 showed that the reviews produced findings in areas including questionable costs, information technology controls, and accounting controls. For instance, one contractor did not have proper procedures to review timesheets and improperly billed Ginnie Mae for $2,621. The contractor agreed to develop formalized procedures and reimburse Ginnie Mae for the improper payment. Additionally, in a few instances the third party conducting the review had difficulty accessing necessary files to complete contractually required procedures. 
Ginnie Mae officials explained that they now work to address any access issues with contractors at the beginning of the contractor’s reviews. While our review of a sample of CARs identified some findings, the 2010 HUD OIG management letter discussed problems relating to one contractor and recommended associated improvements in internal controls, including assessing the effectiveness of CAR procedures. More specifically, in October 2009 Ginnie Mae identified accounting irregularities at its servicer of manufactured home loans. Agency officials subsequently asked the contractor that performs internal control reviews to do a more in-depth review of the servicer, including a file review. The internal control review confirmed the servicer had not completely or accurately processed manufactured home loan transactions for Ginnie Mae. As a result, Ginnie Mae officials explained they developed a corrective action plan and decreased the size of the portfolio managed by the servicer from $26 million in August 2010 to about $4.7 million in August 2011. The HUD OIG management letter suggested that internal control over Ginnie Mae’s manufactured housing servicer needed improvement and stated one of the causes for the finding was that the prior year CAR did not include procedures to review specific loan-level details. The HUD OIG made four recommendations—the one specific to CAR procedures stated Ginnie Mae should assess the effectiveness of and update CAR procedures if needed. Ginnie Mae officials told us that they have addressed the HUD OIG recommendations and have updated review procedures for this servicer and its other servicers of single-family and multifamily properties. Subsequent to these reviews, Ginnie Mae began to take other steps to address operational risks related to contracting that are consistent with the principle identified in our internal control and management tool to consider risks associated with major suppliers and contractors. 
More specifically, Ginnie Mae has conducted risk assessments of its contracts and potential operational risks, and plans to review the proposed recommendations and determine how to implement them. However, as of October 2011, none of the recommendations had been implemented. In December 2010, the Chief Risk Officer staff analyzed Ginnie Mae contracts and identified approximately 12 contracts that could pose operational risk to Ginnie Mae. The purpose of the risk assessment was to assess the inherent risks associated with activities its top contractors executed and to determine what controls the agency had in place or should have in place to mitigate risks. The potential risks to Ginnie Mae included (1) lack of a contingency plan if the contractor ceased work with Ginnie Mae, (2) poor internal controls, (3) nonperformance under contract terms, and (4) failure of operations. The analysis included short-term recommendations related to better management of internal controls—for example, increasing training requirements for GTR staff on areas of the greatest risk exposure to Ginnie Mae such as cost overruns and inadequate recordkeeping. Long-term recommendations included increasing the number of Ginnie Mae staff to reduce the dependency on a few key staff. Targeted recommendations included developing a transition plan to automate manual processes that might lead to operational errors, to help address the risk of failure of operations, and establishing formal contract reporting on projects with performance metrics, to help avoid nonperformance under contracts. Ginnie Mae also contracted with a firm to provide recommendations for enhancing its risk-management capabilities. In June 2011, the contractor’s study recommended that Ginnie Mae systematically assess staff overseeing its contracts to identify any gaps in expertise—for example, by annually using a checklist or other mechanism to identify expertise. 
In addition, the study suggested that Ginnie Mae develop a system to track any contract-related incidents so that any issues would be handled promptly. The study noted that as Ginnie Mae continues to grow, establishing formalized processes for contract-related incidents would be important. Although Ginnie Mae has conducted risk assessments on its contracts, it has not yet implemented the recommendations from these assessments. According to Ginnie Mae officials, they have deferred implementing the recommendations from the December 2010 risk assessment because staff working for the Chief Risk Officer also have been conducting another assessment on ways to improve contract management and procurement processes. Officials explained that once this review was complete, they would review recommendations from all three assessments and develop a plan to implement them collectively. Ginnie Mae officials also explained that during 2012, the Chief Risk Officer plans to work with senior management to assess the recommendations in the June 2011 study and prioritize their implementation relative to other competing projects currently underway at the agency, such as technology improvements and updates to its statistical model used to forecast cash flows to and from the program. We discuss technology improvements in the following paragraphs and the statistical model in the next section of this report. Concurrent with its other risk assessments, Ginnie Mae began to change its procurement practices in an effort to reduce its reliance on contractors for critical functions. More specifically, as part of senior management performance plans for the 2011 calendar year, managers have been directed to develop and put in place a contracting environment that leverages contractors and Ginnie Mae staff more effectively. For instance, some senior management performance plans include a directive to conduct a needs assessment for every contract that is new, has the option to extend, or has ended. 
These assessments consider whether the contract should be recompeted or whether targeted services or work products should be brought in-house, thereby reducing contractor expenses and reliance. Officials explained they also plan to include this directive in 2012 calendar year performance plans. Officials also told us that these needs assessments are required for all contract actions. As of August 2011, of the nine contracts for which needs assessments might be conducted, four had been completed. According to Ginnie Mae officials, the results of the assessments for two contracts identified possible ways to bring certain functions in-house; for example, bringing one contract for project management in-house may save $600,000. In 2012, Ginnie Mae officials expect to complete 17 needs assessments. Senior managers also told us they have been reviewing current contract provisions to make sure Ginnie Mae staff understood all the elements of a contract. For example, management reviewed one contract with a large technology component and found that the system documentation and user manuals had not been consistently updated. According to officials, Ginnie Mae recognizes the need for updated documentation and is in the process of modernizing the data system used by the contractor, which includes new system documentation and user manuals. Ginnie Mae has been working on an ongoing initiative to improve its information technology systems. According to officials, Ginnie Mae has been working on the first phase of its business process improvement initiative for the last few years based on a plan developed in conjunction with OMB. The main goal of the initiative is to modernize the agency’s technology by consolidating processes and eliminating redundant systems. Weaknesses identified in the agency’s existing systems included outdated data systems, a reliance on paper-based processes, and a lack of integrated data systems. 
According to our internal control management and evaluation tool, management should derive critical operating data from its information management function and support efforts to make improvements in the systems as technology advances. According to Ginnie Mae, the first phase of the initiative resulted in the creation of nine new information technology system initiatives. Seven of these initiatives have been in place since October 2009. For instance, one system allows Ginnie Mae to receive enhanced reporting and provide status information to issuers. Another allows Ginnie Mae issuers to provide pool information electronically. According to Ginnie Mae, these systems let Ginnie Mae modernize its technology by merging legacy systems into a centralized database. Ginnie Mae officials further explained that they have been modernizing the pooling information system so that it can be integrated with the enterprise-wide data system. In addition, Ginnie Mae has been drafting a strategy document for its ongoing initiative to look for additional business improvement opportunities in its information technology systems. To manage its counterparty risk, Ginnie Mae has processes in place to oversee issuers that include approval, monitoring, and enforcement. In response to changing market conditions and increased market share, Ginnie Mae revised its approval and monitoring procedures. In addition, Ginnie Mae has several planned initiatives to enhance its management of counterparty risk; however, many have not yet been fully implemented. Issuers are subject to the requirements outlined in the Ginnie Mae MBS guide and all participant memorandums, some of which have been made more stringent in recent years due to changes in industry and market conditions. In September 2008, Ginnie Mae issued a notice to participants that it was raising the issuer approval standards and requirements due to industry and market conditions. 
For example, newly approved issuers became subject to a 1-year probationary period, which begins after their first issuance or acquisition of a servicing portfolio. Before this time, new issuers had no probationary period. In addition, for newly approved and already existing issuers, the Office of Mortgage-Backed Securities monitors required risk thresholds, such as delinquency levels and loan matching statistics. New and existing single-family issuers also must meet increased net worth and liquid asset thresholds. Initially, new issuers in the single-family and reverse mortgage program had to have a minimum net worth of $250,000. In 2008, the minimum increased to $1 million. In October 2010, the minimum net worth requirement was raised to $2.5 million. At the same time, Ginnie Mae announced a new liquid asset requirement, which requires single-family issuers to maintain liquid assets equal to 20 percent of the issuer’s Ginnie Mae net worth requirement. According to the policy memorandum Ginnie Mae issued, the increased liquid asset requirement is intended to help ensure funds would be available when cash was needed for mortgage buyouts or to pay for potential indemnification requests from federal guarantee programs. Existing single-family issuers had until October 2011 to meet the increased net worth and liquid asset thresholds. Corresponding to changes in Ginnie Mae’s market share, the number of new issuer applications and approvals increased from 2008 to 2010 (see fig. 10). For the first three quarters of 2011, the agency received 73 new applications and approved 32 new issuers; 85 applications were denied or withdrawn. Ginnie Mae’s process for screening applications includes a review of the applicant’s net worth and its performance as an FHA lender. 
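As a rough illustration, the October 2010 thresholds for single-family issuers described above can be expressed as a simple compliance check. This is a minimal sketch: the function and variable names are hypothetical rather than part of any Ginnie Mae system, and the only figures used are the $2.5 million minimum net worth and the 20 percent liquid asset ratio stated in the text.

```python
# Illustrative check of the October 2010 single-family issuer thresholds:
# a $2.5 million minimum net worth and liquid assets of at least
# 20 percent of the required net worth. Names are hypothetical.

SINGLE_FAMILY_MIN_NET_WORTH = 2_500_000  # October 2010 requirement
LIQUID_ASSET_RATIO = 0.20                # 20 percent of required net worth


def meets_thresholds(net_worth: float, liquid_assets: float) -> bool:
    """Return True if an issuer meets both the net worth and liquid
    asset requirements for the single-family program."""
    required_liquid = LIQUID_ASSET_RATIO * SINGLE_FAMILY_MIN_NET_WORTH
    return (net_worth >= SINGLE_FAMILY_MIN_NET_WORTH
            and liquid_assets >= required_liquid)


print(meets_thresholds(net_worth=3_000_000, liquid_assets=600_000))  # True
print(meets_thresholds(net_worth=3_000_000, liquid_assets=400_000))  # False
```

Under these thresholds, a single-family issuer would need at least $500,000 in liquid assets (20 percent of the $2.5 million net worth requirement) in addition to meeting the net worth minimum itself.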
In addition, the applicant may be required to undergo a special servicer review if the applicant is not an approved Fannie Mae or Freddie Mac seller or servicer, or if Ginnie Mae believes the applicant warrants a more in-depth review. According to Ginnie Mae officials, the special servicer review (conducted by Ginnie Mae staff and its contractors) began in 2008 as an on-site review of the financial, management, and operational capacity of selected new applicants and existing issuers. As of June 30, 2011, the agency had conducted 32 special servicer reviews on new applicants since 2008, of which 27 applicants were approved and 5 were rejected. Officials explained one of Ginnie Mae’s goals is to decrease the approval time for issuers from approximately 1 year to 6–8 months. They plan to hire additional staff to review applications and have one of their contractors help obtain the necessary documentation from issuers. However, the creation of these new positions has been on hold due to decreases in the FTE levels for 2011 and potential budget decreases for 2012. Ginnie Mae also has been considering raising its application fee to deter issuers that might have little intention of issuing MBS but think approval from a federal entity would reflect well on their business. Ginnie Mae officials also told us they planned to expand the number of issuers by marketing Ginnie Mae and its products to smaller financial institutions, such as credit unions and state housing finance agencies, because the concentration of the MBS portfolio among a few issuers represents some level of risk to Ginnie Mae. For instance, if one large issuer were to fail, Ginnie Mae would be responsible for servicing more mortgages than if a small issuer failed. Officials said that the risk posed by concentration may be mitigated because these issuers generally were regulated at the federal level. 
Monitoring processes for issuers include the approval process for commitment authority, reviews of quarterly and monthly summary reports, and on-site reviews of issuers. Ginnie Mae has modified some of these processes in recent years by requiring issuers to request commitment authority more frequently and developing additional quarterly and monthly summary reports. The agency also plans to add other monitoring tools. According to Ginnie Mae officials, the agency uses its ability to limit or modify commitment authority requests as a primary risk-management tool (by limiting commitment authority, the agency reduces the flow of funds to the issuer). To deal with increased demand, in 2005, Ginnie Mae created two processes for granting commitment authority—streamlined and nonstreamlined requests. Issuers that meet required risk thresholds set by Ginnie Mae go through the streamlined process, which limits the number of approvals needed for the request. Issuers that do not meet these thresholds or are on Ginnie Mae’s watch list would be considered under the nonstreamlined process, which requires additional scrutiny by Ginnie Mae staff and additional approvals by Ginnie Mae management. Before 2005, the agency used the same process for those that did and did not meet required risk thresholds. Officials explained the change was made to increase the efficiency of the process for issuers who met required thresholds. Whether streamlined or not, officials explained requests for commitment authority now require more frequent approvals. Before 2008, issuers generally would request commitment authority annually. However, Ginnie Mae issuers currently apply for commitment authority in an amount equal to the securities they plan to issue during the next 4 months. Therefore, issuers generally must request the authority every 2 to 3 months, which allows Ginnie Mae to take an in-depth look at the issuer’s performance and compare it against its required risk thresholds. 
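The commitment authority cadence described above lends itself to a small worked sketch: a request covers roughly 4 months of planned issuance, and requests from issuers that meet the required risk thresholds (and are not on the watch list) follow the streamlined path. The function names and routing logic below are illustrative assumptions, not Ginnie Mae's actual procedures or systems.

```python
# Hypothetical sketch of the commitment authority process: a request
# covers about 4 months of planned issuance, and requests are routed to
# the streamlined or nonstreamlined path based on risk thresholds and
# watch-list status. All names and logic are illustrative.

MONTHS_COVERED = 4  # a request covers roughly 4 months of planned issuance


def requested_authority(planned_monthly_issuance: float) -> float:
    """Amount of commitment authority to request for the next cycle."""
    return MONTHS_COVERED * planned_monthly_issuance


def request_path(meets_risk_thresholds: bool, on_watch_list: bool) -> str:
    """Route a request to the streamlined or nonstreamlined process."""
    if meets_risk_thresholds and not on_watch_list:
        return "streamlined"
    return "nonstreamlined"  # additional staff scrutiny and approvals


# An issuer planning $50 million of issuance per month would request
# $200 million of authority for the coming 4-month window.
print(requested_authority(50_000_000))           # 200000000
print(request_path(True, on_watch_list=False))   # streamlined
print(request_path(True, on_watch_list=True))    # nonstreamlined
```

Because each request covers about 4 months of planned issuance, issuers generally return for a new request every 2 to 3 months, giving Ginnie Mae a recurring checkpoint against its risk thresholds.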
In 2010, Ginnie Mae also revised its guidance to require that streamlined requests receive management-level review rather than just a staff-level review in the Office of Mortgage-Backed Securities. The commitment authority process has been subject to internal reviews from 2006 through 2011, but these reviews found no material weaknesses. Specifically, Ginnie Mae’s annual internal control review generally examines the commitment authority process. Although control deficiencies—that is, less serious findings that identify an internal control that might not be designed to prevent or detect and correct issues—were identified from 2008 through 2011, officials explained the deficiencies did not result in any issuer being granted commitment authority that should not have received it. For instance, in 2008, Ginnie Mae was unable to locate the files for the sample of 25 files selected by the internal auditor to conduct its review. In 2009–2011, the required commitment authority checklist was not always completed according to guidance. To address the 2008 finding, Ginnie Mae officials explained that management was directed to enforce the guidance and the filing system was changed. For the 2009–2011 findings, Ginnie Mae updated procedures in its manual and amended the checklist twice. Since 2007, according to Ginnie Mae, one of the two contractors that manage the administration of the MBS program has created 30 new monthly and quarterly monitoring reports, which staff from the Office of Mortgage-Backed Securities review. Ginnie Mae officials explained that these reports were generally created because new programs, such as the reverse mortgage program, were developed that required new monitoring requirements, or enhancements were identified to existing monitoring processes, which required additional reporting. Among other new reports, in 2008 the contractor created a monthly summary report addressing active issuers. 
The report summarizes issuer risks (in areas such as default and financial condition) and results of issuer compliance reviews. However, Ginnie Mae has not updated its guidance to reflect this new report. Officials explained that Ginnie Mae staff rely on the summary information prepared by the contractor, which combines information on all issuers, rather than creating individualized reports. In fact, our analysis of 10 issuer files revealed that Ginnie Mae staff had not prepared monthly management worksheets for any of these issuers as Ginnie Mae’s guidance requires. Ginnie Mae officials said they plan to revise guidance in 2012 to reflect the move from staff preparing reports for individual issuers to reliance on contractor-prepared summary reports. In addition, Ginnie Mae officials explained that they have been enhancing data systems to assess counterparty risk. More specifically, according to the Chief Risk Officer, the agency’s highest priority is to develop a counterparty risk-management system by March 2012. The new system aims to help Ginnie Mae identify its total counterparty risk exposure with all entities, such as issuers and contractors. The system would include information on issuers, such as rating data and risk calculations, and an algorithm to predict issuer default. In addition, the system would incorporate a scorecard to help Ginnie Mae have a comprehensive view of issuers, including information on issuer required risk thresholds. Ginnie Mae also monitors issuers through on-site reviews conducted by a contractor. Ginnie Mae has implemented two new types of reviews since 2008 to provide additional monitoring of new and existing issuers and increased the frequency of reviews on new issuers. Previously, there were two types of issuer reviews—basic and special. In 2010, Ginnie Mae added a findings resolution field review, which differs from the other reviews because the issuer is not given prior notice of the review. 
The purpose of this review is to test whether corrective actions for prior findings have been implemented. According to Ginnie Mae officials, seven findings resolution field reviews have been conducted since the review was implemented. According to Ginnie Mae’s January 2011 revisions to its MBS guide, new issuers are subject to on-site basic or special reviews by contractors within 6 months of the start of their Ginnie Mae issuance activity, and then annually for 2 years from the start of activity. Before this revision, contractors reviewed new issuers 6 months after their issuance activity started but did not conduct the annual reviews. Our review of information on the frequency of new issuer reviews indicated that of the five new issuers whose issuance activity began between December 2010 and March 2011, none had been reviewed within 6 months of program participation as required. According to Ginnie Mae officials, two reviews were completed in September 2011 (3 months late) and the other three were delayed due to scheduling issues and competing priorities. Existing issuers are subject to on-site basic or special reviews by contractors no less than once every 3 years, but may be reviewed more frequently based on their ability to meet performance thresholds and other factors. For example, an issuer review may be prompted by an issuer’s portfolio size, monthly reporting portfolio statistics, a sudden increase in issuance activity, monitoring of delinquency reporting, previous review results and findings, a request from the Risk Committee, or other information received by Ginnie Mae indicating potential risk to the agency. We reviewed a March 2011 schedule for reviews of 196 issuers and found that 174 reviews were conducted within the 3-year time frame. According to Ginnie Mae officials, the 22 issuers not reviewed were not active issuers during the 3-year time frame. 
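A minimal sketch of the review cycle described above, assuming simple calendar-month arithmetic: a new issuer is due for review 6 months after its issuance activity starts and then annually for 2 years from that start, while an existing issuer must be reviewed at least every 3 years. All helper names and dates below are hypothetical.

```python
# Illustrative due dates for the on-site issuer review cycle. The
# month-addition helper ignores end-of-month edge cases for simplicity.
from datetime import date


def add_months(d: date, months: int) -> date:
    """Add whole months to a date (stdlib-only helper)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)


def new_issuer_review_dates(activity_start: date) -> list[date]:
    """Due dates for a new issuer: 6 months after the start of issuance
    activity, then annually for 2 years from that start."""
    return [add_months(activity_start, 6),
            add_months(activity_start, 12),
            add_months(activity_start, 24)]


def existing_issuer_next_review(last_review: date) -> date:
    """An existing issuer must be reviewed no less than every 3 years."""
    return add_months(last_review, 36)


start = date(2010, 12, 1)
print([d.isoformat() for d in new_issuer_review_dates(start)])
# ['2011-06-01', '2011-12-01', '2012-12-01']
print(existing_issuer_next_review(date(2008, 3, 15)).isoformat())
# 2011-03-15
```

For an issuer whose activity began in December 2010, for example, the first review would be due in June 2011, consistent with the 6-month requirement the five new issuers discussed above did not meet.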
Ginnie Mae officials explained that each year its contractor develops a schedule of the issuer reviews to be conducted in each quarter based on the factors identified earlier in this report. Currently, officials explained, they work with the contractor that maintains the database on issuer reviews to develop the schedule of issuers to be reviewed during the next time frame. However, they plan to enhance the development process by creating an additional factor for consideration—a scoring system that summarizes the results of prior issuer reviews—and coordinating with the Chief Risk Officer. Officials were unclear on the timeline for implementing this plan due to competing priorities with technology improvements. From 2005 to 2010, Ginnie Mae issued 3,971 findings from the basic and special issuer reviews. As of June 2011, 3,699 had been cleared (93 percent), and 268 had been referred (7 percent) to Ginnie Mae by the contractor for final resolution because they had not been cleared within the required time frame (see fig. 11). Findings from the issuer reviews fall into three risk categories (high, medium, and low). High-risk findings must be addressed within 21 days of the review, medium-risk findings within 45 days, and low-risk findings within 120 days. Findings are reflected as “cleared” if an issuer submits a resolution plan that includes evidence that the original cause of the finding has been corrected and a policy, procedure, or action was implemented to prevent the recurrence of the finding. Findings are considered “open” if they are not addressed in these time frames. Ginnie Mae can take a variety of enforcement actions against issuers, which we discuss in detail in the following paragraphs. According to Ginnie Mae officials, they do not have a database in place for tracking the resolution or timing of individual findings. 
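The resolution time frames above can be sketched as a simple due-date and clearance-rate calculation. This is illustrative only; the inputs taken from the text are the 21-, 45-, and 120-day deadlines and the counts of 3,971 findings issued and 3,699 cleared, while the function names are hypothetical.

```python
# Hypothetical sketch of finding-resolution deadlines by risk category
# and the overall clearance rate from 2005-2010 review findings.
from datetime import date, timedelta

RESOLUTION_DAYS = {"high": 21, "medium": 45, "low": 120}


def resolution_due_date(review_date: date, risk_level: str) -> date:
    """Date by which an issuer must address a finding of a given risk level."""
    return review_date + timedelta(days=RESOLUTION_DAYS[risk_level])


def clearance_rate(cleared: int, total: int) -> float:
    """Share of findings cleared, as a percentage."""
    return 100.0 * cleared / total


print(resolution_due_date(date(2010, 6, 1), "high"))    # 2010-06-22
print(round(clearance_rate(3_699, 3_971)))              # 93
```

A high-risk finding from a June 1 review, for instance, would be due 21 days later, and the reported counts work out to the 93 percent clearance rate cited above.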
However, they have been developing this capability through their information technology systems and expect it to be completed by June 2012, unless delayed by other priorities. To monitor the resolution or timing of findings, officials stated they received a weekly report from the contractor that lists all reviews completed over a certain period. The report includes the number of findings from each review and actions pending from issuers to close out any findings. If a finding has been referred to Ginnie Mae, the issuer is flagged in the system used to monitor issuers. Ginnie Mae’s internal control reviews from 2008 to 2011 repeatedly identified that the guidance used to conduct the issuer reviews should be updated so that the field review process incorporates tests that address changing risks in the MBS market. Ginnie Mae officials told us that they have not updated the guidance because the internal control review was not specific about what risks were not being identified by the issuer reviews. However, they said that they have made changes to their issuer reviews and monitoring procedures during this time—such as the unannounced on-site reviews and remote monitoring procedures on the movement of funds—that were not reflected in updates to their guidance. Officials expected to update their guidance by the end of 2012. They explained the delay in updating the guidance in 2009 was due to the increase in the number of new issuers in 2008 and 2009, the need to conduct more issuer reviews on both new and existing issuers, and a delay in adding more funds to the contract to update the guidance. As mentioned previously, issuers found not to be in compliance are placed on Ginnie Mae’s watch list or are subject to more scrutiny during the commitment authority approval process. In addition, Ginnie Mae may declare the issuer in default and terminate the issuer. As of June 2011, 27 single-family active issuers (of 165) were on the watch list. 
These 27 single-family issuers had an average portfolio size of about $4.8 billion. Issuers on the watch list generally receive a quarterly monitoring letter detailing the reason for being on the watch list and are given 30 days to respond and take action. According to Ginnie Mae officials, they do not track how long issuers stay on the watch list. Ginnie Mae’s desk manual on operational procedures and its MBS guide list the types of enforcement actions it can take against noncomplying issuers. However, Ginnie Mae officials explained they plan to update this guidance by December 2011 because the violations listed may warrant a wide range of responses based on the severity of the violations. For example, if an issuer is defaulted by one of the government-sponsored enterprises, this action would warrant a more severe response than missing a deadline to post a letter of credit. However, the guidance currently does not distinguish among types of violations based on severity. Based on its monitoring of issuers, Ginnie Mae may issue a notice of intent to default if an issuer has violated the guidelines identified in the MBS guide, such as a missed pass-through of a monthly principal and interest payment to an investor. Officials explained the most common enforcement action used against issuers was the notice of intent to default. From 2005 to 2010, Ginnie Mae issued 46 notices of intent to default (see fig. 12). Officials told us that they issued most of the notices because issuers committed an operational error, such as a missed payment, and that issuers rectified the errors in a timely manner. Once an issuer receives a notice of intent to default, the issuer has 30 days to respond. If the issuer does not respond in 30 days, Ginnie Mae takes action on the violation based on the information available. In the first three quarters of 2011, Ginnie Mae issued seven notices of intent to default. During 2005–2010, Ginnie Mae defaulted 21 issuers. 
Officials said the reasons for defaults have included suspensions by FHA, terminations by Fannie Mae, bankruptcy, or failure to submit audited financial statements. When an issuer is defaulted, Ginnie Mae takes over responsibility for servicing that issuer’s portfolio. Currently, Ginnie Mae has a large portfolio of single-family loans it is responsible for servicing due in part to the default of Taylor, Bean & Whitaker Mortgage Corporation in 2009. Ginnie Mae defines its mission as expanding affordable housing by linking capital markets to the nation’s housing markets. Ginnie Mae has been fulfilling its mission by securitizing the growing volume of federally insured and guaranteed mortgage loans. Changes in the housing market and the economic downturn have increased the volume and market share of Ginnie Mae-guaranteed MBS significantly in the last 5 years. Although Ginnie Mae’s portfolio of guaranteed MBS outstanding has grown, increasing the financial exposure to the federal government, it has mechanisms in place to help offset this financial exposure. As mentioned previously, Ginnie Mae charges issuers a guarantee fee and has accumulated reserves over the years. In addition, the mortgages that back Ginnie Mae-guaranteed MBS are fully or partially insured against default by another federal agency, such as FHA, VA, RHS, or PIH. Finally, Ginnie Mae has a number of practices in place to mitigate its operational and counterparty risks and has enhanced or plans to enhance these practices. Nevertheless, the methods by which Ginnie Mae measures the expected costs and revenues stemming from its growing commitments may not take full advantage of available data and techniques for accurately assessing program costs. According to Ginnie Mae’s financial statements, income to Ginnie Mae, mainly in the form of a guarantee fee paid by issuers, exceeded Ginnie Mae’s costs by an average of about $700 million each year from 2006 through 2010. 
As of September 30, 2010, excess revenues allowed Ginnie Mae to accumulate a capital reserve of about $14.6 billion. Ginnie Mae has not required appropriations from the general fund to cover any losses. Ginnie Mae uses fee revenue to cover the cost of issuer defaults by making timely payment of principal and interest to investors in Ginnie Mae-guaranteed MBS when an issuer is unable to do so. Although Ginnie Mae forecasts the severity of defaults, a higher-than-expected delinquency and default rate on those mortgages could require Ginnie Mae to make payments to investors using its accumulated reserves. Additionally, while mortgages backing Ginnie Mae-guaranteed MBS generally must be insured or guaranteed by another federal agency, such as FHA, borrower defaults may result in lower fee and claim payments to Ginnie Mae in some instances.  For instance, if the number of borrowers who prepaid or stopped paying their mortgages was greater than Ginnie Mae expected, guarantee fees paid by issuers would be less than expected.  For delinquent loans it acquires from defaulted issuers, Ginnie Mae makes advances of principal and interest to cover any late payments on those mortgages in the MBS pools. If the borrower made late payments and eventually defaulted, Ginnie Mae might not recover the entire value of the loss, although the mortgage was insured. For example, for FHA-insured mortgages, Ginnie Mae has to incur the cost to foreclose on a defaulted borrower but receives only a percentage of the associated costs. During 2005–2010, Ginnie Mae defaulted 21 issuers and took over the portfolio for approximately $28.8 billion in mortgages (see fig. 13). While the number of issuers defaulting has varied from two to five in recent years, the number of loans involved increased during this period. In 2009, Ginnie Mae defaulted a large issuer—Taylor, Bean & Whitaker Mortgage Corporation—and took over the portfolio for approximately $26.2 billion in mortgages. 
In general, the actual cost of a defaulted portfolio for Ginnie Mae cannot be determined until insurance or guarantee claims are processed and the number of fraudulent or delinquent mortgages determined. As of June 2011, Ginnie Mae had disbursed $7.4 billion as a result of the 21 defaults. However, according to its 2010 financial statements, after considering forecasted receipts from claims and recoveries, Ginnie Mae estimated that its defaulted issuer portfolio of about $4.5 billion at that time would result in net costs of approximately $53 million. For budgetary purposes, Ginnie Mae annually estimates the expected subsidy costs to the federal government of its guarantee activity. Ginnie Mae's subsidy cost estimates to date have indicated that the program would generate net revenues, meaning that the fees Ginnie Mae collects are expected to exceed its losses on a present value basis. These estimates take into account forecasted fees and expected losses in the event of an issuer default. Once an issuer defaults, Ginnie Mae would take over the issuer's portfolio as its own loan portfolio. As a result, the initial subsidy cost estimates take into account potential losses on the guaranteed portfolio as well as potential losses on its loan portfolio from the defaulted issuers. Agencies typically update or re-estimate the subsidy cost estimates annually to reflect actual program performance and changes in expected future performance. Ginnie Mae performed a re-estimate for the first time at the end of 2010, and officials told us that they plan to perform annual re-estimates going forward. The 2010 re-estimate lowered expected net revenues by $720 million from the previous estimate. Ginnie Mae officials explained that they performed the re-estimate of their portfolio in 2010 because for the first time the agency and OMB had developed a methodology upon which both parties could agree.
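The present value logic behind these subsidy estimates can be sketched in a few lines. The figures, discount rate, and function names below are hypothetical illustrations, not Ginnie Mae's actual model or data; a negative result corresponds to the net revenues described above.

```python
# Minimal sketch of a credit subsidy calculation on a present value basis.
# All cash flows and the discount rate are hypothetical, not Ginnie Mae
# figures. A negative subsidy cost means expected fees exceed expected
# losses, i.e., the program is projected to generate net revenue.

def present_value(cash_flows, rate):
    """Discount a list of annual cash flows (year 1, 2, ...) to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def subsidy_cost(fee_income, default_losses, rate):
    """Subsidy cost = PV of expected losses minus PV of fee income."""
    return present_value(default_losses, rate) - present_value(fee_income, rate)

# Hypothetical 5-year projection (in $ millions):
fees = [900, 880, 860, 840, 820]    # guarantee fee income
losses = [300, 25, 25, 25, 25]      # expected issuer-default losses
cost = subsidy_cost(fees, losses, rate=0.03)
print(f"Estimated subsidy cost: ${cost:,.0f} million")  # negative => net revenue
```

An annual re-estimate would rerun this calculation with updated cash flow projections and report the change from the prior estimate.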
Ginnie Mae officials noted that they faced challenges in developing a re-estimate methodology. Officials explained that the nature of their business posed a challenge because Ginnie Mae does not have a yearly cohort of loans like other federal guarantee programs. Ginnie Mae officials also stated that the re-estimate was performed due to the default of the Taylor, Bean & Whitaker Mortgage Corporation in 2009. Although Ginnie Mae has made some changes to the model it uses to forecast cash flows for the program, it has not implemented certain practices identified in Federal Accounting Standards Advisory Board (FASAB) guidance. While Ginnie Mae, as a government corporation, follows private sector accounting standards rather than FASAB accounting standards, we believe FASAB guidance on preparing cost estimates for federal credit programs represents sound internal control practices for evaluating Ginnie Mae's model. Ginnie Mae uses a statistical model to forecast cash flows, including guarantee fee income and costs related to issuer defaults, to develop a credit subsidy cost for the federal budget and to calculate a reserve for loss for its financial statements. Ginnie Mae's model uses historical trends on the default and prepayment characteristics of loans in its guaranteed MBS and estimates of future events, such as issuer defaults, to forecast 30 years of costs and revenues to the program. Ginnie Mae officials explained that they recognized improvements could be made to their model. In 2009, Ginnie Mae hired a contractor to redesign its model over a 2-year period. Ginnie Mae hired additional staff to assist with the development of the model in March 2011. The contractor completed the revised model in August 2011.
Examples of changes made to the model since 2009 include the following:

• Changing the data used in the model from FHA loan-level data to Ginnie Mae data, which includes data on other loans in Ginnie Mae-guaranteed MBS, such as PIH, VA, and RHS loans.

• Incorporating econometric methods similar to those used in FHA's model.

• Changing the types of scenarios used for stress testing. Previously, Ginnie Mae relied on vendor-provided scenarios rather than using customized scenarios tailored to Ginnie Mae.

Ginnie Mae staff recently obtained FHA's estimates of borrower default and prepayment and intend to use these for future credit subsidy estimates, credit subsidy re-estimates, and financial statements. However, the current model still does not implement certain practices identified in FASAB guidance and risk-budgeting guidance. According to FASAB guidance, managers of federal credit programs should develop cost estimate models that include the following characteristics:

• Estimates should be based on the best available data on the performance of the loans or loan guarantees, including data from related federal agencies. Furthermore, agency documentation supporting the estimates should include evidence of consultation with relevant agencies.

• Estimates also should include a sensitivity analysis to identify which cash flow assumptions have the greatest impact on the performance of the model. In addition, according to academic risk-budgeting guidance, it is important that stress testing, which is a form of sensitivity analysis, use realistic scenarios to provide accurate indications of the effect of variability in economic and market factors.

• Estimates can rely on informed opinion (i.e., management assumptions), but these assumptions should be used only in lieu of available data and on an interim basis. Moreover, agency documentation supporting the assumptions should demonstrate how the assumptions were determined.
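The sensitivity analysis that FASAB guidance calls for can be sketched as a one-at-a-time perturbation of cash flow assumptions. The toy cash flow model, assumption names, and baseline values below are purely illustrative; they are not drawn from Ginnie Mae's model.

```python
# Hypothetical one-at-a-time sensitivity analysis: shock each cash flow
# assumption and see which one moves the model output most. The toy model
# and baseline values are illustrative assumptions, not Ginnie Mae's.

def net_revenue(assumptions):
    """Toy cash flow model: guarantee fee income minus expected losses."""
    fees = assumptions["portfolio"] * assumptions["fee_rate"]
    losses = (assumptions["portfolio"] * assumptions["default_rate"]
              * assumptions["loss_severity"])
    return fees - losses

baseline = {"portfolio": 1_000_000, "fee_rate": 0.0006,
            "default_rate": 0.0005, "loss_severity": 0.40}

# Shock each assumption by +10 percent and record the change in output.
impacts = {}
for key in baseline:
    shocked = dict(baseline, **{key: baseline[key] * 1.10})
    impacts[key] = net_revenue(shocked) - net_revenue(baseline)

# Rank assumptions by the magnitude of their effect on the model.
for key, delta in sorted(impacts.items(), key=lambda kv: -abs(kv[1])):
    print(f"{key:>14}: change in net revenue = {delta:+.1f}")
```

Documenting a ranking like this is one way an agency could show which assumptions drive the estimate and therefore deserve the most scrutiny.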
Although FASAB suggests that estimates be based on the best available data, Ginnie Mae did not fully evaluate the benefits and costs of using data from relevant agencies, including FHA, to develop borrower default and prepayment estimates. More specifically, it did not consider or assess the benefits of using FHA's default and prepayment model, rather than spending resources on developing its own model. According to Ginnie Mae officials, they took steps intended to improve the revised model by using their own loan-level data as a basis for developing estimates of borrower default and prepayment. However, Ginnie Mae did not perform or document any analyses to determine what other data from FHA—or VA, RHS, and PIH—could improve its model or help assess its cost-effectiveness. Ginnie Mae officials explained that they used their own loan-level data in the revised model because they could incorporate data on mortgages from all of the guaranteeing agencies, without obtaining data from each of the agencies that insure or guarantee mortgages in Ginnie Mae-guaranteed MBS. However, since approximately 80 percent of loans pooled into Ginnie Mae-guaranteed MBS are FHA-insured mortgages, there may be some benefits to incorporating elements of FHA's data or model. These benefits include cost-effectiveness and the potential for more detailed loan-level data than Ginnie Mae collects on FHA mortgages. Similarly, there may be benefits to incorporating VA data on loans it guarantees, which represented 16 percent of loans pooled in Ginnie Mae-guaranteed MBS in 2010. More specifically, FHA's models include certain data elements that Ginnie Mae's model does not, such as identifying which loans are FHA streamlined refinancing products and reverse mortgages. An FHA official with whom we spoke explained that these types of mortgages have different borrower default and prepayment characteristics.
In addition, the official explained that including information identifying these types of mortgages would improve the predictive quality of any model of default and prepayment. For example, according to 2009 FHA data, borrowers who refinanced their mortgage under the streamlined refinance program had higher early payment delinquency rates than those with other refinanced mortgages. Our review of Ginnie Mae's August 2011 revised model showed that it did not identify reverse or streamlined-refinanced mortgages. However, since our review of the model, Ginnie Mae officials said they have received data from FHA on estimates of borrower default and prepayment and intend to use this information for preparing future credit subsidy estimates, credit subsidy re-estimates, and financial statements. FHA's estimates of borrower default and prepayment do include data on streamlined-refinance mortgages. Ginnie Mae officials have not yet incorporated data on reverse mortgages, which are modeled separately by FHA, or explored and documented VA estimates of defaults and prepayments in their model. According to Ginnie Mae officials, they are using FHA data to approximate the experience expected of VA loans rather than using VA data directly (by adjusting these data for expected differences in prepayment and default experience). However, the analysis underlying these adjustments has not been documented. According to FASAB, sensitivity analysis should be performed to improve the accuracy of a model. A stress test provides an analysis of the sensitivity of a model's forecasted cash flows in response to extreme changes in economic conditions. According to academic risk-budgeting guidance, using realistic stress test scenarios is important to accurately indicate the effect of variability in economic and market factors.
More specifically, stress test scenarios should consider the impact of movements of individual market factors and interrelationships or correlations among these factors. Although Ginnie Mae recently has developed more customized stress test scenarios in its revised model, some of these scenarios may not be realistic because they do not reflect the interrelationships between economic and capital markets factors. For example, Ginnie Mae's revised model includes customized scenarios that focus on mortgage rate movements. More specifically, mortgage rates in one scenario were lowered by 300 basis points, or 3 percent, but no other economic variables, such as housing prices and unemployment rates, were changed. Ginnie Mae's revised model stated that this scenario consistently produced the lowest cumulative defaults across its FHA and VA portfolio. However, as we previously reported, an economic scenario involving a mortgage rate decrease combined with rising unemployment and falling house prices could produce more realistic model results. Such a scenario could produce a different, yet plausible, pattern of defaults under economically stressful conditions. If the scenarios Ginnie Mae used were unrealistic, it could affect the accuracy of its model. Ginnie Mae relies on management assumptions rather than data to forecast issuer defaults and mortgage buyout rates. For example, Ginnie Mae's management assumptions for the costs of future issuer defaults were $300 million in 2011 and $25 million annually from 2012 to 2015. Ginnie Mae officials were not able to provide documentation on the basis for these assumptions and explained that they have had difficulty forecasting the risk that an issuer would default because defaults are dependent on both economic and noneconomic factors.
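The contrast between an isolated rate shock and a correlated scenario can be illustrated with a toy default model. The model and its factor weights are invented for illustration and are not Ginnie Mae's scenarios; the point is only that moving several correlated factors together can yield materially different default outcomes than moving one factor alone.

```python
# Hedged sketch contrasting a single-factor rate shock with a correlated
# scenario of the kind the report describes. The toy default model and its
# factor weights are illustrative assumptions, not Ginnie Mae's model.

def default_rate(mortgage_rate_change, unemployment_change, house_price_change):
    """Toy default model: defaults rise with unemployment and falling house
    prices, and fall slightly when rates drop (more refinancing options)."""
    base = 0.005
    return max(0.0, base
               + 0.002 * unemployment_change    # points of unemployment
               - 0.010 * house_price_change     # fractional price change
               + 0.001 * mortgage_rate_change)  # percentage points

# Scenario 1: rates fall 300 basis points, everything else held fixed.
isolated = default_rate(-3.0, 0.0, 0.0)
# Scenario 2: the same rate drop with rising unemployment, falling prices.
correlated = default_rate(-3.0, 3.0, -0.15)

print(f"isolated shock:   {isolated:.4f}")
print(f"correlated shock: {correlated:.4f}")
```

Under these invented weights, the isolated rate drop lowers defaults while the correlated scenario raises them, which is why realistic stress tests vary related factors together.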
However, Ginnie Mae officials acknowledged that issuers that have a higher concentration of federally insured or guaranteed mortgages in their portfolio may face a greater risk of default if these mortgages default at high rates, and they could not continue making the required advances to investors. In addition, Ginnie Mae’s model does not incorporate data on the mortgage buyout rate but includes a management assumption that issuers will buy out all mortgages after default. The mortgage buyout rate affects Ginnie Mae’s cash flows because mortgage buyouts reduce the guarantee fee revenue on the MBS backed by the loans. However, Ginnie Mae officials told us changes in interest rates may influence an issuer’s decision to buy a defaulted mortgage out of an MBS pool. More specifically, officials explained that if an issuer’s borrowing rate (cost of capital) is higher than the interest rate on a delinquent mortgage, the issuer is less likely to buy the mortgage out of the pool and will choose to continue making advances to investors. When the issuer’s borrowing rate is lower than the interest rate on a delinquent mortgage, the issuer is more likely to buy the mortgage out of the pool at an earlier opportunity. Officials said they plan to include quantitative estimates on issuer defaults and interest rates in determining mortgage buyout rates in future iterations of the model, but the agency does not have a timeline for incorporating these data and the analysis into the model. Because Ginnie Mae’s revised model does not fully implement certain practices identified in FASAB guidance, the model may lack critical data needed to produce a reliable credit subsidy rate and reserve for loss amount, which could affect Ginnie Mae’s ability to provide more informed budgetary cost estimates and financial statements. 
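The buyout behavior officials described reduces to a simple rate comparison, sketched below. This is an illustrative simplification with hypothetical rates, not a model Ginnie Mae or any issuer actually uses.

```python
# Illustrative simplification of the buyout decision Ginnie Mae officials
# described: an issuer tends to buy a delinquent loan out of the MBS pool
# when its borrowing rate (cost of capital) is below the loan's interest
# rate, and to keep advancing payments otherwise. Rates are hypothetical.

def likely_to_buy_out(issuer_borrowing_rate, loan_interest_rate):
    """Return True if an early buyout is the cheaper choice for the issuer."""
    return issuer_borrowing_rate < loan_interest_rate

# A delinquent 6.5 percent loan, issuer borrowing at 4 percent: buyout likely.
print(likely_to_buy_out(0.04, 0.065))  # True
# Same loan, issuer borrowing at 8 percent: issuer keeps advancing instead.
print(likely_to_buy_out(0.08, 0.065))  # False
```

A quantitative buyout estimate would replace the fixed assumption that all defaulted loans are bought out with a rate-dependent rule along these lines, which in turn changes forecasted guarantee fee revenue.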
Ginnie Mae also may be forgoing opportunities to further enhance its model in the most cost-effective way possible by not regularly consulting with other agencies and evaluating their data. In addition, economic scenarios used to conduct stress tests on its revised model may not be sufficiently realistic, which may affect the accuracy of the model. Further, Ginnie Mae's reliance on management assumptions rather than quantitative estimates for issuer defaults and mortgage buyout rates also may affect the accuracy of the model, and the lack of documentation on how these assumptions were developed limits the model's transparency. Ultimately, because of these limitations, Ginnie Mae may be unable to accurately portray the extent to which its programs represent a financial exposure to the government. During the recent financial crisis and in response to continuing stresses in housing markets, Ginnie Mae has assumed an increasingly prominent position in the secondary mortgage market. However, risks have accompanied its growth. The agency has faced an increased reliance on contractors to perform many critical functions, while at the same time coping with relatively flat staffing levels and outdated information technology. Although Ginnie Mae has conducted risk assessments on its contracts and enhanced some processes, technology, and staffing, or planned to do so, a number of recommendations from these assessments and initiatives remain in planning or under development—warranting vigorous and continued commitment and follow-through from senior management. In recent years, Ginnie Mae also received a salient demonstration of counterparty risk when it defaulted a major issuer and had to assume and service a $26 billion loan portfolio. This and other issuer defaults and issuance volume surpassing a trillion dollars highlight the need for comprehensive risk mitigation and monitoring.
As with operational risks, the agency has several planned initiatives to enhance its management of counterparty risk, which have yet to be fully implemented. These actions are critical to Ginnie Mae's efforts to enhance its operations, and we encourage the agency to complete their implementation as soon as practicable. Finally, although Ginnie Mae revenues have exceeded costs thus far (including the costs of defaults) and the agency has a positive capital reserve, it has had to lower net revenue projections in a recent re-estimate of program costs. A combination of factors, including changed economic conditions, increased Ginnie Mae market share and volume, and the results of the re-estimate, suggests that now is an opportune time for the agency to reexamine its data sources and methodologies, and identify opportunities to improve inputs and analyses for the statistical model it uses to forecast cash flows to and from the program. Ginnie Mae has acknowledged that it could improve the model and has said it will use FHA's estimates of borrower default and prepayment for preparing future credit subsidy estimates, credit subsidy re-estimates, and financial statements. For example, Ginnie Mae officials explained that they are using FHA data to approximate the experience expected of VA loans rather than using VA data directly. However, the analysis underlying these adjustments has not been documented. Without full documentation, it is not possible to assess the rigor of this analysis. Given that VA loans represented 16 percent of Ginnie Mae's portfolio in 2010, it is important that Ginnie Mae evaluate and document the accuracy of these assumptions going forward and assess whether they are sufficiently accurate for VA loans or whether it should use VA data directly.
We identified several areas in which the agency could better implement certain practices identified in federal guidance for estimating program costs, including using the best available data, conducting sensitivity analyses, and assessing and documenting reasons for using management assumptions (judgment) rather than data. By consulting other agencies, assessing different modeling inputs and approaches, and leveraging other agencies' datasets, Ginnie Mae could provide more informed budgetary cost estimates and financial statements. In addition, Ginnie Mae could realize opportunities to further enhance its model in a cost-effective way. Further, by developing quantitative estimates for issuer defaults and mortgage buyout rates, Ginnie Mae could better predict potential impacts on the costs and revenues of its programs. With more informed budgetary cost estimates and financial statements, Congress could more confidently use this information to understand the extent to which Ginnie Mae's credit programs represent a financial exposure to the government. To help ensure that Ginnie Mae is developing the most accurate model for estimating costs and revenues, we recommend that the Secretary of Housing and Urban Development direct Ginnie Mae to take steps to ensure its model more closely follows certain practices identified in Federal Accounting Standards Advisory Board guidance for estimating subsidy costs of credit programs. More specifically, Ginnie Mae should take the following four actions:

1. Assess and document that it is using the best available data in its model and the most appropriate modeling approach. For example, Ginnie Mae should determine if other agencies' datasets (such as those of FHA, VA, RHS, or PIH) provide more detail that could lead to better predictability. Ginnie Mae should also determine whether using other models for prepayment and defaults is sufficient for accurately estimating future guarantee fee revenue.

2. Conduct and document sensitivity analyses to determine which cash flow assumptions have the greatest impact on the model.

3. Document how management assumptions are determined, such as those for issuer defaults and mortgage buyout rates.

4. Assess the extent to which management assumptions, such as those for issuer defaults and mortgage buyout rates, can be replaced with quantitative estimates.

We provided copies of this draft report to HUD, VA, USDA, OMB, and the Federal Housing Finance Agency for their review and comment. Ginnie Mae (HUD) provided written comments that have been reprinted in appendix III. Ginnie Mae, OMB, VA, and the Federal Housing Finance Agency provided technical comments, which we incorporated as appropriate. USDA did not have any comments. The President of Ginnie Mae wrote that Ginnie Mae is working towards implementing our recommendation for conducting sensitivity analyses relating to issuer risk and behavior, but neither agreed nor disagreed with our other specific recommendations that are also intended to better ensure that Ginnie Mae is developing the most accurate model for estimating costs and revenues. Rather, Ginnie Mae noted that limited funding and resources constrained its ability to develop its model to forecast cash flows for the program consistent with our recommendations. However, as we also note, Ginnie Mae devoted significant resources to designing its models, but did not fully implement certain practices identified in FASAB guidance when redesigning its model. More specifically, Ginnie Mae hired a contractor in 2009 to redesign its model over a 2-year period, which cost approximately $1.8 million. The expected additional cost for the subsequent 3-year period of the contract is $193,000. Ginnie Mae agreed with a number of our findings.
In particular, the President of Ginnie Mae noted that he agreed with the report's analysis that limited staff, substantial reliance on contractors, and the need for modernized information systems are operational risks that Ginnie Mae faces. In addition, Ginnie Mae agreed with our observation about the importance of completing ongoing and planned initiatives for enhancing its risk-management processes, as soon as practicable, to improve operations. Finally, while Ginnie Mae agreed that its model could be further enhanced by incorporating some general FASAB guidance, the President of Ginnie Mae stated that some aspects of the guidance did not provide a relevant framework for the nature of Ginnie Mae's business. We recognize these differences, and as discussed in the report, our analysis focuses on particular aspects of FASAB guidance that are specific to developing cost estimate models, and we believe the guidance that we cite provides a relevant framework for Ginnie Mae. Ginnie Mae also discussed a number of other issues that were beyond the scope of this review. For example, Ginnie Mae stated that its negative credit subsidy calculation is overstated because OMB currently does not allow Ginnie Mae to reduce the negative subsidy to reflect administrative costs. Additionally, Ginnie Mae noted that FCRA-type accounting presented a challenge because the agency could not use funds generated from previous fiscal years' negative subsidy payments to cover the cost of defaulted issuers. For this study, we did not assess the accounting that Ginnie Mae is required to perform. To achieve the objectives of this report, we reviewed Ginnie Mae's financial statements and its subsidy estimate and re-estimate to understand how Ginnie Mae's portfolio may affect financial exposure to the federal government and how well Ginnie Mae has been managing this exposure.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Housing and Urban Development, appropriate congressional committees, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Mathew Scirè at [email protected] or (202) 512-8678. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To describe changes in the Government National Mortgage Association's (Ginnie Mae) market share and volume, we collected and analyzed data from Ginnie Mae and Inside Mortgage Finance, a firm that collects data on the primary and secondary mortgage markets. The data from Ginnie Mae covered fiscal years 2005–2010 and part of fiscal year 2011 (October 2010 through June 2011). We analyzed information on the number and types of institutions that issue Ginnie Mae-guaranteed mortgage-backed securities (MBS) and the share of outstanding MBS by type of issuer. We included information on the amount of federally insured and guaranteed mortgages pooled into new Ginnie Mae-guaranteed MBS; the amount of new guaranteed MBS backed by reverse mortgages and multifamily loans; and the volume of Ginnie Mae structured products. The data from Inside Mortgage Finance covered calendar years 2005–2010. We analyzed information on the volume of MBS issuance by Ginnie Mae issuers, private-label issuers, and government-sponsored enterprises, and cumulative outstanding guaranteed MBS.
We assessed the reliability of the data provided by Ginnie Mae by reviewing documentation on the systems that produced the data, performing electronic testing, and conducting interviews with relevant officials at Ginnie Mae and its contractors. To assess the reliability of the data provided by Inside Mortgage Finance, we interviewed an official at the firm and reviewed documentation that describes how market information is collected. We determined that the data were sufficiently reliable for the purposes of this report. We also reviewed Ginnie Mae's 2009 and 2010 reports to Congress. To describe the reasons for changes in Ginnie Mae's market share and volume, we interviewed officials from the Department of Housing and Urban Development (HUD)—more specifically, from Ginnie Mae, the Federal Housing Administration (FHA), the Office of the Inspector General, and Public and Indian Housing; the Department of Agriculture's Rural Housing Service; the Department of Veterans Affairs; Fannie Mae and Freddie Mac (government-sponsored enterprises); the Federal Housing Finance Agency; and the Mortgage Bankers Association. To assess the types of risks Ginnie Mae faces and how it manages these risks, we conducted a literature review of risks that may be prevalent in the MBS market for Ginnie Mae and government-sponsored enterprises. We also interviewed officials from Ginnie Mae, Fannie Mae, and Freddie Mac to determine what risks these entities face and how they are managed. From this review and discussion, we identified counterparty and operational risk as the key risks facing Ginnie Mae. For both of these risks, we reviewed and identified principles in our internal control and management tool relevant to managing these risks. We also identified human capital principles in our prior work on the topic. We compared these principles to the steps that Ginnie Mae took to manage its risk.
For operational risk, we focused on risks present in the agency's management of human capital, contracting, and technology. We assessed Ginnie Mae's staffing and organizational realignment plans to determine if Ginnie Mae developed strategies to address gaps in staffing needs and evaluated its organizational structure and made changes based on changing conditions. We collected information on the number of requested, authorized, and actual full-time equivalent positions in 2005–2010 and part of 2011 (October 2010 through June 2011). We assessed the reliability of the information provided by reviewing documentation that HUD's budget office and Ginnie Mae's administrative officer maintain and conducting interviews with relevant officials. We determined that the data were sufficiently reliable for the purposes of this report. We reviewed HUD's 2004 Resource Estimation and Allocation Process study. We also reviewed proposed 2012 budget documents produced by the Office of Management and Budget. To assess how Ginnie Mae managed risks associated with contracting, we reviewed Ginnie Mae's guidance and other HUD and federal contracting standards. We obtained and analyzed Federal Procurement Data System-Next Generation data to determine the amount of Ginnie Mae contract dollars awarded from 2005 to 2010. We assessed the reliability of the data by reviewing documentation on the systems that produced the data, conducting interviews with relevant officials at Ginnie Mae and HUD's Office of the Chief of Procurement, and reviewing the internal controls on the data. We also reviewed the number of active contracts and orders during that time period. We reviewed a nonprobability sample of 14 contracts selected either because the activities involved represented a core business function or because Ginnie Mae identified the activities as key business functions that could result in operational risk if problems occurred with the contract.
In addition, we examined a nonprobability sample of 33 third-party Contract Assessment Reviews conducted between 2005 and 2010. We also interviewed 7 contractors from our nonprobability sample of 14 contracts to gain an understanding of the work performed and how they were monitored. The interviews included four contractors that received some of the largest obligation amounts from 2005–2010 and three third-party contractors who conducted reviews on the majority of these contracts. We interviewed Government Technical Representatives for five contracts to understand their role and how they monitored contracts. We also reviewed risk assessments conducted by Ginnie Mae and its review contractor in December 2010 and June 2011 and determined if these studies followed our principles (from our internal control and management tool) for agencies to consider risks associated with major suppliers and contracts. We reviewed the 2011 performance plans for senior management that contained directives to assess contracts. In addition, we met with HUD Inspector General officials and reviewed Ginnie Mae financial statements from 2006 to 2010, management letters from 2008 to 2010, and program audits on the MBS program and information technology. To assess how Ginnie Mae managed risks associated with its information technology, we reviewed Ginnie Mae’s information technology improvement initiative and discussed the initiative and additional plans with Ginnie Mae officials. For counterparty risk, we assessed Ginnie Mae’s MBS policies and guidance, including processes for issuer approval, monitoring, and enforcement. We interviewed Ginnie Mae officials and contractors about how issuers are approved and monitored and the changes made to these processes in recent years. We collected and reviewed data from Ginnie Mae from 2005 to 2010 and part of 2011 (October 2010 through June 2011) and described the number of new issuer applications, approvals, reviews, and findings. 
We assessed the reliability of these data and determined they were reliable for our purposes. In addition, we met with HUD Inspector General officials and reviewed 2008 and 2009 program audits on the MBS program. We also reviewed A-123 internal control reviews performed by a third-party contractor from 2006 to 2011 to determine the types of findings and recommendations on Ginnie Mae's approval, monitoring, and enforcement processes. We reviewed Ginnie Mae's risk committee minutes from 2008 to 2010 to determine the role of the committee and how risks were monitored. We reviewed documentation on a nonprobability sample of 10 issuers to understand the types of monitoring Ginnie Mae and its contractors conducted. To select the issuers, we used a certainty sample to select the three largest issuers based on overall portfolio size and one issuer newly approved after Ginnie Mae made changes to its process. The other six issuers were selected at random and included three that were on Ginnie Mae's watch list and three that were not. We also selected 5 of the 10 issuers to interview based on size, institution type, and results from monitoring reviews. Two of the issuers selected also were investors and sponsors of structured products. We also looked at examples of monthly and quarterly reports prepared by Ginnie Mae's contractor that report such information as issuer performance thresholds.
To determine how recent changes in Ginnie Mae’s market share and volume might affect financial exposure to the federal government and the agency’s ability to meet its mission, we interviewed officials from Ginnie Mae and its contractor that conducts modeling, the Office of Management and Budget, and FHA. We reviewed Ginnie Mae’s guidance and financial statements. We obtained and analyzed data on the potential risks of changes in Ginnie Mae’s market share and volume on its mission. More specifically, we analyzed data on FHA and Department of Veterans Affairs loan securitization rates from Inside Mortgage Finance for calendar years 2005–2010. To gain an understanding of how Ginnie Mae’s program might produce financial exposure to the federal government, we obtained data from Ginnie Mae on issuer defaults, including the number of pools, loans, and remaining balance of the assets needed to be serviced from 2005 to 2010 and part of 2011 (October 2010 through June 2011). We also obtained data as of June 30, 2011, on the associated costs Ginnie Mae incurred due to issuer defaults. We analyzed Ginnie Mae’s revenue and expense data to identify the extent to which its guarantee fee revenues covered its costs from 2005 through 2010 and part of 2011 (October 2010 through June 2011). We assessed the reliability of the data provided by Inside Mortgage Finance and Ginnie Mae by means such as interviewing relevant officials and reviewing documentation on the systems that produced the data. We determined that the data were sufficiently reliable for the purposes of this report. In addition, to determine how Ginnie Mae forecasts costs and revenues, we reviewed the Federal Credit Reform Act of 1990 and budget documents produced by the Office of Management and Budget. We also reviewed Ginnie Mae’s statutes and documentation related to the development of the annual subsidy estimate, including the credit subsidy re-estimate for 2010 and Ginnie Mae’s model used to forecast cash flows. 
Furthermore, we reviewed Federal Accounting Standards Advisory Board guidance for cost estimation of federal credit programs, academic research on risk budgeting, and FHA’s 2010 actuarial review. We compared this information with Ginnie Mae’s revised model. We conducted this performance audit from September 2010 to November 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As mentioned previously, Ginnie Mae has several planned and proposed initiatives to address operational and counterparty risk. Table 3 provides a listing of these plans. In addition to the contact named, Andy Pauline (Assistant Director), Serena Atim Agoro-Menyang, Jennifer Alpha, Marcia Carlsen, Kathryn Edelman, Carol Henn, Julia Kennon, John McGrail, Luann Moy, Marc Molino, Nadine Garrick Raidbard, Paul G. Revesz, Barbara Roesmann, and Heneng Yu made key contributions to this report.
The Government National Mortgage Association (Ginnie Mae) has increased its role in the secondary mortgage market significantly. Ginnie Mae is a wholly owned government corporation in the Department of Housing and Urban Development (HUD). It guarantees the timely payment of principal and interest of mortgage-backed securities (MBS) backed by pools of federally insured or guaranteed mortgage loans, such as Federal Housing Administration (FHA) loans. GAO was asked to (1) describe how Ginnie Mae's volume of MBS and market share have changed, (2) assess the risks Ginnie Mae faces and how it manages these risks, and (3) determine what effect recent changes in Ginnie Mae's market share and volume may have on financial exposure to the federal government, including mission. To address these objectives, GAO analyzed data on volume and market share and assessed their reliability. GAO also reviewed guidance and Ginnie Mae's credit subsidy calculations and estimation model, and interviewed agency officials and others. From 2007 to 2010, the volume of Ginnie Mae-guaranteed MBS and its share of the secondary mortgage market increased substantially. Ginnie Mae-guaranteed MBS outstanding grew from $412 billion to more than $1 trillion, and market share grew from 5 percent to more than 25 percent. As the demand for FHA and other federally insured or guaranteed mortgages grew during this time, financial institutions increased their issuance of Ginnie Mae-guaranteed MBS to finance these federally insured or guaranteed loans. Ginnie Mae has taken steps to better manage operational and counterparty risks, and has several initiatives planned or underway. The agency may face operational risk--the risk of loss resulting from inadequate or failed internal processes, people, or systems, or from external events--and counterparty risk--the risk that issuers fail to provide investors with monthly principal and interest payments. 
GAO and others, including HUD's Inspector General, have identified limited staff, substantial reliance on contractors, and the need for modernized information systems as operational risks that Ginnie Mae may face. For example, although Ginnie Mae's market share and volume of MBS have increased in recent years, its staffing levels were relatively constant and actual staff levels trailed authorized levels. In addition, between 2005 and 2010, the agency increasingly relied on contractors. Ginnie Mae has identified gaps in resources and conducted risk assessments on its contracts but has not yet fully implemented changes based on these analyses. To manage its counterparty risk, Ginnie Mae has processes in place to oversee MBS issuers that include approval, monitoring, and enforcement. In response to changing market conditions and increased market share, Ginnie Mae revised its approval and monitoring procedures. Ginnie Mae also has several planned initiatives to enhance its risk-management processes for issuers, including its tracking and reporting systems, but these plans have not been fully implemented. It will be important for Ginnie Mae to complete these initiatives as soon as practicable to enhance its operations. The growth in outstanding Ginnie Mae-guaranteed MBS resulted in an increased financial exposure for the federal government as Ginnie Mae fulfills its mission of expanding affordable housing by linking capital markets to the nation's housing markets. Nonetheless, Ginnie Mae's revenues have exceeded its costs and it has accumulated a capital reserve of about $14.6 billion. However, GAO found that in developing inputs and procedures for the model used to forecast costs and revenues, the agency did not consider certain practices identified in Federal Accounting Standards Advisory Board (FASAB) guidance for preparing cost estimates of federal credit programs. 
Ginnie Mae has not developed estimates based on the best available data, performed sensitivity analyses to determine which assumptions have the greatest impact on the model, or documented why it used management assumptions rather than available data. By not fully implementing certain practices identified in FASAB guidance that GAO believes represent sound internal controls for models, Ginnie Mae's model may not use critical data which could affect the agency's ability to provide well-informed budgetary cost estimates and financial statements. This may limit Ginnie Mae's ability to accurately report to the Congress the extent to which its programs represent a financial exposure to the government. Ginnie Mae should enhance the model it uses to forecast cash flows for the program by (1) assessing potential data sources, (2) conducting sensitivity analyses, and (3) assessing and documenting its modeling approaches and reasons for using management assumptions, among others. In written comments, Ginnie Mae agreed with GAO's recommendation to conduct sensitivity analyses, but neither agreed nor disagreed with the other recommendations.
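A sensitivity analysis of the kind recommended here varies one model assumption at a time and observes the effect on the resulting estimate, which identifies the assumptions the estimate is most sensitive to. The sketch below uses a deliberately simplified guarantee cash-flow model with hypothetical figures; it is not Ginnie Mae's actual model or its inputs.

```python
# Minimal sketch of an assumption-by-assumption sensitivity analysis on a
# credit program cash-flow model. All figures are hypothetical illustrations,
# not Ginnie Mae's actual model inputs.

def npv(cash_flows, discount_rate):
    """Net present value of a list of annual cash flows (year 0 first)."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

def subsidy_estimate(default_rate, guarantee_fees=6.0, loss_given_default=100.0,
                     years=10, discount_rate=0.03):
    """Guarantee fees received each year, less expected default losses."""
    annual_net = guarantee_fees - default_rate * loss_given_default
    return npv([annual_net] * years, discount_rate)

# Vary the default-rate assumption around a base case to see how strongly
# the estimate responds to that single input.
for rate in (0.01, 0.03, 0.05):
    print(f"default rate {rate:.0%}: estimate {subsidy_estimate(rate):8.1f}")
```

In a full analysis, each major assumption (default rates, prepayment speeds, recovery rates) would be varied in turn, and the assumptions producing the largest swings would be the ones to prioritize for better data and documentation.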
Vanuatu consists of 83 islands spread over hundreds of miles of ocean in the South Pacific, 1,300 miles northeast of Sydney, Australia. About 39 percent of the population is concentrated on the islands of Santo and Efate. Vanuatu’s capital, Port Vila, is on Efate, and Vanuatu’s only other urban center, Luganville, is on Santo. In the past decade, Vanuatu’s real GDP growth averaged 2 percent, although more rapid population growth led to a decline in per capita GDP over the same period. Average growth of real GDP per capita was negative from 1993 to 2005. An estimated 40 percent of Vanuatu’s population of about 207,000 has an income below the international poverty line of $1 per day. Agriculture and tourism are the principal productive sectors of Vanuatu’s economy, contributing approximately 15 percent and 19 percent to GDP, respectively. Although agriculture represents a relatively small share of Vanuatu’s overall economy, approximately 80 percent of Vanuatu’s residents live in rural areas and depend on subsistence agriculture for food and shelter. The tourism sector is dominated by expatriates of foreign countries living in Vanuatu, who also predominate in other formal sectors of the economy such as plantation agriculture and retail trade. On May 6, 2004, MCC determined that Vanuatu was eligible to submit a compact proposal for Millennium Challenge Account funding. Vanuatu’s proposal identified transportation infrastructure as a key constraint to private-sector development. The timeline in figure 1 shows the development and implementation of the Vanuatu proposal and compact. The $65.7 million Vanuatu compact includes $54.5 million for the rehabilitation or construction of 11 transportation infrastructure assets on 8 of Vanuatu’s 83 islands, including roads, wharves, an airstrip, and warehouses (see fig. 2). 
The compact also includes $6.2 million for an institutional strengthening program to increase the capacity of the Vanuatu Public Works Department (PWD) to maintain transportation infrastructure. The remaining $5 million is for program management and monitoring and evaluation. More than half of the compact, $37 million, is budgeted for three road projects on Santo and Efate islands. The compact provides for upgrading existing roads on both islands; the compact also includes five new bridges for an existing road on Santo. MCC’s compact with Vanuatu and congressional notification state that the compact will have a transformational impact on Vanuatu’s economic development, increasing average per capita income by approximately $200—15 percent—by 2010 and increasing total GDP by “an additional 3 percent a year.” MCC’s investment memo further quantifies the per capita income increase as $488—37 percent—by 2015. The compact and the congressional notification also state that the compact will provide benefits to approximately 65,000 poor, rural inhabitants (see fig. 3). In projecting the impact of the Vanuatu compact, MCC estimated the benefits and costs of the proposed infrastructure improvements. MCC also estimated the number of beneficiaries within a defined catchment area— that is, the geographic area in which benefits may be expected to accrue. MCC used the estimated benefits and costs to calculate the compact’s ERR and impact on Vanuatu’s GDP and per capita income. MCC’s analysis determined that the compact will reduce transportation costs and improve the reliability of access to transportation services for poor, rural agricultural producers and providers of tourism-related goods and services and that these benefits will, in turn, lead to increases in per capita income and GDP and reduction in poverty. 
MCC projects several direct and induced benefits from the compact’s infrastructure improvement projects over a 20-year period, beginning in full in 2008 or 2009 and increasing by at least 3 percent every year. Direct benefits. MCC projects that direct benefits will include, for example, construction spending, reduced transportation costs, and time saved in transit on the improved roads. Induced benefits. MCC projects that induced benefits from tourism and agriculture will include, for example, increased growth in Vanuatu tourism, tourist spending, and hotel occupancy and increased crop, livestock, and fisheries production. Figure 4 illustrates MCC’s logic in projecting the compact’s impact. MCC expects compact benefits to flow from different sources, depending on the project and its location. In Efate, the Ring Road is expected to provide direct benefits from decreased road user costs and induced benefits through tourism and foreign resident spending. In Santo, MCC anticipates similar benefits as well as the induced benefit of increased agricultural production. On other islands, where tourism is not as developed, MCC expects benefits to derive primarily from user cost savings and increased agriculture. To calculate construction and maintenance costs for the transportation infrastructure projects, MCC used existing cost estimates prepared for the government of Vanuatu and for another donor as well as data from the Vanuatu PWD. To estimate the number of poor, rural beneficiaries, MCC used Vanuatu maps to identify villages in the catchment area and used the 1999 Vanuatu National Population and Housing Census to determine the number of persons living in those villages. In all, MCC calculated that approximately 65,000 poor, rural people on the eight islands would benefit from MCC projects. 
On the basis of the costs and benefits projected over a 20-year period, MCC calculated three summaries of the compact’s impact: its ERR, effect on per capita income, and effect on GDP. MCC projected an overall compact ERR of 24.7 percent over 20 years. In projecting the compact’s impact on Vanuatu’s per capita income, MCC used a baseline per capita income of $1,326 for 2005. MCC also prepared a sensitivity analysis to assess how a range of possible outcomes would affect compact results. MCC’s tests included a 1-year delay of the start date for accrued benefits; a 20 percent increase of all costs; a 20 percent decrease of all benefits; and a “stress test,” with a 20 percent increase of all costs and a 20 percent decrease of all benefits. MCC calculated a best-case compact ERR of 30.2 percent and a worst-case compact ERR of 13.9 percent. MCC’s public portrayal of the Vanuatu compact’s projected effects on per capita income and on GDP suggests greater impact than its analysis supports. In addition, MCC’s portrayal of the compact’s projected impact on poverty does not identify the proportion of benefits that will accrue to the rural poor. Impact on per capita income. In the compact and the congressional notification, MCC states that the transportation infrastructure project is expected to increase “average income per capita (in real terms) by approximately $200, or 15 percent of current income per capita, by 2010.” MCC’s investment memo states that the compact will cause per capita income to increase by $488, or 37 percent, by 2015. These statements suggest that as a result of the program, average incomes in Vanuatu will be 15 percent higher in 2010 and 37 percent higher in 2015 than they would be without the compact. However, MCC’s underlying data show that these percentages represent the sum of increases from per capita income in 2005 that MCC projects for each year. 
For example, according to MCC’s data, Vanuatu’s per capita income in a given year between 2006 and 2010 will range from about 2 percent to almost 4 percent higher than in 2005; in its statements, MCC sums these percentages as 15 percent without stating that this percentage is a cumulative increase from 2005. Our analysis of MCC’s data shows that actual gains in per capita income, relative to income in 2005, would be $51, or 3.9 percent, in 2010 and $61, or 4.6 percent, in 2015 (see fig. 5). Figure 6 further illustrates MCC’s methodology in projecting the compact’s impact on per capita income levels for 2010 and 2015. Impact on GDP. Like its portrayal of the projected impact on per capita income, MCC’s portrayal of the projected impact on GDP is not supported by the underlying data. In the compact and the 2006 congressional notification, MCC states that the compact will have a transformational effect on Vanuatu’s economy, causing GDP to “increase by an additional 3 percent a year.” Given the GDP growth rate of about 3 percent that MCC expects in Vanuatu without the compact, MCC’s statement of a transformational effect suggests that the GDP growth rate will rise to about 6 percent. However, MCC’s underlying data show that although Vanuatu’s GDP growth rate will rise to about 6 percent in 2007, in subsequent years the GDP growth rate will revert to roughly the rate MCC assumes would occur without the compact, approximately 3 percent (see fig. 7). Although MCC’s data show that the compact will result in a higher level (i.e., dollar value) of GDP, the data do not show a transformational increase to the GDP growth rate. Impact on poverty. MCC’s portrayal of the compact’s projected impact on poverty does not identify the proportion of the financial benefits that will accrue to the rural poor. 
In the compact and the congressional notification, MCC states that the program is expected to benefit “approximately 65,000 poor, rural inhabitants living nearby and using the roads to access markets and social services.” In its underlying documentation, MCC expects 57 percent of the monetary benefits to accrue to other beneficiaries, including expatriate tourism services providers, transport providers, government, and local businesses; 43 percent is expected to go to the local population, which MCC defines as “local producers, local consumers and inhabitants of remote communities” (see fig. 8). However, MCC does not establish the proportion of local-population benefits that will go to the 65,000 poor, rural beneficiaries. Our analysis shows that risks related to construction costs, timing of benefits, project maintenance, induced benefits, and efficiency gains may lessen the Vanuatu compact’s projected impact on poverty reduction and economic growth. Accounting for these risks could reduce the overall compact ERR. Construction costs. Although MCC considered the risk of construction cost increases, the contingencies used in its calculations may not be sufficient to cover actual construction costs. Cost estimate documentation for 5 of MCC’s 11 construction projects shows that these estimates include design contingencies of 20 percent. However, cost overruns of more than 20 percent occur in many transportation projects, and as MCC’s analysis notes, the risk of excessive cost overruns is significant in a small country such as Vanuatu. Any construction cost overrun must be made up within the Vanuatu compact budget by reducing the scope, and therefore the benefits, of the compact projects; reduced project benefits would in turn reduce the compact’s ERR and effects on per capita income and GDP. Timing of benefits. 
Although MCC’s analysis assumes compact benefits from 2008 or 2009—shortly after the end of project construction—we found that benefits are likely to accrue more slowly. Our document review and discussions with tourism services providers and agricultural and timber producers suggest that these businesses will likely react gradually to any increased market opportunities resulting from MCC’s projects, in part because of constraints to expanding economic activity. In addition, MCC assumes that all construction spending will occur in the first year, instead of phasing the benefits from this spending over the multiyear construction schedule. Project maintenance. Uncertainty about the maintenance of completed transportation infrastructure projects after 2011 may affect the compact’s projected benefits. Vanuatu’s record of road maintenance is poor. According to World Bank and Asian Development Bank officials, continuing donor involvement is needed to ensure the maintenance and sustainability of completed projects. However, although MCC has budgeted $6.2 million for institutional strengthening of the Vanuatu PWD, MCC has no means of ensuring the maintenance of completed projects after the compact expires in 2011; the Millennium Challenge Act limits compacts to 5 years. Poor maintenance performance will reduce the benefits projected in the MCC compact. Induced benefits. The compact’s induced benefits depend on the response of Vanuatu tourism providers and agricultural producers. However, constraints affecting these economic sectors may prevent them from expanding as MCC projects. Limited response to the compact by tourism providers and agricultural producers would have a significant impact on compact benefits. Efficiency gains. MCC counts efficiency gains—such as time saved because of better roads—as compact benefits. 
However, although efficiency gains could improve social welfare, they may not lead to changes in per capita income or GDP or be directly measurable as net additions to the economy. Accounting for these risks could reduce the overall compact ERR from 24.2 percent, as projected by MCC, to between 5.5 percent and 16.5 percent (see table 1). MCC’s public portrayal of the Vanuatu compact’s projected benefits— particularly the effect on per capita income—suggests a greater impact than MCC’s underlying data and analysis support and can be understood only by reviewing source documents and spreadsheets that are not publicly available. As a result, MCC’s statements may foster unrealistic expectations of the compact’s impact in Vanuatu. For example, by suggesting that per capita incomes will increase so quickly, MCC suggests that its compact will produce sustainable growth that other donors to Vanuatu have not been able to achieve. The gaps between MCC’s statements about, and underlying analysis of, the Vanuatu compact also raise questions about other MCC compacts’ projections of a transformational impact on country economies or economic sectors. Without accurate portrayals of its compacts’ projected benefits, the extent to which MCC’s compacts are likely to further its goals of poverty reduction and economic growth cannot be accurately evaluated. In addition, the economic analysis underlying MCC’s statements does not reflect the time required to improve Vanuatu’s transportation infrastructure and for the economy to respond and does not fully account for other risks that could substantially reduce compact benefits. 
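The risk adjustments described above amount to a sensitivity analysis on the compact's rate of return. The sketch below shows, with purely hypothetical cash flows (the up-front cost, benefit level, and growth rate are illustrative assumptions, not MCC's actual model), how stressing costs and benefits moves an ERR computed as an internal rate of return.

```python
# Hedged sketch of how an economic rate of return (ERR) responds to the kinds
# of stresses discussed above: higher construction costs and lower benefits.
# The cash-flow stream is hypothetical, not MCC's actual Vanuatu model.

def npv(cash_flows, rate):
    """Net present value of annual cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def err(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return by bisection: the rate at which NPV = 0.
    Assumes a conventional stream (costs first, then benefits)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(cash_flows, mid) > 0:
            lo = mid  # NPV still positive, so the break-even rate is higher
        else:
            hi = mid
    return (lo + hi) / 2

# Base case: a 65.7 cost up front, then 20 years of benefits growing 3% a year.
base = [-65.7] + [14.0 * 1.03 ** t for t in range(20)]
# Stress test in the style MCC used: costs +20 percent, benefits -20 percent.
stressed = [-65.7 * 1.2] + [14.0 * 0.8 * 1.03 ** t for t in range(20)]

print(f"base ERR:     {err(base):.1%}")
print(f"stressed ERR: {err(stressed):.1%}")
```

Adding the further risks GAO identifies, such as phasing benefits in gradually and discounting unmonetizable efficiency gains, would push the stressed rate down further, which is the pattern behind the 5.5 to 16.5 percent range in table 1.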
In our report, we recommend that the CEO of MCC take the following actions: revise the public reporting of the Vanuatu compact’s projected impact to clearly represent the underlying data and analysis; assess whether similar statements in other compacts accurately reflect the underlying data and analysis; and improve its economic analysis by phasing the costs and benefits in compact ERR calculations and by more fully accounting for risks such as those related to continuing maintenance, induced benefits, and monetized efficiency gains as part of sensitivity analysis. In comments on a draft of our report, MCC did not directly acknowledge our recommendations. MCC acknowledged that its use of projected cumulative compact impact on income and growth was misleading but asserted that it had no intention to mislead and that its portrayal of projected compact benefits was factually correct. MCC questioned our finding that its underlying data and analysis do not support its portrayal of compact benefits and our characterization of the program’s risks. (See app. VI of our report for MCC comments and our response.) Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact me at (202) 512-3149 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the person named above, Emil Friberg, Jr. (Assistant Director), Gergana Danailova-Trainor, Reid Lowe, Angie Nichols-Friedman, Michael Simon, and Seyda Wentworth made key contributions to this statement. Also, David Dornisch, Etana Finkler, Ernie Jackson, and Tom McCool provided technical assistance. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In January 2004, Congress established the Millennium Challenge Corporation (MCC) for foreign assistance. Congress has appropriated almost $6 billion to MCC. As of March 2007, MCC had signed almost $3 billion in compacts with 11 countries, including a 5-year, $65.7 million compact with Vanuatu. MCC states that the Vanuatu compact will have a transformational effect on the country's economy, increasing per capita income and GDP and benefiting 65,000 poor, rural people. This testimony summarizes a July 2007 report (GAO-07-909) examining (1) MCC's methods of projecting economic benefits, (2) MCC's portrayal and analysis of the projected benefits, and (3) risks that may affect the compact's impact. To address these objectives, GAO reviewed MCC's analyses and met with officials and business owners in Vanuatu as well as with other donors. In its July 2007 report, GAO recommended that the Chief Executive Officer of MCC revise the public reporting of the Vanuatu compact's projected impact; assess whether similar reporting in other compacts accurately reflects underlying analyses; and improve its economic analyses by more fully accounting for risks to project benefits. MCC did not directly address GAO's recommendations but commented that it had not intended to make misleading statements and that its portrayal of projected results was factual and consistent with underlying data. MCC projects that the Vanuatu compact's transportation infrastructure projects will provide direct benefits such as reduced transportation costs and induced benefits from growth in tourism and agriculture. MCC estimated the costs and benefits over 20 years, with benefits beginning in full in 2008 or 2009 and growing each year, and it counted poor, rural beneficiaries by defining the area where benefits were likely to accrue. Using projected benefits and costs, MCC calculated the compact's economic rate of return (ERR) and its effects on Vanuatu's gross domestic product (GDP) and per capita income. 
MCC's portrayal of the projected impact does not reflect its underlying data. MCC states that per capita income will increase by approximately $200, or 15 percent, by 2010 and by $488, or 37 percent, by 2015. However, MCC's underlying data show that these figures represent the sum of individual years' gains in per capita income relative to 2005 and that actual gains will be $51, or 3.9 percent, in 2010 and $61, or 4.6 percent, in 2015. MCC also states that GDP will increase by an additional 3 percent a year, but its data show that after GDP growth of 6 percent in 2007, the economy's growth will continue at about 3 percent, as it would without the compact. MCC states that the compact will benefit approximately 65,000 poor, rural inhabitants, but this statement does not identify the financial benefits that accrue to the rural poor or reflect its own analysis that 57 percent of benefits go to others. We identified five key risks that could affect the compact's projected impacts. (1) Cost estimate contingencies may not be sufficient to cover project overruns. (2) Compact benefits will likely accrue more slowly than MCC projected. (3) Benefit estimates assume continued maintenance, but MCC's ability to ensure maintenance will end in 2011, and Vanuatu's maintenance record is poor. (4) Induced benefits depend on businesses' and residents' response to new opportunities. (5) Efficiency gains, such as time saved in transit, may not increase per capita income. Our analysis of these areas of risk illustrates the extent to which MCC's projections are dependent on assumptions of immediate realization of benefits, long-term maintenance, realization of induced benefits, and benefits from efficiency gains.
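The gap between the cumulative-sum presentation and the actual single-year gain can be illustrated with a few lines of arithmetic. The income path below is hypothetical, chosen only to mimic the pattern GAO describes (annual gains of roughly 2 to 4 percent over the 2005 baseline); it is not MCC's underlying data.

```python
# Illustration of the reporting issue GAO identified: summing each year's gain
# over a fixed 2005 baseline is not the same as the gain in any single year.
# The gains below are hypothetical, chosen to mimic the pattern GAO describes.

baseline = 1326  # MCC's 2005 per capita income baseline, in dollars

# Hypothetical per capita income gains over the 2005 baseline, 2006-2010.
annual_gains = {2006: 27, 2007: 40, 2008: 45, 2009: 49, 2010: 51}

cumulative = sum(annual_gains.values())  # the "$200 by 2010" style figure
single_year = annual_gains[2010]         # the actual gain in 2010 alone

print(f"sum of yearly gains 2006-2010: ${cumulative} "
      f"({cumulative / baseline:.0%} of baseline)")
print(f"actual 2010 gain:              ${single_year} "
      f"({single_year / baseline:.1%} of baseline)")
```

With this illustrative path, the summed figure comes to about 16 percent of the baseline while the true 2010 gain is under 4 percent, which is the same order of overstatement GAO found in MCC's $200 versus $51 figures.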
Product support refers to the support functions required to field and maintain the readiness and operational capability of major weapon systems, subsystems, and components, including all functions related to a weapon system’s readiness. O&S costs historically account for approximately 70 percent of a weapon system’s total life-cycle cost and include costs for repair parts, maintenance, contract services, engineering support, and personnel, among other things. Weapon systems are costly to sustain in part because they often incorporate a technologically complex array of subsystems and components and need expensive spare parts and logistics support to meet required readiness levels. In addition, military operations in such locations as Afghanistan have increased the wear and tear on many weapon systems and escalated their O&S costs well beyond peacetime levels. Many of the key decisions affecting a weapon system’s O&S costs are made while the system is still in the acquisition process. For example, acquisition-based decisions about the design, materials, and technology for a system affect the logistics support that is eventually needed to keep that system available and ready after it is fielded. Controlling O&S costs is critical to ensure future affordability of defense budgets. In short, the acquisition of a weapon system today involves a significant financial commitment to that system over its entire life cycle, a period that may last several decades from the system’s development to the time it is removed from DOD’s inventory. For example, DOD estimated in 2012 that life-cycle O&S costs for the F-35 Joint Strike Fighter—being acquired for the Air Force, Navy, and Marines—would be about $1.1 trillion, in addition to an estimated $391.1 billion in total acquisition costs. 
Under Secretary of Defense for Acquisition, Technology and Logistics, “Better Buying Power: Mandate for Restoring Affordability and Productivity in Defense Spending,” memorandum (June 28, 2010); “Better Buying Power: Guidance for Obtaining Greater Efficiency and Productivity in Defense Spending,” memorandum (Sept. 14, 2010); “Implementation Directive for Better Buying Power—Obtaining Greater Efficiency and Productivity in Defense Spending,” memorandum (Nov. 3, 2010); “Better Buying Power 2.0: Continuing the Pursuit for Greater Efficiency and Productivity in Defense Spending,” memorandum (Nov. 13, 2012). Consistent with section 2337 and DOD guidance, PSMs are assigned to major weapon systems to provide oversight and management and to serve as advisors to Program Managers on matters related to product support, such as weapon system sustainment. According to DOD’s PSM Guidebook, DOD must continue to improve product support, with a specific focus on increasing readiness and enabling better cost control. DOD guidance describes a PSM as the individual who provides weapon systems product support subject-matter expertise to the Program Manager for the execution of his or her total life-cycle management responsibilities. The Program Manager is assigned life-cycle management responsibility and is accountable for the implementation, management, and oversight of all activities associated with the development, production, sustainment, and disposal of a weapon system across its life cycle. The Program Manager’s responsibilities for oversight and management of the product support function are typically delegated to a PSM, who leads the development, implementation, and top-level integration and management of all sources of support to meet warfighter sustainment and readiness requirements. This organization is displayed in figure 1. 
The Under Secretary of Defense for Acquisition, Technology and Logistics (USD(AT&L)) serves as the Defense Acquisition Executive and is the individual responsible for supervising the defense acquisition system. The USD(AT&L) has policy and procedural authority for the defense acquisition system, is the principal acquisition official of the department, and is the acquisition advisor to the Secretary of Defense. For acquisition matters, the USD(AT&L) generally takes precedence in DOD, including over the secretaries of the military departments, after the Secretary of Defense and Deputy Secretary of Defense. The USD(AT&L)’s authority includes directing the services and defense agencies on acquisition matters and making milestone decisions for major defense acquisition programs. Under the USD(AT&L), and subject to the authority, direction, and control of the Secretary of the relevant military department, each of the military services has officials designated as Component or Service Acquisition Executives who are responsible for acquisition functions within their services. A Program Executive Officer— a military or civilian official who has responsibility for directing assigned programs—reports to and receives guidance and direction from the Service Acquisition Executive. The Program Executive Officer supervises a Program Manager, who is the individual responsible for accomplishing a program’s objectives for development, production, and sustainment to meet the user’s operational needs. The PSM reports to the Program Manager. Under the PSM, there may be a need for Product Support Integrators, who are assigned within the scope, direction, and oversight of the PSM, and who may be either a government or commercial entity. Product Support Integrators are tasked with integrating sources of support, and may use Product Support Providers to accomplish this role. Product Support Providers are tasked with providing specific product support functions. 
A tiered support structure is thus established wherein the PSM (acting on behalf of the Program Manager) may effectively delegate some levels of responsibility for product support implementation and oversight to Product Support Integrators. The Product Support Integrators, in turn, ensure that the performance requirements to meet their arrangements are accomplished by the Product Support Providers, who perform product support activities on major weapon systems. However, as noted by the PSM guidebook, in all cases the PSM is accountable to the Program Manager for the support outcome. The PSM guidebook includes depots and original equipment manufacturers among the most likely candidates for both the Product Support Integrator and Product Support Provider roles.

Army. The Army’s principal materiel command, the Army Materiel Command (AMC), works closely with program executive offices, the Army acquisition executive, industry, academia, and other related agencies to develop, acquire, and sustain materiel for the Army. AMC’s maintenance depots and arsenals overhaul, modernize, and upgrade major weapon systems.

Navy and Marine Corps. The Assistant Secretary of the Navy for Research, Development, and Acquisition serves as the Component Acquisition Executive and is responsible for all research, development, and acquisition within the Department of the Navy. In order to address a diverse set of needs, the Department of the Navy comprises components known as Systems Commands. These include Naval Sea Systems Command, Naval Air Systems Command, and Space and Naval Warfare Systems Command, among others. Marine Corps Systems Command serves as the Department of the Navy enterprise acquisition and life-cycle systems manager for the Marine Corps. Marine Corps Systems Command provides competency resources to the program executive officer, including financial management, engineering, contracting, logistics, and program management.
These Systems Commands oversee various acquisition programs, such as for ships and aircraft, and these programs are responsible for the management of their respective systems’ life-cycle support.

Air Force. The Office of the Assistant Secretary of the Air Force for Acquisition is responsible for the integrated life-cycle management of systems from the time the system enters into the defense acquisition management system until system retirement and disposal. Individual program executive officers beneath this office are then responsible for the total life-cycle management of an assigned portfolio of programs. Air Force Materiel Command and Air Force Space Command support these efforts by providing technical assistance, infrastructure, manpower, test capabilities, laboratory support, professional education, training and development, and management tools.

DOD and the services have taken steps to implement PSMs for major weapon systems and have described them as a valuable resource in managing product support, but certain aspects of the implementation process remain incomplete. DOD has assigned PSMs to almost all of its major weapon systems and has developed PSM training courses, but DOD, in coordination with the military services, has not developed a plan—to include objectives, milestones, and resources—to implement and institutionalize a comprehensive PSM career path. The services have identified and assigned PSMs to almost all of their major weapon systems. According to the most current data available from the military services, 325 of 332 PSM position requirements across DOD for major weapon systems—approximately 98 percent—were filled. In addition, DOD has designated the PSM position as a key leadership position for ACAT I level systems. In accordance with statute and DOD policy, the PSM position for major defense acquisition programs is to be filled by a properly qualified military servicemember or full-time DOD employee.
Most of the PSMs are senior-level civilian personnel; the remaining positions are filled by military personnel. However, according to Navy and Air Force officials, in a few instances, the services have had to issue waivers to individuals to allow them to take PSM positions, because they did not have the necessary education, experience, or training to fill the position. OSD, military department headquarters, and PSM officials told us that PSMs are carrying out the duties identified in law. Moreover, PSMs we spoke with told us that they are performing many of the same duties that they performed in their previous positions as senior logisticians or in related fields. In addition to those duties, however, DOD officials told us that one of the changes to these officials’ prior responsibilities is the idea that support concepts should be evaluated periodically over a system’s life cycle; to this end, section 2337 requires that PSMs develop and implement a comprehensive product support strategy, and revalidate any business-case analysis performed in support of the strategy prior to each change or every 5 years. This requirement is met in part via the development of a document called a life-cycle sustainment plan. To help improve life-cycle product support, the Office of the USD(AT&L) has issued guidance that discusses how to develop a life-cycle sustainment plan and works with program offices to review these plans. Table 1 shows the number and characteristics of PSMs assigned to major weapon systems by service. OSD and the Defense Acquisition University have developed courses for PSMs on life-cycle product support and logistics management; however, DOD, in coordination with the military services, has not developed a plan—to include objectives, milestones, and resources—to implement a comprehensive PSM career path. 
For example, in 2011 DOD began offering a new course on life-cycle product support, among other courses, and the Defense Acquisition University is currently developing a new executive-level PSM course, which is expected to focus on PSMs’ lessons learned and on enhancing PSMs’ success in fielding and sustaining systems. Further, recognizing the importance of placing qualified individuals in PSM positions, in November 2013 the Office of the USD(AT&L) noted that it would establish a new set of qualification boards, whose task will be to prescreen personnel to qualify a pool of candidates to fill key leadership positions, including PSM positions. These boards are expected to identify individuals who are prepared to fill key leadership positions based on their training, education, and experience. This process will allow DOD and service leadership to create a pool of qualified personnel who are ready to fill these positions and assist in workforce talent management and succession planning. In addition, the Office of the Deputy Assistant Secretary of Defense for Materiel Readiness has also developed a PSM notional career path. Moreover, at the service level, the Army, Navy, Marine Corps, and Air Force have each taken some steps to create notional career paths for PSMs, as well as issuing guidance identifying training, experience, and other requirements.

Army. The Army’s 2012 Product Support Manager Concept of Operations calls for a defined career path for PSMs that targets progressive leadership growth, with focused education and experience requirements to shape and develop PSMs into future senior leaders and executives. It also outlines a “notional career roadmap” for the newly created PSM position. However, the Army notes in its Product Support Manager Concept of Operations that this roadmap is still in its infancy and states that there is currently no defined comprehensive career path in place to develop, train, and support future PSMs.
Furthermore, an Army official told us that, as of March 2014, the Army does not have a plan with actions, milestones, objectives, or resources dedicated to implementing a PSM career path. Yet, according to this official, the Army is actively working to address long-term PSM development and management planning issues through ongoing meetings on these topics.

Navy and Marine Corps. The Navy has also provided a draft “notional development career ladder” for life-cycle logistics to each of its various Systems Commands as a starting point for developing a PSM career path. Officials from one of the Navy’s Systems Commands told us that they are concerned about the future of, and succession planning for, PSM positions and that, to address this concern, the command is implementing the draft career ladder and using it to develop a draft talent-management document. According to a senior official within the Department of the Navy, the Systems Commands need to implement a fundamental career structure for PSMs, with specific learning objectives laid out. Additionally, according to Department of the Navy officials, while the Systems Commands have indicated that efforts are ongoing, a completion date for these efforts has not been determined. Moreover, according to these officials, the Department of the Navy does not currently have a plan with actions, milestones, objectives, or resources dedicated to implementing a PSM career path.

Air Force. The Air Force noted in October 2013, as part of a review of its life-cycle logisticians, that there was no clear “career progression path” or competency model to develop life-cycle logisticians. Recognizing these challenges, the Air Force embarked on a 2–3 year effort aimed at developing life-cycle logistics professionals.
As one of the initial short-term activities within this effort, in October 2013 the Air Force issued an Air Force Life Cycle Logistics (LCL) Workforce Guidebook, which includes a “notional career roadmap” for life-cycle logistics professionals. The Air Force also recently engaged in an effort to recode positions to increase the number of personnel available to fill life-cycle logistics positions. According to Air Force officials, however, there are not always enough personnel within the life-cycle logistics workforce to meet the Air Force’s needs. Further, while the Air Force has taken steps to address some of the initial challenges it identified and has developed an implementation plan with associated objectives, milestones, and resources, it has stated that it needs to do additional work to develop a clear understanding of the life-cycle logistics skills a PSM would require across a program’s life cycle and to design a new training curriculum to include logistics, engineering, finance, contracting, and acquisition.

Thus, DOD and all of the military services, in coordination with the Defense Acquisition University, have taken some initial steps in establishing a defined career path and the associated guidance or plans to develop, train, and support future PSMs. However, DOD, in coordination with the military services, has not developed a plan—to include objectives, milestones, and resources—to implement and institutionalize a comprehensive PSM career path. As noted above, each of the services has identified additional steps that remain to be taken to implement and institutionalize a comprehensive career path to develop, train, and support its future PSMs. Standard practices for project management call for agencies to conceptualize, define, and document specific goals and objectives in the planning process, along with the appropriate steps, milestones, time frames, and resources needed to achieve those results.
In addition, the John Warner National Defense Authorization Act for Fiscal Year 2007 established the goal for DOD and the military departments of ensuring that certain development- and acquisition-related positions for each major defense acquisition program be performed by a properly qualified member of the armed forces or full-time employee of DOD within 5 years from enactment, and required the Secretary of Defense to develop and begin implementation of a plan of action for recruiting, training, and ensuring appropriate career development of personnel to achieve this objective. The National Defense Authorization Act for Fiscal Year 2010 added PSMs to that list of positions. A similar provision was subsequently codified at section 1706 of Title 10, U.S. Code. DOD policy similarly directs that the PSM position for ACAT I and II systems be filled by a properly qualified and certified military servicemember or full-time DOD employee. Further, DOD Instruction 5000.66 requires the DOD components to provide education, training, and experience opportunities with the objective of developing a professional, agile, motivated workforce, and ensuring that individuals are qualified to perform the activities required of them in their positions. Although DOD and the services have taken steps to help ensure that individuals selected for PSM positions are qualified, each of the military services has identified additional steps that are necessary to implement a defined, comprehensive career path to develop, train, and support future PSMs. While there are individuals serving in the PSM role today for most major weapon systems, until a defined career path is finalized and institutionalized within DOD, including within each of the services, the department may not be well positioned to ensure that the services will be able to fill PSM positions with properly qualified personnel in the future.
DOD has issued guidance for implementing PSMs; however, a recent update to DOD’s guidance omits certain information, contains a potentially confusing description of responsibilities, and—according to service officials—is not sufficiently clear. Standards for Internal Control in the Federal Government states that federal agencies should, among other things, design and document internal control activities, such as policies and procedures, to help ensure compliance with applicable laws and regulations. In October 2010, DOD issued Directive-Type Memorandum (DTM) 10-015, which established the department’s policy to implement and institutionalize the requirement that PSMs be assigned to support each of its major weapon systems. Among other things, this document outlined the PSM’s duties and required that PSMs be certified in the life-cycle logistics career field, which includes fulfilling general educational, training, and experience requirements. The memorandum indicates that it was intended to be a provisional policy that would eventually be incorporated into the next update of its defense acquisition system guidance—DOD Instruction 5000.02—which describes the operation of the defense acquisition system, including product support. In November 2013, DOD issued an interim update to its defense acquisition system guidance that canceled and, according to the update, incorporated a number of memorandums, including the PSM-related DTM 10-015. However, the newly issued acquisition system instruction does not include all of the information from DTM 10-015. For example, the instruction does not list all of the responsibilities of a PSM. Although the instruction identifies PSMs among the key leadership positions for major defense acquisition programs, it does not include a statement that it is DOD policy for PSMs to be assigned to all major weapon systems. 
OSD officials told us that Interim DOD Instruction 5000.02 does not contain this information because instructions are meant to offer clarification of issues, not to recite what is already in statute. OSD officials also told us that the policy to assign PSMs to each major weapon system was now included in a separate memorandum issued on July 11, 2013, which is not cited within Interim DOD Instruction 5000.02. They said that there are no differences between the information on PSM assignment, roles, and responsibilities covered previously in DTM 10-015 and what is now covered in Interim DOD Instruction 5000.02, memorandums from July and November 2013, and the Defense Acquisition Guidebook. However, each of the military department headquarters offices responsible for implementing PSMs told us that the current guidance is not sufficiently clear when addressing product support and the implementation of PSMs. They stated that the interim guidance does not discuss PSMs at the same level of detail as DTM 10-015. Specifically, as previously mentioned, the responsibilities of PSMs are not listed in the new guidance. The instruction discusses the roles and responsibilities of the Program Manager at length, but only alludes to the responsibilities of PSMs, citing section 2337 of Title 10, U.S. Code and discussing the requirement to revalidate business-case analyses. The interim instruction also contains a potentially confusing provision and omits certain information that is important to the implementation of the PSM position. For example, it states that the Program Manager will develop and implement an affordable and effective performance-based product support strategy.
Although the Program Manager is ultimately responsible for accomplishing program objectives, including for the sustainment phase, and for developing and implementing performance-based logistics strategies in the context of sustainment planning, the responsibilities of the PSM in section 2337 include developing and implementing a comprehensive product support strategy for the weapon system. While DTM 10-015 specifically identified the responsibilities of the PSM, the interim instruction does not, which could result in confusion regarding the role of the PSM and the nature of the support provided to the Program Manager. Each of the military department headquarters offices responsible for implementing PSMs told us that they found the language from the canceled DTM 10-015 to be very useful as the services developed their own service-level policies and guidance to implement PSMs for their assigned major weapon systems. Service officials said that they believed there was value in having all of the PSM-related guidance in one document, so that current and future product support personnel would not have to refer to multiple documents. Officials from one of the military services added that a life-cycle logistician would now have to look up PSM-related policy and information in law, in Interim DOD Instruction 5000.02, and in the July 2013 memorandum instead of just referring to DTM 10-015—which clearly laid out that information in one document. In addition, these officials expressed concern that it was no longer clear who should assign PSMs. They noted that DTM 10-015 identified the Component Acquisition Executive as the individual responsible for identifying and assigning a PSM for every major weapon system. However, the officials noted that the interim instruction does not specify which individual or office is responsible for identifying and assigning a PSM.
Moreover, these officials expressed particular concern about institutionalizing the implementation of PSMs, noting that, unlike DOD instructions, memorandums like the July 2013 memorandum are not stored in a central repository. These officials told us that the institutional knowledge behind the evolving PSM-related guidance and policy would be lost, and they questioned whether new personnel would know where to find all of the PSM-related guidance. In the absence of clear and comprehensive guidance, DOD and military service officials may not understand which office or individual is responsible for identifying and assigning PSMs, and there may be an increased risk of DOD personnel confusing the responsibilities of Program Managers and PSMs. Further, without centralized guidance that serves to institutionalize the implementation of PSMs, DOD may be hindered in its ability to implement future PSMs for its major weapon systems. Each of the military departments has issued its own guidance for implementing PSMs, but the Army’s guidance on PSM implementation is currently unclear regarding responsibilities and reporting relationships for certain support personnel involved in the sustainment of weapon systems. For example, the Navy issued a memorandum, entitled Product Support Manager (PSM) Implementation, in May 2011 to discuss the requirement that major weapon systems be supported by the PSM who would provide weapon systems product support subject-matter expertise to the Program Manager. Similarly, in March 2013, the Air Force issued Air Force Instruction 63-101/20-101, Integrated Life Cycle Management, which incorporates various PSM requirements and responsibilities. Moreover, the Air Force issued a guidebook on life-cycle logistics in October 2013, which discusses the implementation and responsibilities of the PSM position within the Air Force. 
Government standards for internal control state that a good internal control environment requires that the agency’s organizational structure clearly define key areas of authority and responsibility and establish appropriate lines of reporting. The Army issued a memorandum to help implement its PSMs and also developed a PSM Concept of Operations, which identifies PSM responsibilities and establishes the Army’s framework for integrating the new PSM position into its organizational structure. This Concept of Operations gives PSMs—who reside organizationally under the Office of the Assistant Secretary of the Army for Acquisition, Logistics and Technology (ASA(ALT))—responsibility for total life-cycle product support of their assigned systems, including sustainment, in support of the Program Manager. However, Army Regulation 10-87—which predates the implementation of PSMs—notes AMC roles and responsibilities for sustainment and for integrated materiel life-cycle management in partnership with program executive offices and Program Managers. AMC continues to have a significant role in providing assistance to the Program Manager and PSM and in executing the sustainment support for major weapon systems. Figure 2 shows the relationship between ASA(ALT) and AMC for product support activities. AMC provides sustainment support in the form of personnel—consisting of AMC contractors or government logistics managers—who are sometimes assigned to ASA(ALT) programs to provide sustainment support to PSMs. While these personnel are “matrixed” to the program office, they are AMC personnel and, according to officials, therefore remain under AMC’s chain of command. Thus, the PSM provides input into their annual performance ratings but does not officially rate them and, according to Army officials, does not have direct authority over them. This lack of authority may make it difficult for PSMs to achieve some of their goals.
ASA(ALT) officials stated that major weapon systems program offices have raised the issue of the lack of clear roles and responsibilities of these personnel and, according to a senior AMC official, AMC discussed this issue with their personnel in an attempt to address this issue. However, in one specific example, an Army PSM we spoke with noted that while he has responsibilities as a PSM, he has no authority over the matrixed personnel from AMC who are assigned to support him and his assigned programs. He therefore faces the risk of these individuals not complying with his direction, which could hinder his ability to conduct his job as PSM. Specifically, according to this PSM, in 2012 the Joint Logistics Board (a senior-level governance body) provided guidance that maintenance work for one of his programs was to be conducted at a particular location, and he directed his AMC support personnel to stop pursuing and promoting their own depot with his program office’s resources. However, the life cycle management command and the AMC-matrixed personnel continued to pursue the work at their own depot. It took this Army PSM a year’s worth of effort going through the appropriate chain of command to ensure that the AMC personnel followed the Joint Logistics Board’s guidance for the designated location. As a result of these unclear reporting relationships, this PSM was unable to effectively plan or proactively manage his assigned weapon systems’ life-cycle sustainment decisions. According to senior Army officials, ASA(ALT) and AMC are working to resolve this issue and have held meetings to determine the best approach to enable PSMs to effectively perform their duties while simultaneously enabling AMC to perform its mission of providing sustainment support to the Army’s weapon systems’ life cycles. 
However, the Army has not yet issued guidance clarifying the roles and responsibilities of ASA(ALT) and AMC in light of the new requirement for PSMs to be assigned to major weapon systems—particularly for AMC personnel assigned to support ASA(ALT) program offices and for PSMs. The Army is currently drafting a revision to Army Regulation 700-127 and developing a new Department of the Army Pamphlet 700-127-1. According to Army officials, these publications will further define the Army policy and guidance on PSM responsibilities, relationships with AMC, and career-path development, among other items. According to an Army official, this regulation and pamphlet are planned to be published in June 2014. Yet, the Army has been working on this effort since March 2013 and has not finalized these documents over the last year due to delays, in part as a result of multiple reviews. Until the Army finalizes this guidance, which is expected to clarify the roles and responsibilities of ASA(ALT) and AMC with respect to matrixed personnel, Army PSMs and the AMC personnel who support them may lack clear reporting lines. Without clear guidance detailing responsibilities and reporting relationships for AMC support personnel involved in the sustainment of weapon systems, PSMs may be hindered in their ability to effectively manage and conduct their daily product support responsibilities.

DOD is not fully aware of how or to what extent PSMs are affecting life-cycle sustainment decisions for major weapon systems because it has not systematically collected or evaluated information on the effects of PSMs. In the absence of department- and service-wide information on the effects PSMs are having on life-cycle sustainment decisions, we interviewed product support personnel at 12 program offices, and program officials identified several good practices and challenges associated with the effects, if any, that PSMs are having on life-cycle sustainment decisions.
For example, one challenge we found was that some Army PSMs may not be able to fulfill their daily product support responsibilities because they lack visibility into how much sustainment funding their weapon systems will receive, including, to the extent possible, prior to the year of execution of funds.

DOD does not fully know how or to what extent PSMs are affecting life-cycle sustainment decisions because it is not systematically collecting or evaluating information on the implementation or effect of PSMs. Officials from OSD and each of the military department headquarters responsible for implementing PSMs told us that the PSM designation garners more respect than other similar product support positions have in the past and that it has elevated the importance of sustainment planning within weapon systems’ program offices. This was also the widespread consensus among product support personnel we spoke to—including all 12 PSMs and the 5 Program and Deputy Program Managers whom we interviewed. Over the years, OSD has engaged in several activities aimed at providing oversight, collecting some information on the effects that PSMs are having on life-cycle sustainment decisions, and recognizing the achievements of PSMs. For example, OSD officials stated that they review life-cycle sustainment plans created by PSMs to ensure that their assigned weapon system demonstrates continued reliability and performance, so as not to adversely affect the system’s readiness or O&S costs. In addition, these officials told us that the Office of the Assistant Secretary of Defense for Logistics and Materiel Readiness leads a quarterly logistics workforce meeting, comprising service representatives and other officials from DOD’s acquisition community, to discuss PSM-related life-cycle logistics initiatives and challenges. Since 2013, the USD(AT&L) has issued an annual award to highlight outstanding individual PSM performance across the services.
This award recognizes PSMs’ contributions to controlling increases in weapon system cost, addressing long-term affordability, and promoting industry competition and innovation. It also recognizes outstanding achievements in the development, implementation, and execution of affordable and effective product support strategies for weapon systems. According to guidance from the USD(AT&L), award recipients are selected from a small pool of candidate submissions based on the following criteria, among others: reducing life-cycle cost; significantly increasing current or future operational suitability; and developing, implementing, or executing effective and affordable product support arrangements for their assigned weapon systems. Officials from one of the military services told us that they have been asked by their senior leadership to develop objective measures to evaluate the effectiveness of current initiatives—including sustainment efforts for major weapon systems—in which PSMs play a key role. These officials mentioned that there may be various mechanisms with which to evaluate the effects that PSMs are having on their assigned major weapon systems’ life-cycle sustainment decisions. For instance, they stated that they currently review and evaluate the quality of life-cycle sustainment plans and business-case analyses, among other logistics assessments, and that continuing to conduct these types of reviews and evaluations—including evaluations on the effects of these efforts—may help them to better understand the extent to which PSMs are carrying out their responsibilities or are affecting life-cycle sustainment decisions for their assigned systems. Program evaluation guidance states that evaluations can play a key role in program planning, management, and oversight by providing feedback—on both program design and execution—to Program Managers, Congress, executive-branch policy officials, and the public. 
Additionally, this guidance indicates that outcome and impact evaluations are helpful in assessing (1) the extent to which a program achieves its outcome-oriented objectives and (2) the net effect of a program, by comparing the program’s outcomes with an estimate of what would have happened in the absence of the program. Such evaluations can also be useful for identifying various trends—such as good practices and challenges related to the effects PSMs are having on life-cycle sustainment decisions—to help enhance future product support efforts across the department. Although OSD and the military services have various product support efforts under way—including those cited above— in the years since the PSM legislation was enacted, DOD has not systematically collected and evaluated information on the effects, if any, that PSMs are having on life-cycle sustainment decisions for major weapon systems. Department and military service officials stated that DOD is still in the early stages of implementation, and it is therefore too early to conduct such an evaluation of the PSM program. These officials also stated that isolating the effects of a PSM is challenging because different factors may influence a PSM’s effects; the PSM position is one position of many that can affect decisions regarding life-cycle sustainment for a major weapon system, and a PSM reports directly to the Program Manager, who makes final decisions related to the PSM’s assigned system. However, based on good practices we have identified in our previous work, we believe that it is important to start an evaluation program as early as possible to collect baseline information against which future effectiveness could be measured. Moreover, OSD already collects some information on the effects of PSMs through the annual PSM award submissions and the documentation of some information regarding PSM initiatives at its quarterly logistics workforce meeting. 
Therefore, with PSMs now in place for most major weapon systems and with the existence of various PSM-led efforts, conducting evaluations of the effects PSMs are having on programmatic decision making at this stage of the implementation could help inform whether the PSM position—as it is currently being implemented—will help to improve product support, and whether changes are needed to guidance or other areas to enhance PSMs’ contributions. In the absence of department- or service-wide information systematically documenting the effects PSMs are having on life-cycle sustainment decisions, we conducted interviews with product support personnel assigned to 12 major weapon systems, and program offices identified several good practices being employed as well as several challenges that PSMs face. For example, in fiscal year 2011, a Virginia-class submarine PSM led an effort to conduct an analysis focused on reducing life-cycle sustainment costs by minimizing the time the system spends in depot maintenance, in order to maximize its availability for missions. As a result of this effort, the Virginia-class submarine program office has adopted this practice and now conducts similar analyses on a recurring basis. Additionally, the PSM assigned to the Abrams Tank is currently conducting several analyses on components that affect the sustainment of the Abrams Tank. Specifically, the Abrams Tank PSM is analyzing staffing information on both Abrams Tank variants—the first already in sustainment and the second approaching sustainment—to determine future staffing levels for the systems. This PSM is also examining warfighters’ total ownership costs to sustain the Abrams Tank, and the reliability of the system’s engine, to help reduce O&S costs. Army officials stated that once these efforts are completed, the Abrams Tank PSM will be able to conduct business-case analyses to determine if there is a more cost-effective approach to sustaining both variants. 
Similar predictive analysis and modeling tools are currently being developed by the PSM for the KC-46A Tanker aircraft. For instance, the PSM is developing a model to prioritize component overhaul processes based on the frequency, uniqueness, and cost of a repair. This PSM is also developing the analytical components of an internal analysis system that is aimed at correcting deficiencies in the performance and effectiveness of the KC-46A's scheduled and unscheduled maintenance programs. According to the PSM, this tool will also be used to gather and assess various engineering, logistics, and cost factors to make timely adjustments to the KC-46A's sustainment operations. In conducting interviews with product support personnel, program officials also identified challenges that may have prevented PSMs from making or influencing life-cycle sustainment decisions for their assigned weapon systems. For example, 4 of 12 PSMs we spoke with from 3 of the military services stated that they did not have sufficient sustainment funding to effectively conduct their daily product support responsibilities and manage sustainment decisions for their assigned major weapon systems. This has affected their ability to anticipate sustainment issues and manage potential risks regarding the reliability, availability, and readiness of their systems. Additionally, product support personnel we interviewed from the Army and Air Force told us that their respective services do not have enough product support personnel to fully support all major weapon systems and that, consequently, they performed not only their own PSM duties and responsibilities but also those of other logistics-related positions, such as senior logisticians, directors of logistics, and assistant product managers for logistics. Moreover, the shortage of funding and personnel led one of the services to assign multiple major weapon systems to two of its PSMs in order to ensure that each major weapon system is supported by a PSM.
According to these two PSMs, they were collectively assigned to support 17 major weapon systems and, as a result of not having enough product support personnel, they faced increased risks of not being able to effectively influence sustainment costs and prevent undesirable performance outcomes—such as low system availability and readiness rates—for their assigned systems. According to internal Army documentation, the Office of the ASA(ALT) has recognized that while program offices have the responsibility to sustain the systems they manage, they have little influence on how resources are allocated or executed. The Defense Acquisition Guidebook and the Army's PSM Concept of Operations both note the ultimate responsibility of the Program Manager for accomplishing program objectives over the life cycle of a system, including sustainment, and discuss the assistance provided by the PSM through product support expertise and oversight of product support activities. Army regulations note the involvement of AMC in sustainment planning and execution, including a role in the development of funding requirements. For example, according to Army Regulation 10-87, AMC provides integrated materiel life-cycle management of systems and equipment in partnership with program executive offices and Program Managers, and serves as the maintenance process owner for national-level sustainment. Army Regulation 70-1 discusses AMC support for program executive offices and Program Managers through oversight of AMC life-cycle management command development and submission of sustainment funding requirements. According to officials, AMC assists in life-cycle logistics planning and executes the product support activities planned by the Program Manager and PSM. Although funding requests are generated in collaboration, distribution of approved funding for execution is handled by AMC.
Moreover, ASA(ALT) and Army officials from two of six program offices expressed concern that Army PSMs may not be able to positively affect their assigned system's life-cycle sustainment decisions because PSMs lack information on sustainment funding decisions. Army PSMs from these offices stated that they have very little input into funding decisions related to the sustainment of their systems and said that it is a challenge for them to manage their assigned systems without greater visibility—specifically, knowledge prior to the year of execution of the funds, to the extent possible—into how much sustainment funding their programs will receive, because the Army's process for requesting and distributing sustainment funds is not transparent. According to ASA(ALT) officials, the PSM provides input into funding requests that are developed in support of the system and these funding requests are then vetted internally and submitted through the appropriate Army life-cycle management command for review and prioritization. Once the life-cycle management command completes its review and prioritization of the requested funds, AMC then conducts its review and prioritization to make the final command-level decision on the distribution of sustainment funding for the Army's major weapon systems. However, some Army officials we spoke with said that AMC does not consistently communicate with program offices about how it prioritizes competing funding requests and distributes sustainment funds. For example, some Army PSMs told us that they are often surprised when they receive less sustainment funding than they had anticipated in the year of execution of funds and must quickly shift sustainment funding provided for other efforts within their program to cover the shortage of sustainment funding for their assigned systems.
According to AMC officials, because their organization is responsible for sustaining all Army weapon systems, they can provide the strategic overview necessary to prioritize competing funding requests. These officials also told us that AMC is responsible for balancing the distribution of funding across all systems under sustainment based on the level of Headquarters Department of the Army funding provided to AMC. They noted that some of their life-cycle management commands have formed councils where they regularly discuss sustainment funding issues with program offices. However, these officials also acknowledged that some PSMs are not receiving complete information on the status of sustainment funding decisions in the year of execution of funds. In this regard, in fiscal year 2014 the Army began a pilot on one major weapon system with the goal of more closely tracking sustainment funding in an effort to help identify ways to provide more clarity and visibility on the resources distributed to the system. According to AMC officials, this should improve the transparency of resources for the PSMs to better manage their assigned major weapon systems. As previously stated, ASA(ALT) and AMC are continuing to work to clarify roles and responsibilities and have held high-level departmental meetings to determine the best approach to enable PSMs to effectively perform their duties while simultaneously enabling AMC to perform its mission of providing sustainment support to the Army's weapon systems. Furthermore, ASA(ALT) officials told us that the current process and supporting policies for prioritizing and managing sustainment funding should be updated to reflect PSM responsibilities. We discussed this issue with service officials and PSMs from the Navy, Marine Corps, and Air Force, and each said that this problem does not exist in their service.
They are aware in advance of the amount of sustainment funding they will receive for their programs and are able to plan accordingly. However, until the Army reviews the current process for requesting and distributing sustainment funding for major weapon systems and makes the adjustments necessary to ensure that PSMs have greater visibility over the allocation of sustainment funding their assigned weapon systems will receive—including prior to the year of execution of funds, to the extent possible—some PSMs in the Army may not be able to plan, proactively manage, or affect life-cycle sustainment decisions for their assigned systems. Since fiscal year 2010, DOD has made progress in implementing PSMs for its major weapon systems, and department officials and product support personnel have stated that the PSM designation garners more respect than other similar product support positions have in the past. While DOD and all of the services have taken some steps to develop a comprehensive career path and associated guidance to develop, train, and support future PSMs, DOD, in coordination with the military services, has not developed a plan—to include objectives, milestones, and resources—to implement and institutionalize a comprehensive PSM career path. Until DOD develops such a plan, the department may not be able to ensure that the services can fill PSM positions with properly qualified personnel in the future. Moreover, DOD guidance for implementing PSMs is not sufficiently clear to ensure effective implementation of PSMs across the services going forward. Without clear, comprehensive, and centralized implementation guidance, DOD may be hindered in its ability to implement future PSMs for its major weapon systems. 
Likewise, until the Army clarifies roles and responsibilities in its guidance for the sustainment portion of the life cycle for major weapon systems, PSMs may be hindered in their ability to effectively manage and conduct their daily product support responsibilities. Although the PSM program is relatively new, there is anecdotal evidence of the effects PSMs are having on life-cycle sustainment decisions for major weapon systems. While program officials we spoke with were able to identify several good practices and challenges facing PSMs, DOD is not well positioned to make changes or enhancements to the PSM program because it has yet to systematically collect or evaluate information on the effects, if any, that PSMs are having on their assigned systems’ sustainment decisions. One such change that DOD could have identified if it had been collecting evaluative information would be to examine the current process for making sustainment funding decisions in the Army to ensure that Army PSMs have greater visibility into the funding decisions affecting the sustainment of their systems, to the extent possible, including prior to the year of execution of funds. With PSMs now in place for almost all major weapon systems, information on the effects PSMs are having on life-cycle management and sustainment decisions could help inform DOD, the services, and Congress on the extent to which the PSM position is helping to improve product support efforts or whether changes are needed to guidance or to roles and responsibilities to enhance the contributions of PSMs. To help DOD improve the implementation of Product Support Managers (PSM), we recommend that the Secretary of Defense take the following five actions. 
To ensure the development of a sufficient cadre of qualified, trained personnel to meet future requirements for Product Support Managers (PSM), we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics (USD(AT&L))—in coordination with the Defense Acquisition University and the Secretaries of the Army, Navy, and Air Force—to develop and implement a plan with objectives, milestones, and resources to implement and institutionalize a comprehensive career path and associated guidance to develop, train, and support future PSMs. To better enable the military services to implement and institutionalize the roles and responsibilities of Product Support Managers (PSM), we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics (USD(AT&L))—in coordination with the Secretaries of the Army, Navy, and Air Force—to issue clear, comprehensive, centralized guidance regarding the roles and responsibilities of PSMs and the officials that assign them. To better enable Army Product Support Managers (PSM) to fulfill their product support responsibilities, we recommend that the Secretary of Defense direct the Secretary of the Army—in coordination with the Assistant Secretary of the Army for Acquisition, Logistics and Technology (ASA(ALT)) and the Commander of Army Materiel Command (AMC)—to clearly define Army-wide roles and responsibilities for the sustainment portion of the life cycle of major weapon systems, to include the reporting relationships of AMC support personnel assigned to Army weapon system program offices, by issuing new, or revising existing, Army guidance.
To help inform departmental and congressional oversight of the status of Product Support Manager (PSM) implementation and the influence, if any, that PSMs have in life-cycle sustainment decisions for major weapon systems, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics (USD(AT&L))—in conjunction with the Secretaries of the Army, Navy, and Air Force—to systematically collect and evaluate information on the effects, if any, that PSMs are having on life-cycle sustainment decisions for their assigned major weapon systems. To better enable Army Product Support Managers (PSM) to fulfill their daily product support responsibilities, including planning and proactively managing sustainment efforts for their assigned weapon systems, we recommend that the Secretary of Defense direct the Secretary of the Army—in coordination with the Assistant Secretary of the Army for Acquisition, Logistics and Technology (ASA(ALT)) and the Commander of Army Materiel Command (AMC)—to review the current process for requesting and distributing sustainment funding for major weapon systems and to take necessary actions to ensure that PSMs have greater visibility of the amount of sustainment funds their weapon systems will receive, including prior to the year of execution of funds, to the extent possible. In written comments on a draft of this report, DOD concurred with four of our recommendations and partially concurred with one recommendation. DOD's comments are reprinted in appendix IV. DOD also provided technical comments, which we have incorporated into our report where appropriate.
DOD concurred with our recommendation that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics—in coordination with the Defense Acquisition University and the Secretaries of the Army, Navy, and Air Force—to develop and implement a plan with objectives, milestones, and resources to implement and institutionalize a comprehensive career path and associated guidance to develop, train, and support future PSMs. DOD stated that the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics will work over the next year with the staffs of the Secretaries of the Army, Navy, and Air Force, along with the Defense Acquisition University and the Human Capital Initiatives Directorate via the Life Cycle Logistics Functional Integrated Product Team to define a methodology and plan for institutionalizing a comprehensive career path and associated guidance for developing, training, and supporting future PSMs. We agree that, if fully implemented, this action should address our recommendation. DOD also agreed with our recommendation that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics—in coordination with the Secretaries of the Army, Navy, and Air Force—to issue clear, comprehensive, centralized guidance regarding the roles and responsibilities of PSMs and the officials that assign them. DOD stated that the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics will work over the next year with the staffs of the Secretaries of the Army, Navy, and Air Force to develop clear, comprehensive, centralized guidance regarding the roles and responsibilities of PSMs and the officials that assign them. While DOD did not provide details on how it will develop such guidance, we agree that, if fully implemented, this action should address our recommendation. 
DOD partially concurred with our recommendation that the Secretary of Defense direct the Secretary of the Army—in coordination with the Assistant Secretary of the Army for Acquisition, Logistics and Technology and the Commander of Army Materiel Command—to clearly define Army-wide roles and responsibilities for the sustainment portion of the life cycle of major weapon systems, to include the reporting relationships of Army Materiel Command support personnel assigned to Army weapon system program offices, by issuing new, or revising existing, Army guidance. DOD stated that the Army sees no ambiguity in the Army-wide roles and responsibilities for the sustainment portion of the life cycle of major weapon systems, including the reporting requirements of Army Materiel Command personnel providing matrix support to the Program Managers. DOD further noted that the Army leadership has been coordinating the role of the PSM and is finalizing its capstone policy to solidify required changes as part of the statutory implementation. While our report acknowledges the Army is currently drafting a revision to Army Regulation 700-127 and developing a new Department of the Army Pamphlet 700-127-1, which Army officials told us will further define the Army policy and guidance on PSM responsibilities, relationships with Army Materiel Command, and career-path development, among other items, these documents have not yet been finalized. We also acknowledge in our report that the Army has been working on this guidance since March 2013, but note that it has not finalized these documents over the last year due to delays. We continue to believe that until the Army finalizes guidance that clarifies the roles and responsibilities of the program offices and Army Materiel Command with respect to matrixed personnel, Army PSMs and the Army Materiel Command personnel who support them may lack clear reporting lines.
DOD concurred with our recommendation that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics—in conjunction with the Secretaries of the Army, Navy, and Air Force—to systematically collect and evaluate information on the effects, if any, that PSMs are having on life-cycle sustainment decisions for their assigned major weapon systems. DOD stated that the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics will work over the next year with the staffs of the Secretaries of the Army, Navy, and Air Force to define a methodology and plan for systematically collecting and evaluating information on the effects, if any, that PSMs are having on the life-cycle sustainment decisions for their assigned major weapon systems. We agree that, if fully implemented, this action should address our recommendation. Finally, DOD concurred with our recommendation that the Secretary of Defense direct the Secretary of the Army—in coordination with the Assistant Secretary of the Army for Acquisition, Logistics and Technology and the Commander of Army Materiel Command—to review the current process for requesting and distributing sustainment funding for major weapon systems and to take necessary actions to ensure that PSMs have greater visibility of the amount of sustainment funds their weapon systems will receive including prior to the year of execution of funds, to the extent possible. DOD stated that Army Staff, in coordination with the Commander of Army Materiel Command, will work over the next year to review the current process for requesting and distributing sustainment funding for major weapon systems and take necessary actions to ensure that PSMs and all other stakeholders have greater visibility of the amount of sustainment funds their weapon systems will receive. We agree that, if fully implemented, this action should address our recommendation. 
We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; and the Commandant of the Marine Corps. The report also is available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5431 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. To determine what steps, if any, the Department of Defense (DOD) and the military services have taken to implement Product Support Managers (PSM) for major weapon systems, we collected and analyzed DOD and service data on the PSMs assigned to these systems. We also interviewed and obtained pertinent documents from acquisition, program management, and logistics officials—including PSMs—to understand how the PSM position has been implemented to date. These documents included DOD directives and instructions, Army regulations, memorandums, other guidance, and lists of assigned PSMs. To determine the extent to which DOD has evaluated the effects, if any, that PSMs are having on life-cycle sustainment decisions for major weapon systems, we spoke with Office of the Secretary of Defense (OSD), military department headquarters, and military service command officials. Additionally, we selected and interviewed a nongeneralizable sample of PSMs, program management, and other product support personnel assigned to a total of 12 major weapon systems to identify good practices that some PSMs have found helpful in enabling them to make or affect life-cycle sustainment decisions for major weapon systems as well as challenges that may have prevented PSMs from making or affecting such decisions.
In identifying a nonprobability sample of PSMs (and related program staff) to interview, we selected PSMs who were assigned to systems that reflected varied characteristics, such as military service, Acquisition Category (ACAT) level, acquisition phase, type of system (e.g., aviation, ground, naval), and total estimated system cost. The 12 systems we chose were: (1) the Army’s Abrams Tank; (2) the Army’s Thermal Weapon Sight, AN/PAS-13; (3) the Army’s Distributed Common Ground System; (4) the Army’s Long Range Advanced Scout Surveillance System; (5) the Army’s Counter Radio Controlled-Improvised Explosive Device Electronic Warfare Duke; (6) the Army’s Prophet Enhanced Spiral 1; (7) the Navy’s Virginia-class submarine; (8) the Navy’s Littoral Combat Ship; (9) the Marine Corps’ CH-53K Helicopter; (10) the Army and Marine Corps’ Joint Light Tactical Vehicle; (11) the Air Force’s KC-46A Tanker; and (12) the Air Force, Navy, and Marine Corps’ F-35 Program. From these interviews, we obtained more-in-depth information on the effects, if any, that PSMs have on life-cycle sustainment decisions. For more information on these systems, please see appendix III. The results from this nonprobability sample cannot be used to make inferences about all PSMs or the respective major weapon systems to which they were assigned, because a nonprobability sample may not reflect all characteristics of a population. However, this information provided a broad representation of PSMs’ perspectives on their position’s implementation status and their effects on life-cycle sustainment decisions. To obtain information on the overall size and cost of DOD’s ACAT I systems, we also analyzed data from DOD’s Selected Acquisition Reports and other information in the Defense Acquisition Management Information Retrieval Purview system. We obtained similar data for ACAT II systems, where available, that the services maintained on their respective systems. 
We assessed the reliability of the PSM-related data we obtained from DOD and the services, along with the information we obtained from the Defense Acquisition Management Information Retrieval Purview system, through questionnaires and interviews with knowledgeable officials and determined that these data were sufficiently reliable for the purposes of assessing the implementation of PSMs for major weapon systems and discussing the findings in this report. To address our reporting objectives, we visited or contacted knowledgeable officials and reviewed relevant documents from the following organizations:

Department of Defense
  Office of the Under Secretary of Defense for Acquisition, Technology and Logistics
  Office of the Assistant Secretary of Defense (Logistics and Materiel Readiness)
  Office of the Deputy Assistant Secretary of Defense (Systems Engineering)

Department of the Army
  Office of the Assistant Secretary of the Army for Acquisition, Logistics and Technology
  U.S. Army Deputy Assistant Secretary of the Army for Acquisition
  U.S. Army Communications-Electronics Command
  TACOM Life Cycle Management Command
  Army Program Management Office for Soldier, Sensors, and Lasers
  Army Program Executive Office Soldier
  Army Program Executive Office Intelligence, Electronic Warfare & Night Vision/Reconnaissance, Surveillance, and Target
    Long Range Advance Scout Surveillance System Program Office
    Counter Radio Controlled-Improvised Explosive Device Electronic Warfare Duke Program Office
    Distributed Common Ground System-Army Program Office
    Prophet Enhanced/Spiral 1 Program Office
  Army Program Executive Office Ground Combat Systems
    Abrams Tank Program Office

Department of the Navy
  Office of the Deputy Assistant Secretary of the Navy—Expeditionary
  Office of the Assistant Secretary of the Navy—Financial Management
  Deputy Assistant Secretary of the Navy—Management and Budget
  Assistant Secretary of the Navy Research Development and
  The Department of the Navy Director, Acquisition Career
  U.S. Naval Air Systems Command
    CH-53K Helicopter Program Office
  U.S. Naval Sea Systems Command
    NAVSEA 21
    Virginia-Class Submarines Program Office
  Program Executive Office Littoral Combat Ship
    Littoral Combat Ship Program Office
  Space and Naval Warfare Systems Command

U.S. Marine Corps
  U.S. Marine Corps Systems Command
  U.S. Marine Corps Acquisition Logistics and Product Support

Department of the Air Force
  U.S. Air Force Headquarters
  Office of the Assistant Secretary of the Air Force, Installations,
  Office of the Assistant Secretary of the Air Force, Acquisition
  U.S. Air Force KC-46A Tanker Program Office

Joint Program Offices
  Department of the Army and U.S. Marine Corps Joint Light Tactical Vehicle Program Office
  Department of the Navy, U.S. Marine Corps, and Department of the Air Force F-35 Joint Program Office

We conducted this performance audit from April 2013 through April 2014 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions, based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions, based on our audit objectives. We selected and interviewed a nongeneralizable sample of Product Support Managers (PSM), program management, and other product support personnel assigned to a total of 12 major weapon systems to identify good practices and challenges that may have helped or hindered PSMs in making or affecting life-cycle sustainment decisions for their assigned systems. This appendix contains descriptions of the 12 major weapon systems we selected. Each description contains information on the military service or services to which these systems belong, their respective Acquisition Category (ACAT) levels, the status of the system, and a brief description of the system. In addition to the contact named above, the following staff members made key contributions to this report: Alissa H. Czyz, Assistant Director; Jerome A. Brown; Yecenia C. Camarillo; Joanne Landesman; Michael C. Shaughnessy; Michael D. Silver; Amie M. Steele; Tristan T. To; and Matthew R. Young.

Defense Acquisitions: Where Should Reform Aim Next? GAO-14-145T. Washington, D.C.: October 29, 2013.

Defense Acquisitions: Goals and Associated Metrics Needed to Assess Progress in Improving Service Acquisition. GAO-13-634. Washington, D.C.: June 27, 2013.

Program Evaluation: Strategies to Facilitate Agencies' Use of Evaluation in Program Management and Policy Making. GAO-13-570. Washington, D.C.: June 26, 2013.

Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-13-294SP. Washington, D.C.: March 28, 2013.

Weapons Acquisition Reform: Reform Act Is Helping DOD Acquisition Programs Reduce Risk, but Implementation Challenges Remain. GAO-13-103. Washington, D.C.: December 14, 2012.
Defense Logistics: Improvements Needed to Enhance Oversight of Estimated Long-Term Costs for Operating and Supporting Major Weapon Systems. GAO-12-340. Washington, D.C.: February 2, 2012.

Defense Management: DOD Needs Better Information and Guidance to More Effectively Manage and Reduce Operating and Support Costs of Major Weapon Systems. GAO-10-717. Washington, D.C.: July 20, 2010.

Defense Acquisitions: Fundamental Changes Are Needed to Improve Weapon Program Outcomes. GAO-08-1159T. Washington, D.C.: September 25, 2008.

Defense Logistics: Opportunities to Improve the Army's and the Navy's Decision-making Process for Weapons System Support. GAO-02-306. Washington, D.C.: February 28, 2002.
DOD spends billions of dollars annually to sustain weapon systems. With the prospect of tighter defense budgets, DOD has placed more attention on controlling total life-cycle costs with initiatives aimed at ensuring that weapon systems are more affordable over the long term. Section 2337 of Title 10, U.S. Code, requires that each major weapon system be supported by a PSM and lays out the responsibilities of the PSM, including developing and implementing a comprehensive product support strategy for the system. GAO was asked to review DOD's progress in implementing PSMs for major weapon systems. This report examines (1) the steps, if any, that DOD and the military services have taken to implement PSMs for major weapon systems and (2) the extent to which DOD has evaluated the effects, if any, that PSMs are having on life-cycle sustainment decisions for their assigned systems. To conduct this review, GAO obtained information and interviewed product support personnel assigned to 12 of 332 major weapon systems that reflected varying characteristics—such as military service and system costs—and analyzed documentation from DOD and the military services. The Department of Defense (DOD) and the military services have taken steps to implement Product Support Managers (PSM) for major weapon systems, but certain aspects of the implementation process remain incomplete. The services have assigned PSMs to almost all of their major weapon systems. For example, as of February 2014, 325 of 332 PSM position requirements across DOD for major weapon systems—approximately 98 percent—were filled. While DOD and all of the services have taken some steps to develop a comprehensive career path and associated guidance to develop, train, and support future PSMs, DOD, in coordination with the military services, has not developed a plan—to include objectives, milestones, and resources—to implement and institutionalize a comprehensive PSM career path. 
Until DOD develops such a plan, it may not be able to ensure that the services can fill PSM positions with qualified personnel in the future. Moreover, DOD's PSM implementation guidance is not centralized, and future product support personnel may be hindered in their ability to easily access and implement such guidance. Also, because the latest DOD guidance lacks detail and contains a potentially unclear provision, personnel may confuse the responsibilities of Program Managers and PSMs. Without clear, comprehensive, and centralized implementation guidance, DOD may be hindered in its ability to institutionalize the implementation of PSMs for its major weapon systems going forward. Additionally, the Army has been working for a year to clarify the roles and responsibilities of certain product support personnel, who support PSMs, for the sustainment portion of the life cycle for major weapon systems. According to officials from the Office of the Assistant Secretary of the Army for Acquisition, Logistics and Technology, major weapon systems program offices have raised the issue of the lack of clear roles and responsibilities of these personnel, which has prompted senior-level Army meetings to attempt to resolve the issue. However, the Army has not yet finalized guidance that clarifies roles and responsibilities, which may hinder PSMs in their ability to effectively manage and conduct their daily product support responsibilities.

DOD does not fully know how or to what extent PSMs are affecting life-cycle sustainment decisions because it has not systematically collected and evaluated information on the effects PSMs are having on their assigned weapon systems. Program evaluation guidance states that evaluations can play a key role in program planning, management, and oversight by providing feedback to managers on programs.
Evaluations can show whether PSMs are conducting good practices that could be shared across the department as well as whether changes are needed to guidance or other areas to enhance the contributions of PSMs. In the absence of DOD information on the effects PSMs are having on life-cycle sustainment decisions, weapon system program offices identified several good practices and challenges associated with PSMs. For example, several PSMs told us that they had initiated analyses focused on reducing life-cycle sustainment costs for their assigned weapon systems. One challenge that Army headquarters officials noted was that PSMs do not have knowledge of how much sustainment funding their systems will receive prior to the year of execution of funds. Without greater visibility over the allocation of sustainment funding for their assigned weapon systems, these PSMs may be hindered in their ability to proactively manage and influence their system's life-cycle sustainment decisions. GAO recommends that DOD and the services develop a plan to institutionalize a career path for PSMs; issue clear, comprehensive, and centralized PSM implementation guidance; evaluate the effects PSMs have on sustainment decisions; and improve Army PSMs' visibility over sustainment funding. DOD generally agreed with the recommendations.
Pipelines transport the bulk of natural gas and hazardous liquids in the United States. Specifically, pipelines carry nearly all of the natural gas and about two-thirds of the crude oil and refined oil products. Three primary types of pipelines form a 2.4-million-mile network across the United States. Natural gas transmission pipelines transport natural gas over long distances from sources to communities (about 327,000 miles, primarily interstate). Natural gas distribution pipelines then carry natural gas from transmission pipelines to consumers (about 1.9 million miles, primarily intrastate). Hazardous liquid pipelines transport products, such as crude oil, to refineries and the refined product on to product terminals (about 161,000 miles, primarily interstate). Pipelines have an inherent safety advantage over other modes of freight transportation because they are primarily located underground, away from public contact. By one measure, the reduction in accidents overall, the hazardous liquid pipeline industry has greatly improved its safety record over the past 10 years. (See fig. 2.) From 1994 through 2003, accidents on interstate hazardous liquid pipelines decreased from 245 in 1994 to 126 in 2003, or almost 49 percent. These accidents resulted in an average of 2 fatalities and 8 injuries per year. However, the industry’s safety record has not improved for accidents with the greatest consequence—those resulting in a fatality, an injury, or in property damage of $50,000 or more—called serious accidents in this report. The number of serious accidents stayed about the same over the 10-year period. The lack of significant change over time in the number of serious accidents on interstate hazardous liquid pipelines may be due in part to the relatively small number of these accidents—about 88 every year. The accident rate—which considers the amount of product and the distance it is shipped—followed a similar pattern.
The accident rate for hazardous liquid pipelines overall decreased from about 0.41 accidents per billion ton-miles shipped in 1994 to about 0.25 accidents per billion ton-miles shipped in 2002. The accident rate for serious interstate hazardous liquid pipeline accidents stayed the same, averaging about 0.15 accidents per billion ton-miles shipped from 1994 through 2002. In contrast to hazardous liquid pipelines, accidents on interstate natural gas pipelines increased from 81 in 1994 to 97 in 2003, or almost 20 percent. (See fig. 3.) These accidents resulted in an average of 3 fatalities and 10 injuries per year. The number of serious accidents on interstate natural gas pipelines also increased, from 64 in 1994 to 84 in 2003, though they have fluctuated considerably over this period. Information on accident rates for natural gas pipelines is not available because of the lack of data on natural gas shipped through pipelines. As with hazardous liquid pipelines, the lack of significant change over time in the number of total accidents and serious accidents on interstate natural gas pipelines may be due in part to the relatively small number of these accidents—about 65 every year. OPS, within RSPA, administers the national regulatory program to ensure the safe operation of the nation’s natural gas and hazardous liquid pipelines. OPS has carried out its oversight responsibility by developing and issuing prescriptive minimum safety standards and enforcing these standards. Recently, the agency has developed additional standards that are risk based and focus on improving pipeline operators’ management of their operations rather than on meeting prescriptive requirements. In 1999, to reduce the risk of accidents attributable to human error, OPS issued a new operator qualification regulation requiring pipeline operators to develop programs for ensuring that individuals working on their pipeline systems are qualified to do so. 
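The trend figures above reduce to simple percent-change and rate arithmetic. A minimal sketch in Python (the function names are illustrative, not from the report) re-derives the reported changes from the accident counts in this section:

```python
def pct_change(old: float, new: float) -> float:
    """Percent change from old to new; negative means a decrease."""
    return (new - old) / old * 100

def accident_rate(accidents: float, billion_ton_miles: float) -> float:
    """Accidents per billion ton-miles of product shipped."""
    return accidents / billion_ton_miles

# Interstate hazardous liquid pipelines, 1994 vs. 2003:
print(pct_change(245, 126))   # about -48.6, i.e., "almost 49 percent" decrease

# Interstate natural gas pipelines, 1994 vs. 2003:
print(pct_change(81, 97))     # about +19.8, i.e., "almost 20 percent" increase
```

The 0.41 and 0.25 accidents per billion ton-miles figures are the accident_rate() calculation applied to shipment volumes not reproduced here; no comparable rate exists for natural gas because shipment data are unavailable.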
In 2000, to better focus on safety risks that are unique to individual pipelines, OPS issued the first in a series of integrity management regulations requiring operators to better protect pipeline segments where a leak or rupture could have a significant effect on densely populated or environmentally sensitive areas (called high-consequence areas). Under this new risk-based regulatory approach, operators must, in addition to meeting minimum safety requirements, develop comprehensive plans for identifying the range of risks facing these segments and taking actions to mitigate these risks. According to OPS, it is devoting a large portion of its resources to implementing the integrity management program. OPS issued integrity management requirements for large hazardous liquid pipeline operators (those with 500 or more miles of pipeline) in December 2000, for small hazardous liquid pipeline operators in January 2002, and for natural gas transmission pipeline operators in December 2003. The agency is carrying out inspections of operators’ compliance with these requirements in separate phases, starting with inspections of large hazardous liquid operators from September 2002 through April 2004. In all, the agency will need to inspect the integrity management programs of more than 1,000 individual operators of hazardous liquid and natural gas transmission pipelines. To improve pipeline safety, OPS carries out several types of activities. First, it develops and issues pipeline safety regulations and supports national consensus standards, which provide additional guidance to pipeline operators in managing their pipeline systems safely. In addition, OPS undertakes oversight activities, which include inspections to determine compliance with its regulations, accident investigations, and enforcement.
Finally, OPS administers other programs—including, for example, research and development to enhance pipeline safety technologies, data collection to better define pipeline-related problems and concerns, and education to prevent excavation-related damage. When OPS finds a violation—such as the failure of an operator to inspect various aspects of its pipeline—during an inspection or an investigation after an accident, it may take one of several types of enforcement or administrative actions depending on the nature and severity of the violation. (See table 1.) An enforcement action may require the operator to correct an unsafe condition or practice, or the enforcement action may be a civil penalty (monetary fine). An administrative action notifies a pipeline operator of a safety concern that is not serious enough to require an enforcement action. When imposing civil penalties, OPS must by law consider seven factors: (1) the nature, circumstances, and gravity of the violation; (2) the degree of the operator’s culpability; (3) the operator’s history of prior offenses; (4) the operator’s ability to pay; (5) any good faith shown by the operator in attempting to achieve compliance; (6) the effect on the operator’s ability to continue doing business; and (7) other matters as justice may require. Before OPS imposes a civil penalty, it issues the pipeline operator a notice of probable violation that documents the alleged violation and identifies the proposed civil penalty amount. OPS then allows the operator to present additional evidence either in writing or in an informal hearing. Attorneys from RSPA’s Office of Chief Counsel preside over these hearings. Evidence presented by the operator may result in the civil penalty being affirmed, reduced, or withdrawn. 
If, after this step, the hearing officer determines that a violation occurred, OPS’s associate administrator issues a final order that requires the operator to correct the safety violation (if needed) and pay the penalty (termed “assessed penalties” in this report). The operator has 20 days after the final order is received to pay the penalty. FAA collects civil penalties for OPS. From 1992 through 2002, federal law allowed OPS to assess up to $25,000 for each day that a violation continued, not to exceed $500,000 for any related series of violations. In December 2002, the Pipeline Safety Improvement Act increased these amounts to $100,000 and $1 million, respectively. OPS is a small federal agency. In fiscal year 2003, OPS employed about 150 people—about half of whom were pipeline inspectors. In contrast, the Federal Railroad Administration, another agency within the Department of Transportation, employs 855 people, including more than 400 inspectors to enforce rail safety regulations. In addition, FAA, the agency within the Department of Transportation responsible for the safety of civil aviation, employed about 48,500 people in fiscal year 2003. About 4,000 of these employees were safety inspectors. For fiscal year 2003, OPS received about $66.8 million in appropriations and about $17.5 million from the Pipeline Safety Fund. OPS retains full responsibility for enforcing regulations on interstate pipelines, and it certifies states to perform these functions for intrastate pipelines. Currently, OPS has agreements with 11 states, known as interstate agents, to help it inspect segments of interstate pipeline within these states’ boundaries. However, OPS undertakes any enforcement actions identified through inspections conducted by interstate agents. In 2002, about 400 state pipeline safety inspectors assisted OPS in overseeing pipeline safety within their states, according to the latest available data. 
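The statutory penalty ceilings described above combine a per-day amount with a cap for a related series of violations. A minimal sketch, assuming the ceiling is simply the lesser of the two amounts (the function name is illustrative, not statutory language):

```python
def max_civil_penalty(days_of_violation: int, post_dec_2002: bool = False) -> int:
    """Upper bound on an OPS civil penalty for a continuing violation.

    1992-2002: up to $25,000 per day, not to exceed $500,000 for any
    related series of violations. After the Pipeline Safety Improvement
    Act of December 2002: $100,000 per day, capped at $1 million.
    """
    per_day, series_cap = (100_000, 1_000_000) if post_dec_2002 else (25_000, 500_000)
    return min(per_day * days_of_violation, series_cap)

# A 30-day continuing violation before and after the 2002 act:
print(max_civil_penalty(30))                       # 500000 (hits the old series cap)
print(max_civil_penalty(30, post_dec_2002=True))   # 1000000
```

The actual penalty assessed, of course, also reflects the seven statutory factors listed above; this sketch captures only the ceiling.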
Although in recent years OPS has made a number of changes in its enforcement strategy that have the potential to improve pipeline safety, the effectiveness of this strategy cannot currently be determined because the agency has not incorporated three key elements of effective program management—clear program goals, a well-defined strategy for achieving those goals, and performance measures linked to the program goals. OPS’s enforcement strategy, as well as its overall approach for overseeing pipeline safety, has undergone significant changes in the last 5 years. Before 2000, the agency had emphasized partnering with the pipeline industry to improve pipeline safety rather than punishing noncompliance. In 2000, in response to concerns that its enforcement was weak and ineffective, the agency decided to institute a “tough but fair” enforcement approach and committed to making greater use of all its enforcement tools, including larger civil penalties. In 2001, to further strengthen its enforcement, the agency began issuing more corrective action orders requiring operators to address safety problems that led to pipeline accidents. In 2002, OPS created an Enforcement Office to put more focus on enforcement and help ensure consistency in enforcement decisions. However, the agency has not yet filled key positions in this office. OPS was making these changes in its enforcement strategy at the same time that it was significantly changing its overall approach for overseeing pipeline safety. In particular, in 2000 the agency began implementing its new integrity management program, which requires operators to systematically manage risks to their pipelines in areas where an accident could have the highest consequences. The agency believes that pipeline accidents in these high-consequence areas will decrease because operators are required, under this risk-based approach, to identify and repair significant defects in pipelines located in these areas. 
Officials have emphasized that they believe this program is improving the safety culture of the pipeline industry and has a greater potential to improve safety than enforcing OPS’s traditional minimum safety standards. According to these officials, in the last several years, they have placed a priority on developing and implementing this risk-based regulatory approach and on developing a sound approach for overseeing pipeline operators’ fulfillment of the agency’s new requirements. For example, OPS has developed detailed protocols and guidance for inspecting operators’ identification of risks and resulting repairs and has developed new information systems for tracking the status of issues identified in these inspections. OPS has developed a similar approach for overseeing these companies’ fulfillment of its new requirements for ensuring that their employees are qualified to operate pipeline systems. According to OPS officials, the agency plans to use these new oversight approaches as a model for improving its oversight of operators’ compliance with its minimum safety requirements. Officials have emphasized that their efforts to raise safety standards, inspect pipeline operators against these standards, investigate accidents, and take enforcement actions collectively represent an overall systematic approach to improving pipeline safety. In 2002, OPS began to enforce its new integrity management and operator qualification standards, in addition to its minimum safety standards. For integrity management, the agency has primarily used notices of amendment, which require improvements in procedures rather than stronger enforcement actions to give pipeline operators time to learn how to build programs that meet OPS’s complex standards. OPS has recently started to make greater use of civil penalties in enforcing these standards. The agency has also used a mix of enforcement actions in enforcing its operator qualification standards. 
OPS’s use of civil penalties, corrective action orders, and notices of amendment was significantly greater in 2003 than it was in 1999, the year before OPS started changing its enforcement strategy. (See fig. 4.) According to OPS’s associate administrator, the agency has made significant progress in implementing its integrity management program and now needs to devote more attention to strengthening the management of its enforcement program. Consequently, OPS has recently begun to “reengineer” this program. Efforts under way include developing a new enforcement policy and guidelines, developing a streamlined process for handling enforcement cases, modernizing and integrating the agency’s inspection and enforcement databases, meeting with stakeholders to obtain their views on how to make the enforcement action fit the violation, and hiring additional staff devoted to enforcement. Some aspects of these plans are discussed in more detail in the following sections. Although OPS has overall performance goals, the agency has not established specific goals for its enforcement program. According to OPS officials, the agency’s enforcement program is designed to achieve OPS’s overall performance goals of (1) reducing the number of pipeline accidents by 5 percent annually and (2) reducing the number of spills of oil and other hazardous liquids from pipelines by 6 percent annually. A number of other agency efforts—including the development of new safety standards, inspections, and initiatives to help communities prevent damage to pipelines—are also designed to achieve these goals. The above performance goals are useful agencywide safety goals because they identify the end outcomes, or ultimate results, that OPS seeks to achieve through its various efforts. However, OPS has not established goals for its enforcement program that identify the intermediate outcomes, or direct results, the enforcement program seeks to achieve. 
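For context on what a fixed annual percentage-reduction goal implies, the target level compounds geometrically rather than declining by a constant amount. A short sketch with a hypothetical baseline (the baseline of 100 accidents is illustrative, not a figure from the report):

```python
def compounded_target(baseline: float, annual_reduction: float, years: int) -> float:
    """Level implied by a constant annual fractional-reduction goal."""
    return baseline * (1 - annual_reduction) ** years

# OPS's 5 percent annual accident-reduction goal applied to a
# hypothetical baseline of 100 accidents:
for years in (1, 3, 5):
    print(years, round(compounded_target(100, 0.05, years), 1))
```

After 5 years of meeting the goal, roughly 77 percent of the baseline would remain, not the 75 percent a simple 5-points-per-year reading might suggest.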
Intermediate outcomes show progress toward achieving end outcomes. For example, enforcement actions can result in improvements in pipeline operators’ safety performance that can subsequently result in reduced pipeline accidents and spills. OPS managers have told us that the desired direct results of enforcement actions are deterring noncompliance with safety standards, reducing repeat violations of specific standards, and influencing pipeline operators’ safety performance by requiring safety improvements to correct identified problems. Program outputs, such as enforcement actions, can lead to such intermediate outcomes, which in turn can result in the desired end outcomes of reduced accidents and spills. (See fig. 5.) We have reported that it is a useful practice for federal programs to complement end outcome goals with intermediate outcome goals in order to help show a program’s contribution to desired end outcomes. OPS is considering establishing a goal to reduce the amount of time it takes to issue final enforcement actions. While such a goal could be useful for improving the management of the enforcement program, it does not reflect the direct results the agency hopes to achieve through enforcement. Clear goals for the enforcement program that specify intended intermediate outcomes (such as a reduced number of repeat offenders) would be useful to OPS and to external stakeholders to show how enforcement efforts contribute to pipeline safety. OPS has not fully defined its strategy for using enforcement to achieve its goals. According to OPS officials, the agency’s increased use of civil penalties and corrective action orders reflects a major change in its enforcement strategy. However, although OPS began to implement these changes in 2000, it has not yet developed a policy that describes this new, more aggressive, enforcement strategy or how the strategy will contribute to the achievement of OPS’s performance goals. 
In addition, although OPS’s authorizing statutes and regulations provide general guidance on the use of various types of enforcement actions, the agency does not have up-to-date detailed internal guidelines on the use of its enforcement actions that reflect its current strategy. For example, OPS has an enforcement manual that provides general guidance on the various types of enforcement actions and how each should be used, but this guidance reflects the agency’s earlier, more lenient, approach to enforcement and does not specify the types of situations that may warrant certain types of actions. In addition, although OPS began enforcing its integrity management standards and received greater enforcement authority under the Pipeline Safety Improvement Act in 2002, it does not yet have guidelines in place for enforcing these standards or implementing the new authority provided by the act. An important internal control practice is to have policies and procedures for each agency activity. According to agency officials, OPS management has communicated enforcement priorities and ensured consistency in enforcement decisions through frequent internal meetings and detailed inspection protocols and guidance. However, without enforcement policies and guidelines in place that reflect its current strategy, the agency lacks reasonable assurance that this strategy is being carried out effectively. For example, OPS regional and state inspector staff may not be fully aware of the agency’s current strategy, and regional directors may be less likely to make complex judgments about enforcement in a uniform manner. Agency officials recognize the need to develop an enforcement policy and up-to-date detailed enforcement guidelines and have been working on various aspects of this task. According to OPS officials, the agency has been in a period of “recreating” its enforcement policy. 
To date, the agency has completed an initial set of enforcement guidelines for its operator qualification standards and has developed various other draft guidelines. According to OPS officials, the policy and remaining guidelines, when completed, will reflect the agency’s increased emphasis on civil penalties and corrective action orders and provide detailed guidance on the use of these and other enforcement tools; cover the enforcement of OPS’s traditional safety standards, as well as its new integrity management standards; and discuss how OPS will implement the greater enforcement authority provided to it by the Pipeline Safety Improvement Act of 2002. Agency officials anticipate that the new enforcement policy and remaining guidelines will not be finalized until sometime in 2005 because of the complexity of these tasks. While the development of an enforcement policy and guidelines should help to define OPS’s enforcement strategy, it is not clear whether this effort will link this strategy with results, since agency officials have not established goals specifically for their enforcement efforts. We have reported on the importance for effective program management of connecting strategies to desired results by clearly defining program strategies and developing and presenting a rationale for how these strategies contribute to the achievement of goals. According to OPS officials, the agency uses three types of performance measures to determine the effectiveness of both its enforcement activities and other oversight efforts: (1) the achievement of agency performance goals, (2) agency inspection and enforcement activity, and (3) the integrity management performance of pipeline operators, such as pipeline repairs made in response to the agency’s new requirements. (See table 2.) These measures provide useful information about the agency’s efforts to improve pipeline safety. 
For example, measures of pipeline repairs made in response to the agency’s integrity management requirements provide information on the intermediate outcomes, or the direct results, of this new regulatory approach and help demonstrate how this approach leads to reductions in pipeline accidents and spills. However, OPS’s current measures do not clearly indicate the effectiveness of its enforcement strategy because they do not measure the intermediate outcomes of enforcement actions that can contribute to pipeline safety, such as improved compliance, fewer repeat violations of specific standards, or the implementation of safety improvements required to correct identified problems. As part of efforts to improve its information systems, OPS is considering developing the following additional types of measures of the effectiveness of its enforcement and other oversight activities (see table 2): Measures related to the management of enforcement actions. OPS is developing these new measures as part of efforts to integrate and modernize its inspection and enforcement databases and improve its handling of enforcement cases. Measures of safety improvements that were ordered by OPS. The agency has recently started to collect new data on actions by pipeline operators in response to corrective action orders and may also collect such data for safety orders in 2005, when it plans to start using these types of orders. The results of OPS’s inspections of operator integrity management and operator qualification programs. OPS has developed new databases that track the safety issues identified in integrity management and operator qualification inspections, as well as enforcement actions. In subsequent inspections, inspectors will follow up on these issues and record their status. 
Some of the measures under consideration could provide more information on the intermediate outcomes of the agency’s enforcement strategy, such as the extent of repeat violations and repairs made in response to corrective action orders, as well as other aspects of program performance, such as the timeliness of enforcement actions. In addition, measures of the results of integrity management and operator qualification inspections could provide further information on the intermediate outcomes of these new regulatory approaches. We have found that agencies that are successful in measuring performance strive to establish measures that demonstrate results, address important aspects of program performance, and provide useful information for decision making. While OPS’s efforts to develop new measures have the potential to eventually produce better information on the performance of its enforcement program than is currently available, the agency has not fully adopted key practices for achieving these characteristics of successful performance measurement systems. The following sections discuss these characteristics and the extent to which OPS has fulfilled them in developing measures of enforcement performance. Measures should be tied to program goals and demonstrate the degree to which the desired program results are achieved. These program goals should in turn be linked to overall agency goals. The new measures that OPS is considering are not based on such linkages, because the agency has not established goals for its enforcement program. Leading organizations seek to establish clear hierarchies of performance goals and measures that link the goals and measures for each organizational level to each successive level. Without such clear hierarchies, an agency will lack a straightforward road map showing how daily activities contribute to attaining agencywide goals. 
Although OPS is considering some new measures that could provide more information on the intermediate outcomes of its enforcement strategy, without first setting clear goals that identify the various direct results the agency seeks to achieve through enforcement, it may not choose the most appropriate measures of results and may not follow through in developing such measures. For example, although OPS adopted a more aggressive enforcement strategy starting in 2000, without appropriate goals and measures the agency cannot determine the effects of this new strategy on operators’ compliance with its safety standards. OPS officials acknowledge that it is important to develop such intermediate goals and related measures but emphasize that it is challenging to do so because of the diversity of pipeline operations and the complexity of OPS’s regulations. Measures of program results can help hold agencies accountable for the performance of their programs. Congress needs information on program results to support its oversight of agencies and their budgets. Stakeholders can use this information to accurately judge program effectiveness. We asked a variety of pipeline safety stakeholders—including representatives of industry; federal, state, and local agencies; and advocacy groups—for their views on the effectiveness of OPS’s enforcement efforts. While many of them stated that they believe OPS’s enforcement program has improved in recent years, many also stated that they could not comment on the impact of the agency’s enforcement actions on pipeline safety. Some noted that this effectiveness is difficult to judge because of a lack of data. For each goal, programs should select a few measures that cover key performance dimensions and take different priorities into account. 
While the new measures that OPS is considering cover a wider range of performance aspects than do its current measures, the agency may not be able to make sound decisions about which measures are the most important without first setting goals for its enforcement program. An agency official told us that a key factor in choosing final measures would be the availability of supporting data. However, the most essential measures, such as measures showing the enforcement program’s progress in achieving compliance, may require the development of new data. Developing appropriate performance measures requires carefully coordinated planning, including a systematic approach for identifying and refining potential measures that address various important aspects of program performance. OPS has not comprehensively examined its needs for measuring enforcement results as well as the results of other oversight efforts to ensure that its choice of measures will take into account and balance its various priorities. For example, OPS has developed databases that will track the status of safety issues identified in integrity management and operator qualification inspections, but has not yet developed the capability to centrally track the status of safety issues identified in determining compliance with its minimum safety standards. The results of follow-up by inspectors on the status of these issues are maintained at the regional office level but are not recorded in the agency’s inspection or enforcement databases. Agency officials have told us that they are considering how to add this capability as part of an effort to modernize and integrate these databases and that the integrity management and operator qualification databases will serve as a model for this effort. However, the agency has not yet put in place a systematic integrated approach for designing measures of oversight performance, including enforcement performance. 
Performance measures should provide agency managers with timely, action-oriented information in a format that helps them make decisions that improve program performance, including decisions to adjust policies and priorities. OPS uses its current measures of enforcement performance in a number of ways to oversee pipeline safety, including monitoring pipeline operators’ safety performance and planning inspections. While these uses are important, they are only indirectly related to the management of enforcement results. Agency officials have made progress in this area by identifying possible new measures of enforcement results and other aspects of program performance, such as measures of the timeliness of enforcement actions that may prove more useful for managing the enforcement program. Not having adequate measures limits OPS’s ability to make informed decisions about its enforcement strategy. Although OPS has made major changes in its enforcement strategy in the last several years, it has decided on these changes with little information on the effectiveness of its prior strategy. Agency officials explained that they decided to increase the use of civil penalties and corrective action orders to improve public confidence in the agency’s ability to enforce its standards, following the major pipeline incidents in Bellingham, Washington, and Carlsbad, New Mexico, in 1999 and 2000, respectively. They also noted that their decisions about enforcement policy are part of their overall approach for overseeing and improving pipeline safety and are not based on trends in performance measures. In response to criticism that its enforcement activities were weak and ineffective, OPS increased both the number and the size of the civil monetary penalties it assessed beginning in 2000. Pipeline safety stakeholders we spoke with expressed differing views on whether OPS’s civil penalties are effective in deterring noncompliance with pipeline safety regulations. 
Most of the penalties that OPS assessed have been paid; however, OPS and FAA lack important management controls to ensure that penalties are collected. The civil penalty results we present are mostly for OPS’s enforcement of its minimum safety standards because OPS did not begin to enforce its integrity management standards until 2002. OPS proposed and assessed more civil penalties during the past 4 years— under its current “tough but fair” enforcement approach—than it did in the previous 5 years, when it took a more lenient “partnering” enforcement approach. (See table 3. Also, see the previous section and app. II for a discussion of changes in OPS’s enforcement approaches.) From 2000 through 2003, OPS proposed 127 civil penalties (about 32 per year on average) compared with 94 civil penalties (about 19 per year on average) from 1995 through 1999. Furthermore, of these proposed civil penalties, 88 were assessed from 2000 through 2003 (22 per year on average), whereas 70 were assessed from 1995 through 1999 (about 14 per year on average). During the first 5 months of 2004, OPS proposed 38 civil penalties. While the recent increase in the number and the size of OPS’s civil penalties occurred under the agency’s new “tough but fair” enforcement approach, other factors, such as more severe violations, may be contributing factors as well. Overall, OPS does not use civil penalties extensively. Civil penalties represent about 14 percent (216 out of 1,530) of all enforcement actions taken over the past 10 years. OPS makes more extensive use of other types of enforcement actions that require that operators act to correct safety violations. In contrast, civil penalties do not require a safety improvement, but represent a monetary sanction for violating safety regulations. Finally, OPS expects to make greater use of civil penalties for violations identified during integrity management inspections as it gains more experience with implementing this safety approach. 
The sizes of the civil penalties have increased. From 1995 through 1999, the average proposed civil penalty was about $19,000. From 2000 through 2003, the average proposed civil penalty increased by over 132 percent to about $45,000. Similarly, although to a lesser degree, assessed penalties increased. From 1995 through 1999, the average assessed civil penalty was about $18,000. From 2000 through 2003, the average assessed civil penalty increased by 62 percent to about $29,000. (All amounts are in current year dollars. Inflation was low during this period. If the effects of inflation are considered, the average assessed penalty for 1995 through 1999 would be $21,000, and the average assessed penalty for 2000 through 2003 would be $30,000, in 2003 dollars.) We excluded two proposed penalties totaling over $5 million resulting from the Bellingham and Carlsbad incidents from our analysis because both were extraordinarily large (no other proposed penalty exceeded $674,000), and OPS, as of mid-July, had not assessed a penalty for the Carlsbad incident. (RSPA referred the penalty to the Department of Justice for judicial action.) Including these proposed penalties would have skewed our results by making the average penalty appear larger than it actually is. For the 216 penalties that were assessed from 1994 through 2003, OPS assessed the penalty that it proposed 69 percent of the time (150 civil penalties). (See table 4.) For the remaining 66 penalties, OPS reduced the assessments by about 37 percent—from a total of about $2.8 million to about $1.7 million. However, the dollar difference between the proposed and the assessed penalties would be over three times as large had our analysis included the extraordinarily large penalty for the Bellingham, Washington, incident. For this case, OPS proposed a $3.05 million penalty and had assessed $250,000 as of July 2004. 
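The per-year averages and percentage increases above are straightforward to recompute. The minimal Python sketch below uses the rounded figures cited in the text, so its results are approximate; for example, the rounded proposed averages yield an increase of about 137 percent, whereas the report's "over 132 percent" presumably reflects the unrounded averages.

```python
# Recompute the penalty statistics using the rounded figures cited in
# the text; results are therefore approximate.

proposed_1995_99 = 94    # penalties proposed, 1995 through 1999 (5 years)
proposed_2000_03 = 127   # penalties proposed, 2000 through 2003 (4 years)

print(round(proposed_1995_99 / 5))   # about 19 proposed per year
print(round(proposed_2000_03 / 4))   # about 32 proposed per year

avg_assessed_1995_99 = 18_000  # average assessed penalty, 1995-1999
avg_assessed_2000_03 = 29_000  # average assessed penalty, 2000-2003

increase = (avg_assessed_2000_03 - avg_assessed_1995_99) / avg_assessed_1995_99
print(f"{increase:.0%}")  # about 61% from rounded figures (report: 62 percent)

avg_proposed_1995_99 = 19_000  # average proposed penalty, 1995-1999
avg_proposed_2000_03 = 45_000  # average proposed penalty, 2000-2003

increase = (avg_proposed_2000_03 - avg_proposed_1995_99) / avg_proposed_1995_99
print(f"{increase:.0%}")  # about 137% from rounded figures (report: over 132 percent)
```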
If we include this penalty in our analysis, then over this period OPS reduced total proposed penalties by about two-thirds, from a total of about $5.8 million to about $2 million. According to an OPS official, the agency reduces penalties for a variety of reasons, including when the operator presents evidence that the inspector’s finding is weak or wrong or when the pipeline’s ownership changes between the proposal and the assessment of the penalty. OPS’s database does not provide summary information on why penalties are reduced. It was not practical for us to gather information on a large number of reduced penalties because doing so would have required reviewing each penalty record and discussing each penalty with headquarters and regional officials. As a result, we are not able to identify the most common reasons why penalties were reduced; instead, we reviewed several of these penalties to provide examples. OPS reduced one of the penalties we reviewed because the operator provided evidence that OPS inspectors had miscounted the number of pipeline valves that OPS said the operator had not inspected. Thus, the violation was not as severe as OPS had stated, and OPS reduced the proposed penalty from $177,000 to $67,000. OPS reduced another proposed penalty from $45,000 to $27,000 because the operator took immediate action to correct the violation. As indicated earlier in this report, operators’ good-faith efforts to achieve compliance are one factor that OPS must, by law, consider in imposing civil penalties. Because we reviewed only a few instances in which penalties were reduced, we cannot say whether these examples are typical. Our results may differ from the results that OPS reports because of the way the data are organized. OPS reports an action in the year in which it occurred. For example, OPS may propose a penalty in one year and assess it in another year (and possibly collect it in still another year). 
The data for this action would show up in multiple years. Thus, OPS’s data represent the activity that took place in any one year, but this presentation does not allow users to determine the extent to which the proposed penalties resulted in assessed penalties or whether the proposed penalty amounts were reduced, since these actions may be contained in OPS reports for different years. To better track the disposition of civil penalties, we associated assessed penalties and penalty amounts with the year in which they were proposed—even if the assessment occurred in a later year. Although OPS has increased both the number and the size of the civil penalties it has imposed, the effect of this change, if any, on deterring noncompliance with safety regulations is not clear. The stakeholders we spoke with expressed differing views on whether OPS’s civil penalties deter noncompliance. The pipeline industry officials we contacted said that to a certain extent OPS’s civil penalties encourage pipeline operators to comply with pipeline safety regulations. One group of pipeline industry officials said that pipeline companies want to be on the record as being in compliance with pipeline safety regulations and therefore try to avoid any situation that would require OPS to issue an enforcement action. However, some industry officials said that OPS’s enforcement actions are not the operators’ primary motivation for safety. Instead, they said that the pipeline operators are motivated to operate safely because they need to avoid any type of accident, incident, or OPS enforcement action that impedes the flow of products through pipelines, hindering operators’ abilities to provide good service to their customers. Pipeline industry officials also said that they want to operate safely and avoid pipeline accidents because such accidents negatively affect the public’s perception of the company. 
In addition, other industry officials noted that OPS has other enforcement actions, such as corrective action orders, that give operators more incentive to operate safely because corrective action orders can cost companies much more money than civil penalties. For example, according to an OPS official, the corrective action order OPS imposed after the 1999 pipeline accident in Bellingham, Washington, cost Olympic Pipeline more than $100 million. This sum includes about $53 million to repair and replace needed infrastructure and an estimated $50 million in lost revenue. Finally, the three pipeline operators with whom we spoke indicated that any enforcement action would deter noncompliance with pipeline safety regulations because of the resulting negative publicity and the potential for costly private litigation against the operator. Most of the interstate agents and representatives of their associations, insurance company officials, and the local representative from one state expressed views similar to those of the pipeline industry officials. They said that, to a certain extent, they believe civil penalties deter operators’ noncompliance with regulations. For example, some of the interstate agents said that civil penalties—no matter what the amount—are a deterrent because the penalty puts the pipeline operator in the public eye. However, a few disagreed with this point of view. For example, representatives of the state associations and the local representative from one state said that OPS’s civil penalties are too small to be a deterrent and that other OPS actions and the costs resulting from accidents, including private litigation, are better deterrents. As discussed earlier, the average civil penalty that OPS assessed from 2000 through 2003 was about $29,000. Pipeline safety advocacy groups that we talked to also believed that the civil penalty amounts OPS imposes are too small to have any deterrent effect on pipeline operators. 
However, a representative from one of the groups thought that the threat of additional civil penalties from OPS should influence a pipeline operator to comply with pipeline safety regulations in the future. According to economic literature on deterrence, pipeline operators may be deterred if they expect a sanction, such as a civil penalty, to exceed any benefits of noncompliance. Such benefits could, in some cases, be lower operating costs. The literature also recognizes that the negative consequences of noncompliance—such as those stemming from lawsuits, bad publicity, and the value of the products lost from accidents—can deter noncompliance along with regulatory agency oversight. Thus, for example, the expected costs of a legal settlement could overshadow the lower operating costs expected from noncompliance, and noncompliance might be deterred. According to OPS, its policy since 1999 has been to make civil penalty information available to the public by publishing on its Web site the final orders for all enforcement actions, as well as all administrative actions. We found that from 2000 through 2003 (the period OPS describes as its “tough but fair” enforcement era), OPS had posted 58 percent of final orders involving assessed civil penalties on its Web site. An agency official explained that OPS has not posted the remaining penalties because of high staff turnover. To the extent that publicizing noncompliance information on the Web site does deter noncompliance, OPS’s incomplete posting of assessed civil penalty information is not facilitating the achievement of this goal. For the 216 penalties that OPS assessed from 1994 through 2003, pipeline operators paid the full amounts 93 percent of the time (200 instances) and reduced the amounts 1 percent of the time (2 instances). (See fig. 6.) Fourteen penalties (6 percent) remain unpaid, totaling about $837,000 (about 18 percent of the penalty amounts). 
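The deterrence reasoning in the economics literature reduces to an expected-cost comparison: noncompliance is deterred when the expected sanction, together with other expected consequences, exceeds the benefit of noncompliance. The sketch below is a stylized illustration using entirely hypothetical numbers, not figures drawn from OPS data.

```python
def deterred(p_detect, penalty, other_costs, benefit):
    """Stylized deterrence condition: the expected sanction plus other
    expected consequences (litigation, bad publicity, lost product)
    must exceed the benefit of noncompliance, such as lower operating
    costs. All inputs are hypothetical."""
    return p_detect * penalty + other_costs > benefit

# An average-sized penalty alone may not outweigh the expected savings...
print(deterred(p_detect=0.5, penalty=29_000, other_costs=0, benefit=50_000))        # False
# ...but adding the expected cost of a legal settlement can tip the balance.
print(deterred(p_detect=0.5, penalty=29_000, other_costs=250_000, benefit=50_000))  # True
```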
In some instances, pipeline operators pay their penalties based on the proposed—rather than assessed—amount. Our results do not include an analysis of the number of penalties paid prior to assessment because FAA’s and OPS’s data lacked information necessary to complete the analysis. We followed up in one of the two instances in which the operator paid less than the assessed amount. In this instance, the operator requested that OPS reconsider the civil penalty, and OPS reduced the assessed penalty from $5,000 to $3,000 because the operator had a history of cooperation and OPS wanted to encourage future cooperation. Neither FAA’s nor OPS’s data show why the 14 unpaid penalties have not been collected. To learn why, we spoke with both agencies about the status of these penalties and, based on the information provided, we determined that OPS closed 2 of the penalty cases without collecting the penalties, operators are appealing 5 penalties, OPS recently assessed 3 penalties, and OPS acknowledged that 4 penalties (totaling $45,200) should have been collected. For some penalties, the information that FAA and OPS provided about the collection status conflicted. For example, FAA reported to us that 2 penalties had been paid recently, which was not reflected in the information reported to us by OPS. Regarding the 4 penalties that should have been collected, OPS files indicated that final assessments had been made, but because FAA records did not include final orders, FAA lacked the information it needed to take collection action. After we brought these penalties to OPS’s attention, OPS sent FAA the information it needed to pursue collection. As of June 2004, FAA had created accounts for these penalties and will begin sending balance due notices after the proper waiting period has expired. 
We were not able to determine the extent to which operators’ payments were timely (operators have 20 days to pay penalties) because we judged that the data elements in OPS’s and FAA’s databases were not reliable enough to do so. (See app. I.) Even though most civil penalties are paid, their payment is more likely due to operators’ willingness to pay than to FAA’s or OPS’s actions. FAA is not aware of the full range of civil penalties that it may ultimately be responsible for collecting because OPS does not routinely notify FAA of proposed or assessed civil penalties. We found that for the period from 1994 through 2003, FAA had no record of 44 of the 290 civil penalties (totaling about $500,000 in assessments) that OPS had proposed. It is important for FAA to be aware of all proposed civil penalties because operators may choose to pay the proposed penalty rather than waiting for the final assessment. When FAA does not have a record of a civil penalty for which it receives a payment, it has to contact OPS for information about the penalty. In addition, if FAA does not know that OPS has assessed a civil penalty, it cannot act to collect the penalty if the operator does not pay on time. Staff from OPS and RSPA’s Office of Chief Counsel told us that they had not provided FAA with documentation of proposed or assessed penalties because each thought the other office was doing so. When we brought this apparent communication gap to OPS’s attention, OPS agreed that it should provide FAA with civil penalty documentation so that FAA would be aware of the penalties that may be and are assessed. As of mid-July 2004, OPS had not begun to provide the documentation. Although OPS is responsible for enforcing pipeline safety, it does not monitor the extent to which FAA collects civil penalties for pipeline safety violations. OPS does not request or receive regular updates from FAA about the status of penalties or overall collections, although such reports are available. 
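The reconciliation described above, which found 44 of 290 proposed penalties missing from FAA's records, amounts to a set difference on penalty identifiers. A minimal sketch with hypothetical case numbers:

```python
# Hypothetical penalty numbers; in our analysis the identifiers came from
# OPS's enforcement database and FAA's receivables and collections database.
ops_proposed = {"E-2001-101", "E-2001-102", "E-2002-201", "E-2003-301"}
faa_on_record = {"E-2001-101", "E-2003-301"}

# Penalties OPS proposed that FAA cannot collect because it has no record of them.
missing_at_faa = sorted(ops_proposed - faa_on_record)
print(missing_at_faa)  # ['E-2001-102', 'E-2002-201']
```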
In addition, FAA does not routinely make available to OPS its reports on the status of civil penalties, although it does send them to RSPA’s Office of Chief Counsel. We found that OPS was unaware that FAA prepares regular reports about penalties that are overdue or have been paid but not closed out. OPS does not evaluate the civil penalty data that it maintains in its enforcement database or review the data to ensure that its information about civil penalties is complete and up to date. OPS also does not compare its civil penalty data with FAA’s data to identify missing or incomplete data. Finally, OPS does not evaluate its own enforcement database to identify overdue penalties and check with FAA on their status. After we brought these issues to OPS’s attention, OPS officials told us that OPS is looking into setting up a system to monitor case activities and notify OPS and the deputy chief counsel when it is time to move a case to the next step. OPS has been focusing much of its effort on safety initiatives in areas other than the enforcement of minimum standards, such as its integrity management program and operator qualification standards, because the agency believes that these initiatives will result in major improvements in the overall safety of the pipeline industry. In light of the progress OPS has made in these other areas and the issues we raised with OPS in preparing this report, OPS has indicated that it will devote more attention to managing its enforcement program than it has previously. However, because OPS cannot measure the effects of changes in its enforcement strategy on operators’ performance, it will not know whether any management changes it makes lead to improvements in the industry’s compliance. 
Without goals for its enforcement program, a well-defined strategy for achieving these goals, and performance measures linked to program goals, OPS cannot demonstrate how its enforcement efforts contribute to pipeline safety or learn from changes in its enforcement policy. Although operators pay the vast majority of the civil penalties that OPS proposes or assesses, their compliance is more likely due to their willingness to pay than to OPS’s or FAA’s efforts because neither agency has been providing the other with the information needed to ensure effective penalty collections. If FAA does not know that OPS has imposed civil penalties, it cannot take actions to collect them, and if FAA does not communicate the status of its collections to OPS, OPS misses opportunities to understand the effects of its enforcement actions on operators’ behavior. Finally, OPS’s incomplete implementation of its policy to post its civil penalty actions on its Web site limits the public’s ability to understand the enforcement actions that OPS has taken. We are making a total of six recommendations: three to improve OPS’s enforcement strategy and three to improve management controls over the collection of civil penalties. To improve OPS’s ability to determine the effectiveness of its enforcement strategy and make adjustments to this strategy as needed, we are recommending that the Secretary of Transportation direct the Associate Administrator for the Office of Pipeline Safety to take the following three actions: (1) establish goals for its enforcement program, (2) fully define its strategy for achieving these goals, and (3) establish a systematic approach for designing performance measures that incorporates identified key practices. 
We also recommend that the Secretary of Transportation direct the Associate Administrator, OPS, and the Administrator, FAA, as appropriate, to take the following three actions to improve management controls over the collection of civil penalties and the public dissemination of information on enforcement actions: (1) OPS should inform FAA of all proposed and assessed civil penalties so that FAA can carry out its collection functions, (2) FAA should share its reports on collections with OPS so that OPS will know the status of civil penalty enforcement actions, and (3) OPS should post all enforcement actions on its Web site, consistent with its policy. In commenting on a draft of this report, the Associate Administrator for Pipeline Safety in RSPA and other officials told us that OPS welcomed the insights provided by the report and generally concurred with the report and its recommendations. The associate administrator emphasized that OPS enforcement has improved since 1999 through use of its full range of enforcement tools, including civil penalties to punish violators, corrective action orders to address immediate and potential safety concerns, and notices of amendment to require and monitor changes to safety programs. OPS told us that it continues to seek constructive interactions with the industry through the integrity management program; however, companies breaking the law must expect to be punished, and OPS will use all of its enforcement authority to achieve 100 percent safety compliance. Regarding the effectiveness of OPS’s enforcement strategy, OPS told us that, to help attain its primary performance goals (reducing the number of accidents by 5 percent annually and reducing the number of pipeline spills by 6 percent annually), it will establish intermediate outcomes as discussed in the report to help improve its ability to evaluate the effectiveness of specific enforcement tools. 
The officials also indicated that OPS expects to learn more about enforcement effectiveness through its two statutorily mandated technical advisory committees, which are reviewing, among other things, proposed modifications and improvements to the enforcement program. These committees were established pursuant to the Natural Gas Pipeline Safety Act of 1968 and the Hazardous Liquid Pipeline Safety Act of 1979, in part to serve as a sounding board for discussing pipeline safety policy issues. We are pleased with OPS’s constructive response to our draft report and with the department’s plans to implement our recommendations on establishing goals for its enforcement program, defining a strategy for achieving those goals, and establishing a systematic approach to designing performance measures. Regarding its use of civil penalties, OPS indicated that it plans to automate enforcement tracking to better ensure consistent application of policy across regional offices and improve management controls over the collection of civil penalties. To improve management controls over civil penalty collections, OPS explained that it envisions a solution using information technology to provide transparent, real-time tracking of civil penalty assessment activity between FAA, RSPA’s Office of Chief Counsel, and OPS. We are pleased that, until these enhancements can be deployed, OPS has agreed to take the steps we recommended to improve management controls over the collection of civil penalties. We are sending copies of this report to congressional committees and subcommittees with responsibility for transportation safety issues; the Secretary of Transportation; the Administrator, RSPA; the Administrator, FAA; the Associate Administrator, OPS; and the Director, Office of Management and Budget. We will also make copies available to others upon request. This report will be available at no charge on the GAO Web site at http://www.gao.gov. 
If you have any questions about this report, please contact me at [email protected] or James Ratzenberger at [email protected]. Alternatively, we can be reached at (202) 512-2834. Staff who made key contributions to this report are listed in appendix V. To evaluate the effectiveness of the Office of Pipeline Safety’s (OPS) enforcement strategy, we identified key elements of effective program management by reviewing our products on this subject, Office of Management and Budget guidance, and studies by the National Academy of Public Administration and the Urban Institute. We then determined the extent to which the office’s strategy incorporates these three elements (clear program goals, a well-defined strategy for achieving goals, and measures of performance that are linked to program goals). For each element, we obtained information from OPS on its activities and plans and compared this information to the published criteria. We also reviewed the Web sites of selected regulatory agencies to determine how these other agencies measure enforcement results. As part of this work, we monitored OPS’s efforts to develop a strategy that applies to all its enforcement activities and to improve its performance measurement capabilities. We supplemented these activities by interviewing pipeline safety stakeholders to obtain their views on the effectiveness of OPS’s enforcement efforts. These stakeholders included industry trade associations, federal agencies, state agencies and associations, a local representative from Virginia, and pipeline safety advocacy groups. (Stakeholders that we contacted for this and other aspects of our work are listed at the end of this appendix.) To examine OPS’s civil penalty actions, we reviewed legislation; OPS regulations; and OPS manuals, guidelines, and protocols setting forth OPS’s legal authority and policies and procedures for implementing this authority. 
We also obtained information from OPS, the Research and Special Programs Administration (RSPA), and the Federal Aviation Administration (FAA) about pipeline safety enforcement policies and procedures. (Since 1993, FAA’s general accounting division has managed the accounts receivable for OPS’s pipeline safety and RSPA’s hazardous materials programs.) We discussed FAA’s collection activities with OPS, RSPA’s Office of Chief Counsel, and FAA officials. In developing information about OPS’s use of civil penalties, we analyzed civil penalty data from OPS’s enforcement database and FAA’s civil penalty receivables and collections database. Because neither OPS’s nor FAA’s civil penalty data were complete, we combined them into a single database. The data were incomplete in three ways. First, the OPS data set included penalties that were not in the FAA data set. Second, each data set had fields (variables) not contained in the other data set. Third, certain fields were common to both data sets, but the data for a particular enforcement action were present in one data set and missing in the other data set. For such variables, we substituted the available data for the missing data when possible. In addition, OPS and FAA sometimes used different methods for numbering penalties. We converted all penalty numbers to a standard format so we could combine the two data sets. When data were inconsistent with other logically related data, we corrected the data using our best judgment. For example, if OPS data showed that the assessed amount was $75,000, but both OPS and FAA data showed that the proposed amount was $10,500 and FAA data showed that both the final order and collected amount was $7,500 without any notation for the difference between $7,500 and $75,000, we assumed that the $75,000 amount was a data entry error and corrected the assessed amount to be consistent with the collected amount. When possible, we supplemented data in the two data sets with data from other sources. 
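The combining procedure described above, which matches records on a standardized penalty number and substitutes available data for missing fields, can be sketched as follows. The field names and the numbering-format rule are hypothetical; they stand in for the actual formats in the two databases.

```python
def standardize(penalty_no):
    """Convert a penalty number to a common format (hypothetical rule:
    uppercase, strip spaces and dashes) so OPS and FAA records match."""
    return penalty_no.upper().replace(" ", "").replace("-", "")

def combine(ops_records, faa_records):
    """Merge the two data sets, keyed on the standardized penalty number;
    where a field is missing in one set, substitute the other's value."""
    merged = {}
    for source in (ops_records, faa_records):
        for rec in source:
            key = standardize(rec["penalty_no"])
            row = merged.setdefault(key, {})
            for field, value in rec.items():
                # Fill the field only if it is still empty in the merged row.
                if row.get(field) in (None, "") and value not in (None, ""):
                    row[field] = value
    return merged

# Hypothetical records: the same penalty, numbered differently in each system.
ops = [{"penalty_no": "e-1999-01", "proposed": 45000, "assessed": None}]
faa = [{"penalty_no": "E 1999 01", "proposed": None, "assessed": 27000, "collected": 27000}]
combined = combine(ops, faa)
print(combined["E199901"])
```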
For example, if OPS’s Web site included a final notice with the assessed penalty amount, but this information was not captured in the data sets, we added the information to the combined data set. We also discussed our preliminary results with OPS and FAA, and their comments led to further corrections. For example, the combined OPS and FAA data suggested that 16 penalties had not been collected. After we discussed this information with OPS and FAA, they provided documentation showing that many fewer penalties were uncollected. For example, as discussed in the body of the report, FAA reported to us that 2 penalties had been paid recently, which was not reflected in the information reported to us by OPS. In addition, we determined that OPS closed 2 of the penalty cases without collecting the penalties, operators are appealing 5 penalties, OPS recently assessed 3 penalties, and OPS acknowledged that 4 penalties (totaling $45,200) should have been collected. In determining how OPS used its enforcement options to address noncompliance with pipeline safety regulations, we analyzed enforcement data for enforcement actions opened between 1994 and 2003 from OPS’s enforcement database and FAA’s civil penalty database. OPS’s enforcement database contains information about instances when OPS has taken some type of enforcement action. OPS officials acknowledged that OPS’s enforcement database lacked complete information on penalty collections and indicated that FAA tracks the collection of OPS’s civil penalties. To assess the reliability of OPS’s and FAA’s data, we (1) performed electronic testing for obvious errors in accuracy and completeness and (2) interviewed officials from OPS’s enforcement office and FAA’s general accounting division who are knowledgeable about the data and how the data were entered. We consulted regularly with these officials to resolve the handling of problematic data entries. 
After these actions, and after making needed corrections, we determined that the data were sufficiently reliable for the types of analyses we wanted to pursue for this report except to determine whether operators paid penalties in a timely manner. In this instance, the discrepancies between OPS’s and FAA’s data and between FAA’s data and the case files we reviewed were too great for the data to be judged reliable. For example, we found that the collection dates in FAA’s database did not match the hard-copy documentation for about half of 20 cases that we reviewed by hand. OPS identifies each enforcement action in its enforcement database with a unique number. This number identifies the region, the year OPS initiated the action, the type of operator, and some types of enforcement actions. OPS’s numbering system had certain characteristics that limited our ability to analyze the data as fully as we wanted. For example, the system does not identify enforcement actions where OPS used more than one type of enforcement, such as those that have both a compliance order and a notice of amendment, and it does not identify actions that have a compliance order. Therefore, we asked OPS to provide additional information detailing the enforcement and administrative actions taken in each instance. We used these data to divide the enforcement data into four sets of enforcement actions: civil penalties, other enforcement actions, administrative actions, and complex actions. As discussed earlier in this report, our reporting of enforcement actions differs from OPS’s. Whereas we report these actions for the year when OPS first responded to the related violation, OPS, in its annual enforcement summary report, reports the enforcement actions it has taken during that year, regardless of when it first responded to the related violations. The two reporting methods are not comparable. 
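The four-way grouping described above can be sketched as a simple classifier over the action types OPS supplied for each enforcement record. The action labels below are hypothetical shorthand, not OPS's own database codes, and the rule for "complex" actions is an assumption based on the description in the text.

```python
def categorize(actions):
    """Assign an enforcement record to one of the four sets used in our
    analysis, based on the list of actions taken in that instance."""
    if len(actions) > 1:
        return "complex"          # more than one type of action taken
    action = actions[0]
    if action == "civil_penalty":
        return "civil_penalty"
    if action in {"warning_letter", "letter_of_concern"}:
        return "administrative"
    return "other_enforcement"    # e.g., compliance order, corrective action order

print(categorize(["civil_penalty"]))                            # civil_penalty
print(categorize(["compliance_order", "notice_of_amendment"]))  # complex
print(categorize(["warning_letter"]))                           # administrative
print(categorize(["corrective_action_order"]))                  # other_enforcement
```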
As a means of better understanding OPS’s civil penalty process, we reviewed 20 civil penalty actions initiated from 1992 through 2003. We chose penalties that, on their face, appeared large or small, seemed to have gone unpaid for a long period, may have involved repeat offenders, or appeared to have been reduced between the assessment and the collection. We reviewed the case file documentation and discussed the penalty with headquarters and regional enforcement officials. The number of penalties we reviewed (usually about two to three penalty actions for each criterion we used) was not large enough to support general conclusions. Reviewing these files was time consuming, and reviewing a larger number of files, as well as obtaining any supporting documentation from the regional offices that initiated them, was not practical. In determining whether OPS’s civil penalties deter noncompliance, we interviewed pipeline safety stakeholders to obtain their views on the deterrent effect of OPS’s civil penalties. These stakeholders included industry trade associations, pipeline companies, state agencies and associations, insurance companies, a local representative from Virginia, and pipeline safety advocacy groups. We supplemented the stakeholders’ comments with information from economic literature on deterrence. The literature on deterrence that we reviewed included A. Mitchell Polinsky and Steven Shavell, “The Economic Theory of Public Enforcement of Law,” Journal of Economic Literature, Vol. XXXVIII (March 2000); Oren Bar-Gill and Alon Harel, “Crime Rates and Expected Sanctions: The Economics of Deterrence Revisited,” Journal of Legal Studies, Vol. XXX (June 2001), pp. 485-501; Isaac Ehrlich, “Crime, Punishment, and the Market for Offenses,” Journal of Economic Perspectives, Vol. 10 (Winter 1996), pp. 43-67; Richard A. Posner, Economic Analysis of Law, 3rd ed. (Boston: Little, Brown and Company, 1986); and Steven D. Levitt, “Why Do Increased Arrest Rates Appear to Reduce Crime: Deterrence, Incapacitation, or Measurement Error?” working paper 5286, National Bureau of Economic Research (September 1995). In determining how OPS’s policies and procedures have changed over time, we conducted activities as described above to cover the period 1994 through 2003. In determining whether OPS substitutes civil penalties for corrective action orders, we reviewed the underlying purposes of each enforcement action and discussed them with OPS headquarters and regional enforcement officials. We reviewed a very limited number of enforcement cases involving civil penalties, and none of these records indicated that one type of action had been substituted for another. Because of the substantial effort involved, it was not practical to review a large number of enforcement cases. In learning how OPS’s use of civil penalties compares to FAA’s for air carriers, we used the information described above to summarize OPS’s civil penalty information. We compared this information to similar information gathered under a concurrent engagement on FAA’s enforcement activities. We chose FAA as the comparison agency because it is another transportation safety agency and because information was readily available. Because considerable time and effort are needed to understand agencies’ enforcement policies and practices, as well as to collect, ensure the quality of, and analyze data, it was not practical to expand the comparison to other agencies. In determining the extent to which OPS had implemented the recommendations involving state activities in our May 2000 report, we asked OPS officials to describe the actions taken to implement the recommendations. We then interviewed all of the interstate agents to determine the extent to which they believed OPS had implemented our recommendations. 
In some instances, following interviews with interstate agents, we discussed with OPS the overall nature of the interstate agents’ views. In assessing the extent to which OPS had implemented our recommendation on civil penalties, we discussed with OPS how it had responded to our recommendation. To determine whether it had made more use of the full range of its enforcement options, we examined data from OPS’s pipeline incident processing enforcement system database and FAA’s civil penalty receivables and collections database. We analyzed the data to determine the degree to which OPS used enforcement and administrative actions from 1995 through 2000 and compared these results with the use of these actions from 2000 through 2003. Finally, in comparing changes in OPS’s enforcement actions with industry and economic trends, we interviewed OPS officials to determine factors that they said influenced the trends in the number of enforcement actions they took from 1994 through 2003. We also asked pipeline safety stakeholders to identify factors that might have influenced OPS’s enforcement and administrative actions during this period and used our own knowledge of the area to select others. The factors identified are those discussed in appendix IV. We then examined data from OPS’s pipeline incident processing enforcement system database and FAA’s civil penalty receivables and collections database by analyzing the trends in the number of enforcement and administrative actions that OPS took from 1994 through 2003 and comparing these data visually with the selected factors. We obtained data from OPS on pipeline accidents, pipeline mileage, and OPS’s inspection activities. We obtained data on natural gas and petroleum consumption from the Energy Information Administration. We obtained data on new construction trends from the Census Bureau. 
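The trend comparisons described above were made visually. As an illustration only (this is not part of GAO's methodology, and the annual counts below are hypothetical), such year-by-year comparisons could also be quantified with a simple correlation between two annual series:

```python
# Illustrative sketch only: GAO compared these annual series visually.
# One way to quantify such comparisons is a Pearson correlation between
# two yearly series. The counts below are hypothetical, not report data.

def pearson_r(xs, ys):
    """Pearson correlation coefficient for two equal-length annual series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical annual series, 1994-2003
enforcement_actions = [95, 70, 55, 40, 30, 25, 45, 60, 80, 90]
inspections = [730, 900, 1100, 950, 800, 730, 850, 950, 1050, 1120]

print(round(pearson_r(enforcement_actions, inspections), 2))
```

A coefficient near zero would be consistent with the report's finding of no strong association between a factor and OPS's enforcement activity; a coefficient near 1 or -1 would suggest the series move together.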
To address other issues required under the 2002 pipeline safety act and additional issues of interest to you, we examined (1) how OPS’s enforcement policies and procedures have changed since 1990, (2) whether OPS substitutes corrective action orders for civil penalties, and (3) how OPS’s policies and enforcement actions compare with those of FAA. OPS’s enforcement approach has evolved as its policies and procedures have changed. OPS policies have gone through three phases since 1990: (1) the standard inspection phase (1990 through 1994), (2) the risk management demonstration phase (1995 through 1999), and (3) the integrity management phase (2000 to the present). Standard inspection phase—During this phase, OPS enforced its minimum safety standards, conducting what it called standard inspections, to ensure that each pipeline operator complied with each pipeline safety regulation. OPS trained inspectors to complete inspection forms that covered all operations, but did not differentiate between high-risk and low-risk requirements. Individual OPS inspectors primarily conducted inspections on a unit basis. OPS used all enforcement options. Risk management demonstration phase—During this phase, OPS still focused most of its resources on enforcing minimum safety standards, but it also began to encourage individual operators to focus their resources on the greatest risks to their pipeline systems. OPS also began to use teams of OPS inspectors to evaluate an operator’s entire pipeline system. The inspection goal was to determine whether the operator had any systemic safety issues that it needed to address. OPS emphasized partnering with the pipeline operators to improve pipeline safety rather than punishing noncompliance. As a result, OPS issued fewer civil penalties and more administrative actions to address noncompliance. 
Integrity management phase—In this phase, OPS shifted its focus from enforcing minimum safety standards to more comprehensive inspections of pipeline operators, known as the integrity management program. As a result, OPS conducted fewer inspections because each inspection took more time and covered more miles of pipeline than a standard inspection. However, the integrity management inspections identified more violations than standard inspections of the long-established minimum safety standards would have. OPS concentrated its enforcement actions on ensuring that operators’ risk identification and mitigation procedures were sufficient and relied primarily on its notices of amendment to give operators experience in implementing the complex regulations. More recently, OPS has begun to propose civil penalties. OPS is also developing a new enforcement tool—the safety order—that encourages operators to take action to remedy safety-related conditions. OPS plans to use the safety order to direct an operator to remedy safety-related conditions that could significantly change or restrict pipeline operations; however, unlike the conditions identified in a corrective action order, these would be conditions that did not pose an immediate threat to life or property. Major pipeline failures in Bellingham, Washington, in June 1999 and in Carlsbad, New Mexico, in August 2000, as well as reports by the Department of Transportation’s Inspector General and by us, led OPS to abandon its partnering approach in favor of what it termed a “tough but fair” enforcement approach. According to OPS, the agency does not substitute corrective action orders for civil penalties because it levies the two for different reasons. OPS imposes a corrective action order on a pipeline operator when it finds a situation that presents an imminent hazard to life or property that needs to be addressed. 
OPS does not have to find that the operator has violated its regulations before issuing a corrective action order. For example, earlier this year, OPS issued a corrective action order that directed an operator to reduce operating pressure when OPS could not determine the cause of a pipeline failure. In contrast, civil penalties are not used to correct the underlying safety violations. Rather, OPS uses civil penalties as sanctions for violating federal pipeline safety regulations and uses other enforcement tools, such as corrective action orders, to correct safety violations. For example, in 2001, OPS assessed a $37,500 civil penalty against an operator that did not follow its own procedures during an annual test of the company’s emergency shutdown system. OPS used the penalty as a sanction and took no further action. In this case, the operator already had a procedure in place but did not follow it. If OPS had found that the operator’s procedures were inadequate, it could have both issued a notice of amendment requiring the operator to bring its procedures in line with OPS’s regulations and imposed a civil penalty as a sanction for having inadequate procedures. Both OPS and FAA, which oversees the aviation industry, use civil penalties, among other enforcement actions, to deter noncompliance with their safety regulations. While OPS regulates and may issue civil penalties to operators of pipeline systems, FAA regulates and may issue civil penalties to aircraft operators, airports, and individuals involved in air transport, such as pilots and mechanics. Both OPS’s and FAA’s processes for issuing civil penalties allow for due process, through opportunities afforded to regulated entities to present evidence that may lead the regulator to reduce or withdraw the penalties. (See table 5.) The maximum penalties that each agency can impose differ significantly. 
OPS may impose penalties for pipeline operators up to a statutory maximum of $100,000 per day per violation up to a statutory maximum of $1 million per case, whereas FAA may impose much smaller penalties for aircraft operators—up to $11,000 per violation of federal aviation regulations or up to $30,000 per violation of RSPA’s hazardous materials regulations. However, FAA has no statutory maximum penalty per case. Another difference between the two agencies’ civil penalty processes is that FAA has more detailed guidance on setting penalty amounts than OPS. FAA’s guidance lists types of violations with a corresponding range of civil penalty amounts for each and also lists factors that should be considered in setting the penalty level. OPS considers broad statutory factors in determining civil penalty amounts and is developing more detailed guidance for making these determinations as part of its efforts to develop an enforcement policy and detailed internal guidelines that reflect its current enforcement strategy. FAA issues many more civil penalties each year than OPS. (See table 6.) From 1994 through 2002, FAA issued more than 10 times as many civil penalties to aircraft operators as OPS issued to pipeline operators. However, the average civil penalties that FAA assessed to the aircraft operators and that we included in our analyses were lower than the average civil penalties that OPS assessed to pipeline operators ($14,100 versus $21,300). In addition, during this period, FAA reduced the civil penalties it had proposed before assessing them to a much greater degree than did OPS. Specifically, the total assessed penalties that FAA issued to aircraft operators were 59 percent lower than the total proposed penalties ($34.7 million versus $84.0 million), whereas the total assessed penalties that OPS issued to pipeline operators were 19 percent lower than the total proposed penalties ($4.2 million versus $5.2 million). 
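The percentage reductions cited above follow directly from the proposed and assessed dollar totals. A minimal sketch of the arithmetic, using the report's figures in millions of dollars:

```python
# Arithmetic behind the reductions cited in the text: the percentage by
# which total assessed penalties fell below total proposed penalties.

def percent_reduction(proposed, assessed):
    """Percent by which the assessed total fell short of the proposed total."""
    return 100 * (proposed - assessed) / proposed

# Report figures, 1994-2002, in millions of dollars
faa_cut = percent_reduction(proposed=84.0, assessed=34.7)
ops_cut = percent_reduction(proposed=5.2, assessed=4.2)

print(round(faa_cut))  # 59, matching the 59 percent cited for FAA
print(round(ops_cut))  # 19, matching the 19 percent cited for OPS
```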
Our comparison of OPS’s and FAA’s use of civil penalties was designed to provide some descriptive information but not to evaluate the two agencies’ use of these penalties or to investigate the reasons for any differences. In May 2000, we made three recommendations to the Secretary of Transportation to improve OPS’s pipeline safety program. Two of the recommendations proposed wider use of interstate agents and the third dealt with OPS’s use of civil penalties. We found that OPS implemented two of these recommendations and implemented the intent of the third. In response to our recommendation that OPS work with state pipeline safety officials to determine how best to involve them in federal pipeline safety activities, OPS told us that it had modified its interstate pipeline oversight program to allow more opportunities for state participation. OPS informed us that the 11 qualified states may inspect the construction of new pipelines, oversee rehabilitation projects and integrity management programs, investigate accidents, conduct inspections, and participate in nonregulatory program initiatives. In addition, according to OPS, states that do not qualify as interstate agents may apply to participate in specific, short-term activities such as inspecting the construction of a new pipeline or investigating a pipeline accident. We contacted all 11 interstate agents to determine the extent to which they participate with OPS in implementing federal pipeline safety efforts. Ten of the 11 interstate agents told us that OPS was implementing our recommendation as OPS said it was doing. These 10 states said they assisted OPS by participating in at least one of the activities mentioned above. The eleventh state said that OPS had not changed its method of involving the state; however, this state agreed that communication between the two parties had improved. 
Although nearly all of the 11 interstate agents said that OPS was implementing this recommendation, 7 said OPS was too slow in letting them know which actions it had taken or planned to take in response to the potential noncompliance that the interstate agents had discovered during inspections. For example, two interstate agents commented that once they notified OPS of noncompliant activity, the case seemed to go into what they described as a “black hole”—indicating that they never heard anything else from OPS about the matter. One of these two interstate agents told us that it was very difficult to conduct adequate follow-up inspections (i.e., those conducted at pipeline companies to determine whether previously found problems have been corrected) without knowing what actions, if any, OPS had taken or planned to take. This interstate agent also noted that once, after it alerted OPS to noncompliant activity at one company, it found the same violation 2 years later during the next scheduled inspection cycle. We brought the interstate agents’ concerns to OPS’s attention and, according to the agency, it is now providing interstate agents with information on the actions it took or will take in response to the agents’ notices of noncompliant activity. OPS officials told us that effective November 2003, OPS began disposing of noncompliance cases in writing within 60 days after receiving the interstate agents’ notices of operator noncompliance, as required by the Pipeline Safety Improvement Act of 2002. On the basis of our discussions with OPS and the interstate agents, we believe that OPS has implemented this recommendation. 
In response to our recommendation that OPS allow interstate agents to help review integrity management programs developed by the pipeline companies that operate in their states to ensure that these companies have identified and adequately addressed safety risks to their pipeline systems, OPS told us that it had revised its interstate agent agreements with qualified states to implement this recommendation. In determining the extent to which interstate agents participate with OPS in reviewing integrity management plans, we contacted the six interstate agents that have agreements with OPS under the agency’s hazardous liquid integrity management program. Five of them agreed that OPS was implementing this recommendation as OPS told us it was doing. The one interstate agent that did not believe OPS had implemented this recommendation (the same interstate agent that did not believe OPS had implemented our previously discussed recommendation) told us that while it was allowed to attend and observe one integrity management inspection, it was not allowed to participate—that is, it was not allowed to ask questions during the inspection. According to OPS, this was before RSPA’s Chief Counsel provided interstate agents with verbal guidance stating that states could participate in integrity management inspections held outside their boundaries if the hosting state granted permission. On the basis of our discussions with OPS and the interstate agents, we believe that OPS has implemented this recommendation. In discussing both recommendations, we asked the interstate agents about the degree of partnership between them and OPS. For the first recommendation, 7 of the 11 interstate agents said there was or was close to being a true partnership with OPS. However, 3 others thought that better communication could improve the partnership between the two entities. 
The remaining interstate agent thought that by acknowledging the pipeline safety expertise that interstate agents acquired under the states’ intrastate pipeline programs, OPS could also improve the partnership between the two parties. For the second recommendation, 4 of the 6 interstate agents said there was or was close to being a true partnership with OPS. One of the 4 interstate agents thought the partnership could improve if OPS gave more advance notice so that interstate agents could make travel arrangements to attend integrity management inspections. The fifth interstate agent told us it did not want to offer an opinion on whether it thought a partnership with OPS existed. It wanted an opportunity to work with OPS on implementing integrity management requirements for natural gas. The remaining interstate agent (the same interstate agent that did not believe OPS had implemented either of these recommendations) thought there was no partnership with OPS because the agent had been allowed only to observe the integrity management inspections. We recommended that the Secretary of Transportation require that OPS determine whether its reduced use of civil penalties has maintained, improved, or decreased compliance with pipeline safety regulations. OPS said that it could not determine the impact of its reduced use of civil penalties on compliance because it did not have sufficient data to do so. The agency concluded that its decreased reliance on civil penalties did not allow it to adequately address safety concerns and was perceived negatively by the public and Congress. OPS subsequently changed its enforcement policy to make fuller use of its range of enforcement tools, including increasing the number and size of civil penalties. While OPS did not strictly implement our recommendation, its actions to make fuller use of all its enforcement tools adhere to the intent of the recommendation. 
In 1994, at the very end of OPS’s standard inspection phase, OPS issued 42 administrative actions and 95 enforcement actions. (See fig. 7.) From 1995 through 1998, during its partnering phase, OPS increased its use of administrative actions, while issuing fewer civil penalties and non-civil-penalty enforcement actions. After 1998, OPS decreased its use of administrative actions. However, the agency did not increase its use of enforcement actions until 2000 when it began its “tough but fair” phase. (See app. II for more information on OPS’s policy phases.) The primary influence on trends in OPS’s enforcement and administrative actions has been changes in OPS’s enforcement policies. These policy changes coincided with changes in OPS’s leadership. Other factors that contributed to OPS’s policy changes and ultimately influenced trends in the agency’s enforcement and administrative actions were two serious pipeline accidents and reports on them from the Department of Transportation’s Inspector General and from us on improvements needed in OPS’s pipeline safety program. To explore whether there were other possible explanations for the trends in enforcement and administrative actions, we analyzed trends in pipeline accidents, pipeline mileage, OPS’s inspection activities, natural gas and petroleum consumption, and new construction and compared them with the trends in OPS’s enforcement and administrative actions. We found that none of these data series appear to be strongly associated with the trends in OPS’s enforcement and administrative actions. OPS’s use of enforcement and administrative actions has evolved with changes in the agency’s enforcement policies and leadership. 
As discussed in appendix II, OPS’s enforcement policies have gone through three phases since 1990: (1) the standard inspection phase (1990 through 1994), which emphasized across-the-board compliance; (2) the risk management demonstration phase (1995 through 1999), which focused on partnering with the industry to address the highest risks; and (3) the integrity management phase (2000 to the present), which continued to focus on the highest risks but also took a “tough but fair” approach to enforcement. OPS officials told us that the enforcement policy changes reflected in these three phases have been the primary influence on the trends in OPS’s enforcement and administrative actions. In addition, we observed that these policy phases appear to coincide with changes in OPS’s leadership; new associate administrators came on board in 1995 and mid-2000. Finally, according to OPS officials, the agency’s latest policy change was also influenced by two major pipeline accidents in 1999 and 2000 that focused public and congressional attention on pipeline safety and led to the previously cited reports by the Department of Transportation’s Inspector General and by us. The changes in OPS’s enforcement policies and leadership roughly parallel the trends in OPS’s enforcement actions. (See fig. 7 in app. III.) As previously discussed, in 1994, at the very end of OPS’s standard inspection phase, OPS issued 42 administrative actions and 95 enforcement actions. From 1995 through 1998, during its partnering phase, OPS increased its use of administrative actions, while decreasing its use of civil penalties and other enforcement actions. After 1998, OPS’s use of administrative actions decreased. In 2000, when OPS initiated its “tough but fair” phase, its use of enforcement actions started to rise. 
Long-term trends in the numbers of serious pipeline accidents and in the accident rate for interstate hazardous liquid pipelines do not appear to be associated with trends in OPS’s enforcement and administrative actions. As discussed earlier in this report, trends in the numbers of serious accidents for interstate natural gas and hazardous liquid pipelines were mixed from 1994 through 2003. (See figs. 2 and 3.) These trends do not parallel the wide fluctuations in the numbers of enforcement and administrative actions that OPS took during the same period. Over this same period, the accident rate for interstate hazardous liquid pipelines—that is, the number of serious accidents per billion ton-miles of hazardous liquids shipped—decreased, while the numbers of OPS enforcement and administrative actions fluctuated. For the number of all pipeline accidents per 10,000 miles of pipeline (a metric that does not account for the volume of products shipped), there appears to be some association with OPS’s enforcement actions but no association with OPS’s administrative actions. (See fig. 8.) This metric, like the standard inspections that OPS conducted from 1990 through 1994, does not take risk into account; it considers only the mileage of pipelines in place, not the amounts of products shipped—or, by implication, the risks involved in shipping them. The number of accidents per 10,000 miles of pipeline increased somewhat steadily during the period of our review, growing by almost 50 percent, from about 2.2 accidents per 10,000 miles of interstate pipeline in 1994 to 3.3 such accidents in 2002 (the latest data available). At least through 2000, the number of accidents per 10,000 miles of interstate pipeline and the number of OPS enforcement actions moved together. 
This parallel movement might suggest, if all else were equal, that OPS primarily took enforcement actions when serious accidents occurred. However, most OPS enforcement actions were the result of its routine inspections, not of accident investigations. The number of accidents per 10,000 miles of interstate pipeline does not appear to coincide with the number of OPS administrative actions during this period. The trend in the number of serious interstate pipeline accidents (those causing a fatality, an injury, or $50,000 or more in property damage) per 10,000 miles of pipeline, like the trend in the number of all pipeline accidents, appears to parallel the trend in OPS’s enforcement actions, but not the trend in OPS’s administrative actions. (See fig. 9.) During the period of our review, the number of serious interstate pipeline accidents rose by more than 6 percent, from 156 in 1994 to 166 in 2003, and generally followed the same pattern as the number of enforcement actions. However, for serious accidents, as for all accidents, there does not appear to be a logical connection between these trends. Furthermore, for serious accidents, as for all accidents, the trends in the number of accidents and in OPS’s administrative actions do not appear to move together. The trend in the number of pipeline miles since 1994 does not appear to be associated with the trends in OPS’s enforcement and administrative actions. (See fig. 10.) From 1994 through 2002 (the latest data available), the miles of pipeline in the United States increased by almost 11 percent, from almost 2.2 million miles in 1994 to more than 2.4 million miles in 2002. However, the numbers of enforcement and administrative actions that OPS issued during this period varied. 
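The accidents-per-10,000-miles metric discussed above is straightforward to compute. A minimal sketch follows; the accident count used here is a back-of-envelope figure implied by the reported rate and mileage, not a number the report states:

```python
# Computes the accident-rate metric discussed above: accidents per
# 10,000 miles of pipeline. As the text notes, this metric ignores the
# volume of products shipped and, by implication, the risks involved.

def accidents_per_10k_miles(accidents, total_miles):
    return accidents / (total_miles / 10_000)

# 1994: roughly 2.2 million miles of pipeline. An accident count of 484
# (hypothetical; implied by the reported rate of about 2.2) yields:
rate_1994 = accidents_per_10k_miles(accidents=484, total_miles=2_200_000)
print(rate_1994)  # 2.2
```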
Our analysis shows no strong apparent association between the number of OPS inspectors on line, the number of inspections conducted by OPS inspectors, or the number of days OPS inspectors spent away from the office conducting inspections and the number of OPS’s enforcement and administrative actions. (See fig. 11.) The number of OPS inspectors on line more than doubled from 1994 through 2003, increasing from 28 in 1994 to 73 in 2003 at a fairly steady rate. Over the same period, the number of OPS’s enforcement and administrative actions fluctuated widely. The number of inspections conducted by OPS inspectors varied from 1994 through 2003. (See fig. 12.) The fewest inspections occurred in 1994 and 1999 (about 730 each year), and the most took place in 1996 (almost 1,100) and 2003 (about 1,120). These changes in the number of inspections do not appear to be associated with the trends in OPS’s enforcement and administrative actions, which also varied over the same period but often in different directions and at different times. The time OPS inspectors spent conducting inspections does not appear to be associated with the numbers of enforcement and administrative actions they took. (See fig. 13.) The number of days they spent away from the office conducting inspections more than doubled, increasing fairly steadily from more than 1,700 in 1994 to almost 5,300 in 2003, while the numbers of enforcement and administrative actions they took over the same period fluctuated widely. Our analysis points to a possible association between the types of inspections conducted and OPS’s enforcement and administrative activity. The number of standard inspections—those designed to assess compliance with OPS’s minimum safety standards—appears to be loosely associated with the numbers of enforcement actions and administrative actions taken over the period. (See fig. 14.) The number of standard inspections varied from about 580 in 1994 to about 550 in 2003, peaking at 830 in 1996. 
The number of other inspections that OPS conducts, including construction inspections, accident investigations, and integrity management inspections, increased from about 150 in 1994 to about 560 in 2003. For many but not all of the years covered, the numbers of enforcement and administrative actions paralleled the numbers of standard and other inspections, albeit at a later date. One interpretation of this apparent linkage is that changes in inspection activity led to similar but later changes in enforcement activity. (The lag in enforcement activity reflects the time taken for inspectors to interpret their inspection results and gain management approval for any enforcement actions to be taken.) However, this interpretation is not consistent with the data for the entire 10-year period. Trends in natural gas and petroleum consumption do not appear to be associated with trends in OPS’s enforcement and administrative actions. From 1994 through 2002 (latest data available), natural gas consumption increased by 5.7 percent, rising from about 21.2 trillion cubic feet to almost 22.5 trillion cubic feet. (See fig. 15.) This trend is not consistent with the fluctuations in OPS’s enforcement and administrative actions. Petroleum consumption also increased, rising by almost 11 percent at a fairly steady rate, from almost 18 million barrels per day in 1994 to more than 19 million barrels per day through 2002 (latest data available). Over the same period, the number of enforcement and administrative actions that OPS issued varied widely. (See fig. 16.) Historical trends in new construction do not appear to be associated with trends in OPS’s enforcement and administrative actions. Pipeline accidents often result from construction activities, such as excavation. From 1994 through 2003, new construction steadily increased, as measured by Census Bureau indicators of privately owned housing units started and completed. 
For example, the number of new, privately owned housing units started increased by almost 27 percent at a fairly steady rate, from about 1.5 million in 1994 to about 1.8 million in 2003. However, trends in OPS’s enforcement and administrative actions varied widely over the same period. In particular, the trend in the number of OPS’s enforcement actions does not appear to be associated with the trend in the number of new homes started. (See fig. 17.) Similarly, the value of new homes completed does not appear to be associated with trends in OPS’s enforcement activity. (See fig. 18.) From 1994 through 2003, in constant dollars, the annual value of construction put in place increased steadily by almost 23 percent, from about $732 billion in 1994 to about $898 billion in 2003. Over the same period, the number of OPS’s enforcement and administrative actions fluctuated widely. In addition to the above, Jennifer Clayborne, Elizabeth Eisenstadt, Bert Japikse, Judy Guilliams-Tapia, Bonnie Pignatiello Leer, Gail Marnik, and Gregory Wilmoth made key contributions to this report. 
While pipelines are inherently safer to the public than other modes of freight transportation, pipeline accidents involving natural gas and hazardous liquids (such as gasoline) can have serious consequences. For example, a natural gas pipeline rupture near Carlsbad, New Mexico, in 2000 killed 12 people and resulted in $1 million in damages or losses. The Office of Pipeline Safety (OPS) administers the national regulatory program to ensure safe pipeline transportation. When safety problems are found, OPS uses its enforcement program as one means to do so. This study reports on (1) the effectiveness of OPS's enforcement strategy and (2) OPS's actions for assessing monetary sanctions (civil penalties), among other things. The effectiveness of OPS's enforcement strategy cannot be determined because the agency has not set goals for its enforcement program, fully defined its strategy, or established performance measures linked to goals that would allow an assessment of results. These are key elements of effective management. Without these elements, the agency cannot determine whether recent changes in its strategy are having the desired effects on pipeline safety. Over the past several years, OPS has placed priority on other areas--developing a new risk-based regulatory approach--and it believes these efforts will change the safety culture of the industry. OPS now intends to devote more attention to strengthening the management of the agency's enforcement program. In particular, OPS is developing an enforcement policy that will help define its enforcement strategy and has made some initial steps toward identifying new performance measures. However, OPS does not anticipate finalizing such a policy until sometime during 2005, and it lacks a systematic approach for incorporating some of the key practices identified for achieving successful performance measurement systems. 
OPS has increased both the number and the size of the penalties it has assessed against pipeline operators over the last 4 years (2000 through 2003) following its decision to be "tough but fair" in assessing penalties. During this period, OPS assessed an average of 22 penalties per year, compared with an average of 14 per year for the previous 5 years (1995 through 1999), a period of more lenient enforcement. In addition, the average penalty amount increased from $18,000 to $29,000 over the two periods. While civil penalty use and size have increased, it is not clear whether these increases will help deter noncompliance with the agency's safety regulations. Stakeholders expressed differing views: some thought that any penalty had a deterrent effect if it kept the pipeline operator in the public eye, while others told us that the penalties were too small to be effective sanctions. About 94 percent of the 216 penalties levied from 1994 through 2003 have been paid. However, OPS lacks effective management controls to assure that penalties are collected. For example, OPS does not routinely inform its collection agent of penalties it has assessed.
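The penalty trends above lend themselves to a quick arithmetic check. The sketch below uses only figures reported in the text (per-year penalty counts, average penalty amounts, and the payment rate); the derived percentages are ours, not figures from this report.

```python
# Quick arithmetic check of the OPS civil penalty figures reported above.
# All inputs come from the text; the derived percentages are our own.

avg_penalties_2000_2003 = 22   # penalties assessed per year, 2000-2003
avg_penalties_1995_1999 = 14   # penalties assessed per year, 1995-1999
avg_amount_recent = 29_000     # average penalty, 2000-2003 ($)
avg_amount_prior = 18_000      # average penalty, 1995-1999 ($)

# Growth in enforcement activity and penalty size between the two periods
count_increase = (avg_penalties_2000_2003 - avg_penalties_1995_1999) / avg_penalties_1995_1999
amount_increase = (avg_amount_recent - avg_amount_prior) / avg_amount_prior
print(f"Penalties per year up {count_increase:.0%}, average amount up {amount_increase:.0%}")

# Collections: about 94 percent of the 216 penalties levied 1994-2003 were paid
penalties_levied = 216
paid = round(0.94 * penalties_levied)
print(f"Roughly {paid} of {penalties_levied} penalties paid; {penalties_levied - paid} unresolved")
```

The check implies penalty counts rose by roughly half and average amounts by roughly 60 percent between the two periods, while about a dozen penalties remained uncollected.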
In its July 1999 Morton decision, the U.S. Court of Appeals for Veterans Claims ruled that the VA did not have a duty to assist in developing claims unless they were “well-grounded” as required by federal statute. Prior to this court decision, VA policy was to assist claimants in developing a well-grounded claim. This practice, however, was not required by law, and VBA regional offices varied in the amount of assistance they provided. The VCAA (P.L. 106-475), commonly referred to as the “duty to assist” law, was enacted in November 2000. This law repealed the requirement that claims be well-grounded and it obligated VA to assist a claimant in obtaining evidence that is necessary to establish eligibility for the benefit being sought. VCAA requires VBA to take specific steps to assist claimants once they have filed a complete claim for benefits. Specifically, the VCAA requires VBA to: (1) notify claimants of the information necessary to complete the application; (2) indicate what information not previously provided is needed to prove the claim, and distinguish between the portion of the information for which the claimant will be responsible and the portion for which VA will be responsible; (3) make reasonable efforts to assist claimants in obtaining evidence to substantiate claimants’ eligibility for benefits, including relevant records; and (4) inform claimants when relevant records are unable to be obtained. The VCAA also allowed for the re-adjudication of claims denied as not well-grounded between the date of the Morton decision, July 14, 1999, and the effective date of the VCAA, November 9, 2000. The act stated that this rework could be done at the veteran’s request or on VBA’s initiative. VBA decided to review all such claims and perform any necessary work, such as sending additional notifications or making new rating decisions. 
The compensation program pays monthly benefits to veterans who have service-connected disabilities (injuries or diseases incurred or aggravated while on active military duty). The pension program pays monthly benefits based on financial need to wartime veterans who have low incomes and are permanently and totally disabled for reasons not service-connected. VA expects to provide about $25 billion in compensation and pension benefits in fiscal year 2002 to over 3 million veterans and their dependents and survivors. Disability compensation benefits are graduated in 10 percent increments based on the degree of disability from 0 percent to 100 percent. Eligibility and priority for other VA benefits and services such as health care and vocational rehabilitation are affected by these VA disability ratings. Basic monthly payments range from $103 for 10 percent disability to $2,163 for 100 percent disability. Generally, veterans do not receive compensation for disabilities rated at 0 percent. About 65 percent of veterans receiving disability compensation have disabilities rated at 30 percent or lower; about 8 percent have disabilities rated at 100 percent. The most common impairments for veterans who began receiving compensation in fiscal year 2000 were skeletal conditions; tinnitus; auditory acuity impairment rated at 0 percent; arthritis due to trauma; scars; and post-traumatic stress disorder. Veterans may submit claims to any one of VBA’s 57 regional offices. To develop veterans’ claims, veterans service representatives at the regional offices request and obtain the necessary information to evaluate the claims. This includes veterans’ military service records; medical examinations and treatment records from VA medical facilities; and treatment records from private providers. 
Once claims are developed and “ready to rate,” rating veterans service representatives (hereafter referred to as rating specialists) evaluate the claimed disabilities and assign ratings based on degree of disability. Veterans with multiple disabilities receive a single, composite rating. For veterans claiming pension eligibility, the regional office determines if the veteran served in a period of war, is permanently and totally disabled for reasons not service-connected, and meets the income thresholds for eligibility. If a veteran disagrees with the regional office’s decision, he or she can ask for a review of that decision or appeal to VA’s Board of Veterans’ Appeals (BVA). BVA makes the final decision on such appeals and can grant benefits, deny benefits, or remand (return) the case to the regional office for further development and reconsideration. After reconsidering a remanded decision, the regional office either grants the claim or returns it to BVA for a final VA decision. If the veteran disagrees with BVA’s decision, he or she may appeal to the U.S. Court of Appeals for Veterans Claims. If either the veteran or VA disagrees with the court’s decision, they may appeal to the U.S. Court of Appeals for the Federal Circuit. In fiscal year 1999, VBA implemented the Systematic Technical Accuracy Review (STAR) system to measure the accuracy of its claims processing for its rating-related work. Under the STAR system, VBA considers a claim to have been processed accurately if the regional office determines basic eligibility correctly, obtains all required medical and nonmedical documentary evidence, decides service-connection correctly, gives the correct rating to each impairment, determines the correct payment amount, and properly notifies the veteran of the outcome of his or her claim. If a claim has any errors in any of these areas, VBA counts the entire claim as incorrect for accuracy rate computation purposes. 
For the nation as a whole, VBA reported an accuracy rate of 81 percent for fiscal year 2001. VBA’s goal for fiscal year 2002 is 85 percent, and its strategic goal is to achieve a national accuracy rate of 96 percent by fiscal year 2006. Beginning with fiscal year 2002, VBA has revised its accuracy measure to focus on whether regional office decisions to grant or deny are correct. Prior to this change, VA’s accuracy rate included whether decisions to grant or deny claims were correct and also included errors stemming from procedural and technical issues, such as failure to include all the documentation in the case file. This revision to VBA’s quality assurance program for compensation claims processing is consistent with recommendations made by the VA Secretary’s 2001 Claims Processing Task Force. Issues related to benefit entitlement decisions would be the basis for future revision based on clear and unmistakable error or would result in a BVA remand if not otherwise corrected during the appeal process. To implement the VCAA, VBA has issued guidance, obtained and responded to regional office staff questions, conducted an informal review of cases, and issued clarifying instructions based on the questions it received and the results of its review. To better hold regional offices accountable for proper implementation, VBA revised its quality assurance system to reflect the VCAA requirements. However, recent quality reviews show that VCAA requirements are not always being met. Though VBA does not know the underlying reason why regional offices may not be meeting VCAA requirements, it has attempted to correct the implementation deficiencies by requiring regional office managers to certify that staff had read and understood VCAA guidance. On October 19, 2000, in anticipation of enactment of the VCAA, VBA instructed regional offices to stop denying claims as not well-grounded under the Morton decision. 
On November 17, 2000—8 days after the VCAA’s enactment—VBA issued its first VCAA implementation guidance and rescinded guidance on implementing Morton. This guidance was provided pending the revision of VA’s adjudication regulations to conform with the VCAA. To clarify and supplement its initial guidance, VBA issued several other guidance letters through February 2001. VBA supplemented this written guidance with teleconferences and questions and answers posted on VBA’s Intranet site. This guidance covered the development and adjudication of claims (1) denied as not well-grounded under Morton, (2) pending when the VCAA was enacted, and (3) received after the law was enacted. The guidance also covered the handling of appealed claims. In February 2001, VBA issued guidance for the review of about 98,000 claims that regional offices had previously denied as not well-grounded under Morton. VBA required regional offices to complete reviews of these claims by October 1, 2001; it later extended this deadline to December 31, 2001. Where a new decision was required, regional offices were to follow the VCAA guidance on notifications to veterans and claims development. This included sending “duty to assist” letters to veterans requesting any additional evidence the veterans may have to substantiate their claims; developing any previously or newly identified evidence; obtaining medical examinations, if appropriate; and making a new rating decision. If the veteran did not respond to the regional office’s request for information within 60 days, VBA could deny the claim again for lack of evidence. As of the end of March 2002, VBA had completed about 81 percent of its reviews. 
The areas in which VBA clarified and supplemented its initial guidance included: (1) requesting VHA medical examinations and medical opinions; (2) pursuing records from federal agencies and private providers; and (3) notifying veterans, including requests for evidence, and notifications that VBA was unable to obtain identified evidence. For example, in response to staff questions about the criteria for scheduling medical exams and requesting medical opinions, VBA advised that medical exams should be scheduled unless it is absolutely clear that no relation exists between the veteran’s current disability and military service. Also, in response to staff questions on what to do if federal and private provider records are unavailable, VBA advised that regional offices needed positive confirmation that federal records do not exist. Regional offices also asked if they needed to develop all claims denied under Morton as not well-grounded or simply re-rate the claims without performing additional development. VBA responded that for all such claims that required readjudication, VBA must develop the claim in accordance with the VCAA’s requirements. Furthermore, VBA provided templates for VCAA development letters. Veterans Service Organization (VSO) officials we spoke with at the regional offices we visited expressed concerns about the clarity and necessity of VCAA pre-decision notification letters. They said that some veterans did not understand why they were receiving the letters—particularly if they had already responded to previous VBA letters requesting evidence. Also, the officials said that the letters were not always clear and were often not tailored to the circumstances of individual veterans’ claims. We reported in April 2002 that 43 percent of our sample of development letters did not clearly explain the actions that claimants were to take to support their claims. 
We recommended that VBA eliminate deficiencies in its development letter to clarify the actions that the claimant should take to substantiate a claim. In response to our recommendations, VBA agreed to revise its development letter. In an effort to assess the impact of VCAA on the outcome of claims and to assess regional office compliance with VCAA, VBA conducted an informal review in the summer and fall of 2001 of claims that had been denied as not well-grounded under Morton. VBA found that its VCAA implementation instructions had not been followed in some of the cases it sampled. In particular, the letters notifying the veteran of necessary evidence were not being sent in about 20 percent of the cases. As a result of this study, VBA issued instructions in August 2001 that emphasized the need to follow the previous written guidance, particularly the need to fully and completely develop claims. This included providing notice to the veteran of any additional evidence needed, pursuing records from federal agencies and private providers, and obtaining medical examinations when needed to make a decision on the claim. VBA noted that failure to take these actions would cause STAR reviewers to find the claim to be in error and could serve as a basis for BVA to remand the claim, if appealed. To ensure accountability by regional offices and their claims processing staffs for VCAA compliance, VBA has incorporated the requirements into its STAR quality assurance review checklists. These revised checklists—which began to be used to review claims decisions made in October 2001—include two specific VCAA-related questions: (1) Was VCAA pre-decision notice provided and adequate? and (2) Does the record show VCAA-compliant development to obtain all indicated evidence (including a VA exam, if required) prior to deciding the claim? Early fiscal year 2002 data show that benefit entitlement errors are still occurring because of VCAA implementation errors. 
Of the STAR sample of 830 rating-related decisions made from October 2001 through January 2002, the overall accuracy rate—under VBA’s new standard, which focuses on the accuracy of the decision on entitlement to benefits—was 71 percent. VBA found that about half (142 of 288) of the entitlement decision errors involved noncompliance with VBA’s guidance on the VCAA. Of these errors, 60 involved a pre-decision notice that was not adequate or not provided at all and 82 showed that not all indicated evidence was obtained as required. VBA considered the error rate for VCAA compliance to be significant enough that in April 2002 it asked regional offices to “retrain” staff on the VCAA guidance and certify that the staff have read and understand the guidance by the end of April 2002. As of May 7, 2002, 56 of the 57 offices had certified that their staff had read and understood the guidance. Although ensuring that staff have read and understand the guidance is a positive step, this may not be enough. VBA had already issued a series of implementing guidance letters to answer staff questions and to reinforce guidance prior to the STAR review. However, the STAR review showed that regional offices continued to experience problems with implementation. VBA has not determined the reasons why the regional offices are not properly implementing the VCAA. VBA is managing the slowdown in case processing by attempting to significantly increase regional offices’ rating decision production. VCAA contributed to the slowdown in claims processing because VBA reworked many claims based on the VCAA’s new requirements and because new claims must also be processed under these more time-consuming requirements. VBA has set production and inventory goals for fiscal year 2002, which it believes will put it on track to reduce the average time to process claims to 100 days by the end of fiscal year 2003. 
Although VBA has made some progress in increasing production, it faces challenges in meeting these production and inventory goals. Monthly production will need to significantly increase in the second half of the fiscal year if VBA is to meet its goal for the year. Even if VBA achieves its production and inventory goals, it still faces additional challenges to achieving its end of fiscal year 2003 goal of processing claims in an average of 100 days. VBA attributes a significant part of the increase in pending claims inventory in fiscal year 2001, and the associated increase in claims processing times, to the VCAA’s impact. According to VBA, the VCAA added to the inventory because of the need to rework many claims. VBA also believes that VCAA will lengthen the processing time of new claims, but could not quantify the extent. Several other factors, such as the addition of diabetes as a presumptive service-connected disability for veterans who served in Vietnam, the implementation of VBA’s new claims processing software, and the hiring and training of a large number of staff, also impacted VBA’s workload and production in fiscal year 2001. As shown in table 1, VBA received about 95,000 more claims and produced about 120,000 fewer claims decisions in fiscal year 2001 than in the prior fiscal year. The VCAA contributed to VBA receiving more claims in fiscal year 2001 than the prior fiscal year. The VCAA required VA, if requested by a veteran, to readjudicate claims that were denied as not well-grounded under the Morton decision. It also allowed VA to readjudicate these claims on its own initiative. VBA undertook a review of about 98,000 veterans’ disability claims that it had identified as previously denied as not well-grounded. In addition, VBA had an inventory of about 244,000 rating-related claims pending when the VCAA was enacted in November 2000. VBA decided to review these claims to ensure that VCAA requirements were met. 
VBA had completed about 64,000 of these claims as of April 29, 2002. In addition to the VCAA, VBA has cited other factors as contributing to the increase in its claims inventory. For example, the recent addition of diabetes as a presumptive service-connected disability for veterans who served in Vietnam has caused an influx of new disability claims. By the end of fiscal year 2003, VBA expects to have received 197,500 diabetes claims. The addition of new claims processing staff during fiscal year 2001 has also temporarily hampered the productivity of experienced staff. According to officials at some of the regional offices we visited, experienced rating specialists had less time to spend on rating work because they were helping train and mentor new rating specialists. The learning curve and implementation difficulties with VBA’s new automated rating preparation system (Rating Board Automation 2000) also hampered regional offices’ productivity. Furthermore, the VCAA has significantly impacted VBA’s work processes. According to VA officials, the most significant change is the requirement to fully develop claims even in the absence of evidence showing a current disability or a link to military service. Under Morton, if a veteran could not provide enough information to show that the claim was plausible, VBA could deny the claim as not well-grounded. These claims must now be developed and evaluated under the expanded procedures required by the VCAA. For example, officials at one regional office we visited noted that they are requesting more medical examinations than they did before the VCAA was enacted. Also, time can be added in waiting for evidence. For example, VBA must make repeated efforts to obtain evidence from federal agencies—stopping only when the agency certifies that the record does not exist, or VBA determines that further efforts to obtain the evidence would be futile. 
VBA is addressing its claims processing slowdown by taking steps to increase production and reduce its claims inventory. VBA believes that it will be able to reduce its inventory to a level that will enable it to process cases in an average of 100 days by the end of fiscal year 2003. Specifically, VBA has established an end of fiscal year 2002 inventory goal of about 316,000 claims. To meet this goal, VBA plans to complete about 839,000 rating-related claims during the fiscal year. The regional offices are expected to complete about 792,000 of these claims. This level of production is greater than VBA has achieved in any of the last 5 fiscal years—as shown in table 1, VBA’s peak production was about 702,000 claims in fiscal year 1997. However, VBA has significantly more rating staff now than it did in any of the previous 5 fiscal years. VBA’s rating staff has increased by about 50 percent since fiscal year 1997 to 1,753. To reach VBA’s fiscal year 2002 production goal, rating specialists will need to complete an average of about 2.5 cases per day—a level VBA achieved in fiscal year 1999. VBA expects this production level to enable it to achieve its end-of-year inventory goal of about 316,000 rating-related claims, which VBA believes would put the agency on track to meet the Secretary’s inventory goal of 250,000 cases by the end of fiscal year 2003. To meet its production goal, in December 2001, VBA allocated its fiscal year 2002 national production target to its regional offices based on each regional office’s capacity to produce rating-related claims given each office’s number of rating staff and their experience levels. For example, an office with 5 percent of the national production capacity received 5 percent of the national production target. In February 2002, VBA revised how it allocated the monthly production targets to its regional offices based on input from regional offices regarding their current staffing levels. 
In allocating the target, VBA considered each regional office’s fiscal year 2001 claims receipt levels, production capacity, and actual production in the first quarter of fiscal year 2002. In March 2001, VBA allowed regional offices to suspend or alter several VBA initiatives in order to increase production. Offices were allowed to revert to an earlier version of VBA’s Rating Board Automation (RBA) software for ratings where the new software (RBA 2000) was significantly impeding productivity. In an effort to increase rating decision output while VBA continued its training of new rating specialists, offices were directed to have their decision review officers—who handle veterans’ appeals of regional office decisions—spend half their time rating claims. Also, offices were given latitude to vary from VBA’s case management principles, under which claims processing teams handle most types of claims, and realign staff to perform specialized processing of certain types of claims. To hold regional office managers accountable, VBA incorporated specific regional office production goals into regional office performance standards. For fiscal year 2002, regional office directors are expected to meet their annual production target or their monthly targets in 9 out of 12 months. Generally, the combined monthly targets for the regional offices increase as the year progresses and as the many new rating specialists hired in previous years gain experience and become fully proficient claims processors. At the same time as it is expecting regional offices to complete more claims, VBA has implemented two initiatives to expedite claim decisions and supplement regional office capacity. In October 2001, VBA established the Tiger Team at its Cleveland Regional Office to expedite decisions on claims by veterans aged 70 and older and clear from the inventory claims that have been pending for over a year. 
The Tiger Team relies on 17 experienced rating specialists, complemented by a staff of veterans service representatives. The Tiger Team also relies on expedited access to evidence needed to complete claims development. For example, VA and the National Archives and Records Administration completed a Memorandum of Understanding in October 2001 to expedite Tiger Team requests for service records at the National Personnel Records Center (NPRC) in St. Louis, Missouri. Also, VBA and the Veterans Health Administration (VHA) established procedures and timeframes for expediting Tiger Team requests for medical evidence and examinations. As of the end of May 2002, the Tiger Team had completed about 10,000 claims requested from 49 regional offices. From December 2001 through May 2002, the team’s production exceeded its goal of 1,328 decisions per month. According to Tiger Team officials, its experienced rating specialists were averaging about 4 completed ratings per day. Officials added that in the short term, completing old claims might increase VBA’s average time to complete decisions. VBA also established nine Resource Centers to supplement regional offices’ rating capacity. The Resource Centers receive claims from nearby regional offices that are “ready to rate,” but which are awaiting decisions. From October 2001 through May 2002, the Resource Centers had completed about 22,000 ratings. The Tiger Team and Resource Centers are expected to complete 47,000 claim decisions in fiscal year 2002; as of the end of May 2002, they had completed about 32,000 decisions. VBA’s ability to achieve this increase in production, and reduction in inventory, depends on (1) increasing productivity of new claims processing staff over the second half of fiscal year 2002 and (2) receipts being consistent with projected levels. 
VBA’s monthly goals for fiscal year 2002 assume that its large number of new rating specialists will become more productive, with additional experience and training, as the fiscal year progresses. However, VBA lacks historical data on the productivity of staff by experience level. Meanwhile, receipts of new claims must not exceed VBA’s projections. VBA received about 359,000 rating-related claims--about 3,000 fewer than projected--in the first half of fiscal year 2002. However, an unexpected surge in receipts could mean that, even if VBA achieved its production goal for the fiscal year, it might not meet its inventory goal. External factors beyond VBA’s control, such as the decisions made by the U.S. Court of Appeals for Veterans Claims, could affect VBA’s workload and its ability to make sustained improvements in performance. As stated in our April 2002 testimony, even if VBA meets its production and inventory goals, it still faces challenges in meeting its 100-day goal. Improving timeliness depends on more than increasing production and reducing inventory. VBA continues to face some of the same challenges that we identified in the past that can lengthen claims processing times. For example, VBA needs to continue to make progress in reducing delays in obtaining evidence, ensuring that it will have enough well-trained staff in the long term, and implementing information systems to help improve claims processing productivity. Figure 1 shows that VBA will need to cut average processing time from 224 days to 100 days by the end of fiscal year 2003. This is less than half its fiscal year 2002 goal and 65 days less than its fiscal year 2003 goal. VBA officials noted that the link between increasing production and improving timeliness is not clear. Thus, the officials could not show how meeting VBA’s production and inventory goals would result in a specific level of timeliness improvement. 
Given this uncertainty, it is possible that VBA could meet its fiscal years 2002 and 2003 production and inventory goals but not meet the 100-day goal. To its credit, VBA has taken a number of steps over the last year and a half to provide guidance to its regional offices on the proper application of the VCAA requirements for both new and pending veterans’ claims. However, despite VBA’s efforts, results from VBA’s quality assurance reviews indicate a decrease in rating accuracy due to regional office noncompliance with VCAA requirements. In an effort to improve rating accuracy, VBA recently instructed regional office management to ensure that all claims processing employees read and understand VCAA-related guidance. But, VBA may need to do more than verify that claims processors have read and understood the VBA guidance. In the past, we have noted that VBA needs better analysis of case-specific data to identify the root causes of claims processing problems and target corrective actions. If VCAA-related accuracy problems continue, VBA will need to determine the underlying causes for the improper implementation as part of its continuing efforts to monitor proper implementation of the VCAA. Without proper implementation of VCAA, some veterans may not receive the benefits to which they are entitled by law. If VBA continues to experience significant problems with implementing the VCAA, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Benefits to identify the causes of the VCAA-related errors so that more specific corrective actions can be taken. We received written comments on a draft of this report from VA (see app. I). In its comments, VA concurred with our recommendation that if VBA continues to experience problems with implementing the VCAA, VBA identify the causes of the VCAA-related errors so that more specific corrective actions can be taken. 
We will send copies of this report to the Secretary of the Department of Veterans Affairs, appropriate congressional committees, and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please call me at (202) 512-7101 or Irene Chu, Assistant Director, at (202) 512-7102. In addition to those named previously, Steve Morris, Corinna Nicolaou, Martin Scire, and Greg Whitney made key contributions to this report.
The Veterans Claims Assistance Act of 2000 was passed in response to concerns expressed by veterans, veterans service organizations, and Congress over a 1999 decision of the U.S. Court of Appeals for Veterans Claims that held that the VA did not have a duty to assist veterans in developing their claims unless they were "well-grounded." The Veterans' Benefits Administration (VBA) has taken a number of steps, including issuing guidance, revising and supplementing this guidance based on questions raised by regional offices, and reinforcing the guidance based on the results of its accuracy reviews. Despite these efforts, VBA has found problems with consistent regional office compliance with the law. While taking steps to implement the act, VBA is also focusing on significantly increasing production and reducing the claims inventory to manage the slowdown in case processing. In fiscal year 2002, VBA plans to complete 839,000 claims to reduce its inventory to 316,000 claims.
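As a rough plausibility check on the Tiger Team figures reported earlier (17 experienced rating specialists averaging about 4 completed ratings per day against a goal of 1,328 decisions per month), the sketch below simply multiplies the reported rates. The workdays-per-month figure is our assumption; the other numbers come from the text.

```python
# Rough consistency check of the Tiger Team throughput figures reported earlier.
# The ~21 workdays-per-month figure is our assumption; other numbers are from the text.

rating_specialists = 17
ratings_per_specialist_per_day = 4   # "about 4 completed ratings per day"
workdays_per_month = 21              # assumed typical workdays in a month

monthly_capacity = rating_specialists * ratings_per_specialist_per_day * workdays_per_month
monthly_goal = 1_328

print(f"Estimated capacity: {monthly_capacity} ratings/month vs goal of {monthly_goal}")
# 17 * 4 * 21 = 1,428 -- above the 1,328 goal, consistent with the report's
# statement that the team exceeded its goal from December 2001 through May 2002.
```

Under these assumptions the team's estimated capacity of about 1,400 ratings per month comfortably exceeds its monthly goal, which is consistent with the production record described above.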
Americans rely on wastewater systems to protect public health and the environment. These systems are composed of a network of pipes and pumps that collect wastewater from homes, businesses, and industries and transport it to treatment facilities where it is treated prior to being discharged to surface waters. Historically, wastewater systems in the United States have been owned and operated by public agencies at the municipal level. In fact, there are about 16,000 publicly owned wastewater treatment plants in the United States, which serve about 97 percent of U.S. residents served by sewers. The remaining 3 percent are served by privately owned wastewater treatment facilities. Laws and regulations applying to wastewater treatment and the financing of wastewater infrastructure often differ based on whether a treatment facility is publicly or privately owned. EPA sets standards for the quality of wastewater that can be discharged under the Clean Water Act. Under this law, the National Pollutant Discharge Elimination System (NPDES) program limits the types and amounts of pollutants that industrial and municipal wastewater treatment facilities may discharge into the nation’s surface waters. Both public and private wastewater treatment facilities discharging into U.S. waters are required to have NPDES permits authorizing their discharges. Generally speaking, municipal wastewater treatment facilities are designed to treat typical household wastes and certain pollutants in commercial and industrial wastes, primarily those identified in the Clean Water Act as conventional pollutants. Municipal facilities, however, may not be designed to treat toxic pollutants, such as heavy metals, which more typically occur in industrial waste streams. 
The Clean Water Act authorized EPA to develop pretreatment standards—implemented as the National Pretreatment Program—to prevent certain pollutants, such as toxics discharged by industries into sewers, from passing through municipal wastewater facilities and into surface waters, or from interfering with the facilities' treatment processes. The National Pretreatment Program regulations require publicly owned wastewater facilities treating more than 5 million gallons of wastewater per day, and receiving certain pollutants from industrial users, to develop pretreatment programs. The regulations further require that municipalities possess adequate authority to require industrial users to pretreat their wastewater before discharging it into sewers. The pretreatment standards do not, however, apply to industrial discharges into privately owned wastewater facilities. Without such standards or a municipal pretreatment program, privately owned wastewater facilities may use alternative mechanisms to ensure that nonconventional waste is properly treated before it enters the sewer system, which, according to EPA, may be more costly and difficult. The Clean Water Act also authorized significant federal construction grants to help municipalities build eligible wastewater treatment facilities. In the 1980s, concerns about the federal deficit, among other factors, led to a transition from these grants to the CWSRF program, which was established in 1987. Under this program, the federal government provides capitalization grants to states, which in turn must match at least 20 percent of the federal grants. The states then use the money to provide generally low-interest loans to fund a variety of water quality projects at the municipal level, and loan repayments are cycled back into the fund to be loaned out for other projects. In 2008, states provided CWSRF loans totaling about $5.8 billion to municipalities and other recipients.
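The revolving mechanics of the CWSRF (a federal capitalization grant, a state match of at least 20 percent, and loan repayments recycled into new loans) can be sketched with a short simulation. All figures below are hypothetical and chosen only to illustrate the structure; they are not actual CWSRF amounts, rates, or terms, and the model ignores defaults, administrative costs, and loans maturing within the horizon.

```python
# Hypothetical sketch of a revolving loan fund: a federal capitalization
# grant plus the required state match (at least 20% of the grant) seeds
# the fund, and loan repayments are recycled into new loans each year.

def revolving_fund(grant, years, loan_term=20, rate=0.02):
    """Return cumulative loans issued over the horizon (all inputs illustrative)."""
    match = 0.20 * grant               # minimum required state match
    available = grant + match          # initial lendable balance
    annual_repayment = 0.0             # level payments collected each year
    total_loaned = 0.0
    for _ in range(years):
        loaned = available             # lend everything on hand
        total_loaned += loaned
        # add this year's loans to the stream of level annual repayments
        annual_repayment += loaned * rate / (1 - (1 + rate) ** -loan_term)
        available = annual_repayment   # repayments fund next year's loans
    return total_loaned

# With a hypothetical $100M grant, cumulative lending exceeds the
# initial $120M federal-plus-state seed within a few cycles.
print(round(revolving_fund(100e6, years=5) / 1e6, 1))
```

Because repayments are re-lent rather than returned to the Treasury, cumulative lending keeps growing over time, which is the defining feature of a revolving fund.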
States can loan CWSRF funds to publicly owned wastewater treatment facilities, but privately owned facilities are generally not eligible for CWSRF loans. The federal government also helps finance wastewater infrastructure by subsidizing municipalities’ use of the bond markets through the tax code. Municipalities sell bonds to investors to gain an up-front sum to use for infrastructure or other purposes; the investors are then paid back over time, with interest. The federal government subsidizes municipalities’ bond issuances by exempting the interest investors earn on these bonds from federal income tax, thus lowering borrowing costs for municipalities. The Congressional Budget Office estimated that the federal subsidy of municipal bonds for all types of infrastructure amounted to $26 billion in foregone tax revenue annually between 2008 and 2012. The federal government restricts the level of private involvement in projects financed by tax-exempt municipal bonds, limiting the extent to which private companies can benefit from the federal subsidy. There are several types of bonds that municipalities can issue to finance publicly owned wastewater infrastructure, including general obligation bonds and revenue bonds. General obligation bonds are backed by the full faith and credit of the issuing municipality, meaning that the municipality pledges to use revenue from taxes to pay back the bond. Municipalities’ capacity to issue general obligation bonds is often limited by state law. In contrast, revenue bonds are backed by the revenue from the facility being constructed with bond proceeds—in the case of wastewater, revenue bonds are usually backed by revenue from sewer rates. In cases where a private company’s involvement in a wastewater facility exceeds thresholds for issuing municipal bonds, the municipality may still be able to issue another type of tax-exempt bond called a qualified private activity bond. 
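The way the tax exemption lowers borrowing costs can be shown with a simple break-even calculation: an investor facing marginal tax rate t nets the same return from a tax-exempt bond yielding r times (1 - t) as from a taxable bond yielding r, so a municipality can offer correspondingly lower interest. The yield and tax rate below are hypothetical, for illustration only.

```python
# Break-even yield on a tax-exempt municipal bond: an investor with
# marginal tax rate t is indifferent between a taxable bond yielding
# r and a tax-exempt bond yielding r * (1 - t).

def breakeven_tax_exempt_yield(taxable_yield, marginal_tax_rate):
    return taxable_yield * (1 - marginal_tax_rate)

taxable = 0.05   # hypothetical 5% yield on a comparable taxable bond
t = 0.35         # hypothetical investor marginal federal tax rate
muni = breakeven_tax_exempt_yield(taxable, t)
print(f"{muni:.4f}")   # 0.0325 -> the municipality can borrow at 3.25%
```

The gap between the two yields (here 1.75 percentage points) is the borrowing-cost reduction funded by the forgone federal tax revenue the Congressional Budget Office estimate refers to.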
The Department of the Treasury limits the volume of private activity bonds that can be issued in each state in a given year; the national limit for calendar year 2010 was $30.86 billion. In order to issue qualified private activity bonds for a wastewater project, a municipality must receive an allocation of private activity bonds from its state, which can be difficult because wastewater projects generally must compete against projects in other sectors, such as affordable housing, education, and health care. Although the federal government contributes significant funds to wastewater infrastructure through the CWSRF and tax code, municipalities have primary responsibility for financing wastewater infrastructure. According to U.S. Census Bureau estimates, in fiscal year 2007 municipalities spent about $43 billion on wastewater operations and capital projects, while states spent about $1.4 billion. Most municipalities pay for wastewater infrastructure improvements with sewer rate revenues and by issuing municipal bonds. A 2005 National Association of Clean Water Agencies survey of 141 utilities serving more than 81 million people asked respondents which sources of revenue they used to pay for capital improvements to wastewater systems. The 75 utilities responding to this question said that 49 percent of revenues supporting capital improvements came from municipal bonds (both revenue bonds and general obligation bonds) and other types of debt, 16 percent from CWSRF loans, 16 percent from user charges such as sewer rates, and 19 percent from other sources. In addition to obtaining funding for new infrastructure, municipalities are also generally responsible for overseeing the planning, design, and construction of wastewater facilities.
Conventionally, wastewater projects follow a design-bid-build approach in which the municipality contracts with separate entities for the discrete functions of a project, generally keeping much of the project responsibility and risk with the public sector. To meet the continuing need for wastewater infrastructure, some municipalities have used alternatives to this design-bid-build procurement approach, including a variety of types of PPPs, which are described in figure 1. In the last 30 years, hundreds of municipalities have entered into PPPs for the operations and maintenance of their wastewater facilities. In addition, some communities have entered into PPPs—often called design-build-operate agreements—in which the private sector designs, constructs, and then operates new wastewater infrastructure for a period of time. PPPs can also be developed to include private financing, which can serve as an alternative to traditional wastewater infrastructure funding sources. Policymakers and wastewater groups have proposed numerous approaches to bridge the potential gap between current levels of federal, state, and local spending and future infrastructure needs. Two such approaches build on traditional ways of financing wastewater infrastructure: increasing funding for the CWSRF and implementing EPA's Sustainable Water Infrastructure Initiative. The CWSRF has seen an increase in funding in recent years, from $689 million in fiscal year 2009 to $2.1 billion in fiscal year 2010. In addition, $4 billion was appropriated to the CWSRF as part of the American Recovery and Reinvestment Act of 2009. EPA's Sustainable Water Infrastructure Initiative encourages wastewater and drinking water utilities to improve the management of their systems, to plan ahead for infrastructure needs, and to charge the full cost of their services—including the costs of building, maintaining, and operating a wastewater system over the long term.
In its 2002 report about the clean water infrastructure gap, EPA noted that if wastewater utilities implemented annual rate increases of 3 percent over inflation over a 20-year period, the infrastructure gap would disappear. In addition, wastewater stakeholders and policymakers have also proposed a number of alternative approaches that could be used to bridge the wastewater infrastructure financing gap. For example, one option would be for Congress to create a federal clean water trust fund. We have previously examined design issues that would need to be addressed in establishing such a fund, including how a trust fund should be administered and used; what type of financial assistance should be provided; and what activities should be eligible to receive funding from a trust fund. In addition, a clean water trust fund would require a source of revenue. We found that, while a number of options have been proposed to generate revenue for a clean water trust fund—including excise taxes, a corporate income tax, and a water use tax—several obstacles would have to be overcome in implementing these options, including defining the products or activities to be taxed, establishing a collection and enforcement framework, and obtaining stakeholder support. Policymakers and wastewater stakeholders have also suggested that Congress create an NIB to finance many types of infrastructure, including wastewater facilities. Since 2007, three bills have been introduced that outline different visions for an NIB or similar entity that would finance wastewater infrastructure: The National Infrastructure Development Bank Act of 2009 (H.R. 2521) proposed establishing a government corporation to finance infrastructure projects across sectors, prioritizing those that contribute to economic growth, lead to job creation, and are of regional or national significance. It would have the authority to issue loans, bonds, and debt securities, as well as to provide loan guarantees.
The National Infrastructure Bank Act of 2007 (S.1926 and H.R. 3401) proposed creating an independent federal entity to finance infrastructure projects that have "regional and national significance" with a public sponsor and a potential federal investment of at least $75 million. It would be authorized to issue up to $60 billion in bonds, which would carry the full faith and credit of the United States; the bond proceeds could be used to finance direct subsidies and loans, among other things. The National Infrastructure Development Act (H.R. 3896), introduced in 2007, proposed creating two government corporations with an intended initial capitalization of up to $9 billion in federal appropriations over the initial 3 years. Thereafter, the corporations would be self-financed through business income with the possibility of converting to government-sponsored enterprises (GSEs). Yet another approach for closing the wastewater financing gap is to encourage private investment in wastewater projects, including through privately financed wastewater PPPs at the municipal level. A 1992 executive order directed federal agencies to review and modify federal policies related to federally financed infrastructure to encourage appropriate privatization—including long-term leases—of infrastructure at the local level. Figure 2 shows that the privately financed PPPs discussed in this report generally fall into two categories: design-build-finance-operate (DBFO) partnerships and lease partnerships. DBFO. For new infrastructure or significant upgrades, a municipality and a company enter into a DBFO partnership in which the company is responsible for designing, constructing, and financing the infrastructure and then operating and maintaining it for the term of the contract. The municipal partner typically makes payments to the company covering both debt service and operations and maintenance. Lease partnership.
For existing infrastructure, a municipality and a company enter into a lease partnership in which the municipality leases wastewater infrastructure assets (such as a treatment plant) to the company, which is then responsible for operating and maintaining those assets for a set period of time. The company makes a lease payment to the municipality in exchange for the opportunity to operate and maintain the facility. This payment may be a one-time up-front payment, called a concession fee, or lease payments may be spread out over the life of the lease. Over the course of the lease, the municipality or its ratepayers make payments to the company for operations and maintenance services and to repay the company's periodic lease payments or initial investment (i.e., the concession fee). While private financing can serve as an alternative to traditional infrastructure funding sources, we have previously reported that private financing is not “free money”—rather, this funding is a form of private capital that must be repaid to investors seeking a return on their investment. Depending on how a privately financed PPP agreement is structured, it may also result in joint public-private ownership of the wastewater assets being financed, which could result in the facility losing its regulatory status as a publicly owned wastewater facility as defined pursuant to the Clean Water Act. Joint public-private ownership could also result in the loss of the municipality's ability to issue tax-exempt bonds. Stakeholders who responded to our questionnaire addressed a variety of issues in three key areas that would need to be considered in designing an NIB: mission and administrative structure, financing authorities, and project eligibility and prioritization. Appendix II lists the organizational and individual stakeholders who responded to our questionnaire. Appendix III lists the questions asked in the questionnaire and provides the full range of stakeholder responses we received.
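The point above that private financing is not "free money" can be made concrete with a repayment comparison: the same investment costs more to repay at a private cost of capital than at a tax-exempt municipal borrowing rate. The amount, rates, and term below are hypothetical and are not drawn from any of the partnerships discussed in this report.

```python
# Illustrative comparison: repaying a $50M investment over 30 years at a
# hypothetical tax-exempt municipal rate versus a hypothetical private
# cost of capital. Private capital must earn investors a return, so the
# implied payment stream is larger.

def level_payment(principal, rate, years):
    """Level annual payment that fully repays principal plus interest."""
    return principal * rate / (1 - (1 + rate) ** -years)

investment = 50e6
muni = level_payment(investment, rate=0.04, years=30)     # tax-exempt bond
private = level_payment(investment, rate=0.08, years=30)  # private capital

print(round(muni * 30 / 1e6, 1))     # total repaid over 30 years, in $M
print(round(private * 30 / 1e6, 1))
```

The difference between the two payment streams is the investors' return; whether a privately financed PPP is worthwhile overall also depends on factors the report discusses elsewhere, such as delivery speed and operating efficiency.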
About three-quarters of stakeholders (20 of 27) responding to our questionnaire supported the creation of an NIB. Seven of these stakeholders supported an NIB because it could provide another source of funding for critical infrastructure projects. In contrast, 1 of 27 stakeholders opposed the creation of an NIB for water and wastewater, instead supporting increased financing for the CWSRF, which according to the stakeholder is a proven mechanism for providing cost-effective and sustainable financing. In addition, 6 of 27 stakeholders selected “other”—neither supporting nor opposing the creation of an NIB—and cited a variety of reasons. For example, two of these stakeholders indicated that their positions on an NIB would depend on its authorizing legislation and expressed concerns about how a new entity would affect the CWSRF. Another indicated that a clear need for an NIB had not been established. Stakeholders had varying views on an NIB's mission and the infrastructure sectors it should finance. Of the 20 stakeholders who supported the creation of an NIB, about two-thirds (13) indicated that its mission should be to fund infrastructure in multiple sectors, such as transportation, energy, water, and wastewater. Among the reasons these stakeholders cited for supporting a cross-sector NIB are that it would allow for coordination across sectors and that financial experts at a cross-sector NIB would be able to easily apply their expertise to financing a wide range of projects. In contrast, about one-third of stakeholders who supported the creation of an NIB (7 of 20) thought its mission should be to fund only water and/or wastewater infrastructure. Stakeholders suggested a variety of options when asked how an NIB should interact with the CWSRF—currently the largest source of federal financial assistance for wastewater infrastructure.
About half of stakeholders (13 of 29) suggested that an NIB assist the CWSRF in a variety of ways including, for example, providing additional capital for the CWSRF and helping states leverage their CWSRF funds. About a third of stakeholders (11 of 29) suggested that an NIB act as a complement to the CWSRF. For example, according to four stakeholders with this view, an NIB should fund larger projects that the CWSRF typically does not have the funds to accommodate or multistate projects that can be administratively difficult under the CWSRF. In addition, 3 of 29 stakeholders suggested that an NIB not have any relationship with the CWSRF; one of these noted that state CWSRF programs do not need assistance from an NIB because they already have access to federal and state funds, as well as bond markets for leveraging. In addition, there was no consensus among stakeholders on whether an NIB should be administered as a new responsibility for an existing federal agency, structured as a government corporation, or structured as a GSE. More specifically, 4 stakeholders indicated that an NIB should be a new responsibility for an existing federal agency, 7 indicated that an NIB should be structured as a government corporation, and 4 indicated that an NIB should be structured as a GSE. We have previously reported that an entity's administrative structure affects the extent to which it is under federal control, how its activities are reflected in the federal budget, and the risk exposure of U.S. taxpayers. Specifically: Federal agencies are generally subject to greater federal control than government corporations and GSEs. For example, federal agencies receive the preponderance of their financial support from congressionally appropriated funds, and Congress can use appropriations, hearings, other lawmaking, and confirmation of senior leadership as management tools.
The President also has significant means of control, for example through responsibility for agencies' budget proposals, administrative requirements, and the appointment of leadership. Although no two government corporations are completely alike, Congress has generally established government corporations to provide market-oriented public services, such as the Commodity Credit Corporation, which stabilizes and protects farm income and prices. In general, government corporations are not as dependent upon annual appropriations as federal agencies to fund operations—instead, or in addition, receiving funds from consumers of their products and services. As a result of this corporate structure, government corporations have been given greater operational flexibility by Congress, and corporations with mixed public-private ownership may be exempt from many executive branch budgetary requirements and disclosures. Nevertheless, government corporations are subject to some federal oversight by, for example, having some or all board members appointed by the federal government and/or having their budgets displayed in the federal budget. GSEs are privately owned, for-profit financial institutions that have been federally chartered for a public purpose, such as facilitating the flow of investment to specific economic sectors. GSEs generally do not lend money directly to the public but instead provide liquidity to capital markets by, for example, issuing stock and debt and purchasing and holding loans. GSEs are neither managed directly by the federal government, nor are their activities included in the federal budget. Although the federal government explicitly does not guarantee GSE debt obligations, investors have widely assumed that a GSE facing a financial emergency would receive federal support, which has allowed GSEs to borrow at interest rates below those of other for-profit corporations.
We have previously reported that the structure of GSEs as for-profit corporations with government sponsorship has undermined market discipline and provided them with incentives to engage in potentially profitable business practices that were risky and not necessarily supportive of their public missions. Indeed, the federal government extended support to two GSEs—Fannie Mae and Freddie Mac—beginning in September 2008 after they lost billions of dollars due to questionable mortgage-related investments. In addition, we have also reported that developing an oversight system for GSEs can be challenging. For example, regulators must have the resources, expertise, and authorities necessary to help monitor GSEs, which, due to the implied federal guarantee on their financial obligations, may have financial incentives to engage in excessive risk taking. Further, regulators must have the stature and authorities necessary to help ensure that GSEs operate within the missions for which they were established because of incentives for GSEs to engage in activities that are profitable but that do not support their missions. Most stakeholders (20 of 22) agreed that the federal government should provide all or some of the initial capital for an NIB, though 4 stakeholders suggested that federal capitalization be augmented by private funds. In addition, 3 of 22 stakeholders suggested that an NIB’s initial capital come from user fees and/or taxes, similar to a trust fund; such user fees and/or taxes, according to 2 of these stakeholders, would provide an NIB with a stable revenue flow while spreading out the funding burden. Although most stakeholders agreed that the federal government should capitalize an NIB, they were split on whether an NIB should continue to rely on federal funds (9 of 22), or instead become self-sustaining (6 of 22). 
Two stakeholders who supported a self-sustaining NIB explained that it should function as a bank—investing only in projects that are creditworthy and able to repay their loans. When asked about federal funding for an NIB, staff from the Office of Management and Budget noted that, for budgeting purposes, the cost to the federal government should be determined according to the Federal Credit Reform Act of 1990. This act requires that covered federal entities’ budgets include estimates of the government’s long-term cost of issuing loans or loan guarantees, among other things. Most stakeholders (21 of 23) agreed that an NIB should be authorized to generate its own funds for operating expenses and lending, with a majority of stakeholders (15) supporting an NIB authorized to use multiple mechanisms to generate funds. In their responses to our questionnaire, organizations—which are generally more familiar with the wastewater industry—and individuals—who are generally more familiar with wastewater financing—had different levels of support for some of the mechanisms. Most notably, a higher percentage of organizations supported allowing an NIB to issue tax-exempt bonds, while a higher percentage of individuals supported allowing an NIB to charge fees for technical assistance or other services. Stakeholders offered a variety of reasons for supporting financial mechanisms. For example, several stakeholders emphasized the importance of an NIB having a broad range of financial tools for generating its own funds. In addition, two stakeholders who supported giving an NIB the authority to borrow from the U.S. Department of the Treasury and to issue tax-exempt bonds explained that these two options would provide an NIB with access to low-cost capital, which could then be passed on to projects. When asked about an NIB issuing bonds with tax-exempt status, IRS officials noted that there is a general prohibition on tax-exempt bonds being federally guaranteed. 
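The Federal Credit Reform Act calculation mentioned above can be sketched in simplified form: the up-front budgetary "subsidy cost" of a direct loan is the amount disbursed minus the present value of expected repayments, discounted at the government's borrowing rate. This is a stylized illustration with hypothetical inputs, not the act's full methodology (which also addresses fees, prepayments, and recoveries).

```python
# Simplified sketch of a credit-reform subsidy cost estimate for a direct
# loan: disbursement minus the present value of expected repayments,
# discounted at a Treasury rate. All inputs below are hypothetical.

def subsidy_cost(principal, loan_rate, treasury_rate, term, default_loss=0.0):
    # level annual payment that repays the loan at the stated loan rate
    payment = principal * loan_rate / (1 - (1 + loan_rate) ** -term)
    expected = payment * (1 - default_loss)   # haircut for expected defaults
    pv = sum(expected / (1 + treasury_rate) ** y for y in range(1, term + 1))
    return principal - pv

# A $10M, 20-year loan made at 2% when Treasury borrows at 3%, with a 2%
# expected loss, carries a positive subsidy cost budgeted up front.
cost = subsidy_cost(10e6, loan_rate=0.02, treasury_rate=0.03,
                    default_loss=0.02, term=20)
print(round(cost / 1e6, 2))   # -> 1.08 (about a $1.08M up-front cost)
```

A loan made at the Treasury rate with no expected losses has roughly zero subsidy cost under this logic, which is why below-market lending and default risk are what drive the budgeted amount.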
In order for an NIB to issue tax-exempt, guaranteed bonds, it would need a statutory exemption from this prohibition similar to those granted for bonds in other sectors, such as housing. Table 1 lists the financing mechanisms most commonly supported by stakeholders. A majority of stakeholders also agreed on some of the mechanisms an NIB should offer for financing projects. Organizations and individuals had different levels of support for some of the mechanisms—most notably, a higher percentage of organizations than individuals rated pooling loans and issuing tax-exempt bonds as very important mechanisms for an NIB to offer. In explaining the importance of the mechanisms an NIB should offer for financing projects, one stakeholder noted that direct loans, pooled loans, and/or federal loan guarantees from an NIB would help infrastructure projects attract additional sources of capital. When we asked staff from the Office of Management and Budget about financing mechanisms an NIB could offer to projects, they did not have specific views on which mechanisms an NIB should offer but emphasized that an NIB should be subject to the Federal Credit Reform Act. Table 2 shows stakeholder views on the mechanisms an NIB could offer. Finally, stakeholders suggested various measures to mitigate the potential risk of exposing taxpayers to the financial losses that could result from multiple municipalities defaulting on NIB loans. Measures suggested by stakeholders included the use of strict credit and underwriting standards in selecting projects and the maintenance of adequate reserves, which could serve to absorb financial losses. Other suggestions included requiring general- or revenue-obligation pledges or insurance from utilities and municipalities. When asked about risk-mitigation measures, staff from the Office of Management and Budget noted that current infrastructure financing programs have developed a variety of measures to mitigate taxpayer risk.
For example, the Department of Agriculture’s Rural Utilities Service provides grants and loans for eligible drinking water and wastewater projects in rural communities. Office of Management and Budget staff said that this program mitigates risk by not releasing grant funds to the recipient communities until the project is completed. Stakeholders had a variety of views on the types of projects that should be eligible for financing from an NIB. Specifically, half of stakeholders (12 of 24) indicated that projects of all sizes should be eligible for NIB financing, while a third (8 of 24) noted that only large projects should be eligible. Three stakeholders explained that they support financing projects of all sizes because smaller projects may address important infrastructure needs. Support for an NIB that finances exclusively large projects was stronger among individual stakeholders than among organizational stakeholders, though few stakeholders defined what they meant by “large.” For example, two stakeholders supported an NIB that finances exclusively large projects because it could fund projects beyond the capacity of the CWSRF. In contrast, another stakeholder opposed an NIB that finances exclusively large projects, explaining that one NIB proposal set a threshold of $75 million or more, which could render many wastewater projects ineligible. Similarly, stakeholders had a variety of views on whether NIB financing should be limited to publicly owned and operated utilities. Specifically, 9 of 23 stakeholders thought all types of utilities should be eligible for NIB financing, while another 9 of 23 thought that only publicly owned utilities should be eligible. Three stakeholders indicated that an NIB should assist private utilities and PPPs—in addition to public utilities—because the utilities’ consumers and the general public would still benefit. Stakeholders generally agreed on what costs should be eligible for NIB financing. 
More than three-quarters of stakeholders agreed that capital projects (24 of 26) and planning and design costs (19 of 25) should be eligible but that routine operations and maintenance costs (24 of 26) and ratepayer assistance (16 of 20) should not be eligible. Four stakeholders noted that capital and planning and design costs should both be eligible because they are closely linked—planning and designing are essential components of carrying out capital projects. Nine stakeholders explained that operations and maintenance activities and/or ratepayer assistance should be funded by utilities through the rates that they charge their customers. One stakeholder also explained that many utilities have not raised rates enough to invest in the needed operations and maintenance for their systems. Our past work has highlighted similar concerns, noting that many utilities were not routinely charging the full cost for wastewater services. A majority of stakeholders said an NIB should use a combination of methods to allocate funding to eligible projects; such methods include directly funding projects ranked using specific criteria, allocating funding to sectors, or allocating funding to states. Stakeholders had differing views on which combination of methods should be used. The most commonly supported methods were directly funding projects ranked using specific criteria and allocating funding to infrastructure sectors. Stakeholders provided a variety of reasons for supporting these methods. For example, one stakeholder supported directly funding projects ranked using specific criteria to ensure that the projects most in need—including smaller projects—would receive assistance. In addition, 2 stakeholders explained that allocating amounts by sector would be necessary to ensure that each sector receives funding, while 3 others noted that the differences between sectors would make it difficult for an NIB to evaluate projects across sectors. 
Stakeholders also agreed that an NIB should prioritize projects that address the greatest infrastructure need and that generate the greatest public health and environmental benefits. One stakeholder explained that these three criteria are the main reasons for wastewater regulations. However, another stakeholder questioned how “greatest infrastructure need” would be defined. Our past work has highlighted similar concerns, noting that infrastructure “need” is difficult to define and to distinguish from a wish list of capital projects. It can also be difficult to measure environmental and public health benefits. For example, while the CWSRF uses a uniform set of measures to help determine efficient and effective use of CWSRF resources, our past work has found that a lack of baseline environmental data and technical difficulties made it difficult to attribute benefits specifically to the CWSRF. A complete list of criteria supported by a majority of stakeholders is shown in table 3. We identified seven privately financed wastewater PPPs developed since 1992. Municipal and wastewater services company officials we interviewed identified numerous potential advantages to these partnerships, including faster construction of new facilities, access to alternative sources of financing, increased efficiency, and access to outside experts and technology solutions. Officials also identified numerous potential challenges to these partnerships, including public and political opposition, the higher cost of private financing, and concerns over a loss of municipal control over wastewater equipment, operations, or rates. As shown in table 4, we identified seven municipalities that have developed privately financed wastewater PPPs since 1992. Although all seven of these municipalities entered into privately financed wastewater PPPs, their reasons for doing so differed, as did the contract terms. 
Two examples illustrate these differences: Santa Paula, California, entered into a DBFO in 2008. The city of Santa Paula had an existing wastewater treatment plant that was not compliant with the waste discharge requirements of the Los Angeles Regional Water Quality Control Board. The city entered into a consent agreement with the board in which it agreed to achieve full compliance with water quality requirements by December 15, 2010, or else face $8.5 million in penalties. According to city officials, the Santa Paula City Council decided to enter into a DBFO partnership because it believed a DBFO would be less expensive than a traditional procurement and could better ensure the city would meet its deadline. The city awarded a contract to Santa Paula Water—a company formed by PERC Water and Alinda Capital—to design, build, and finance a new water recycling facility as well as to operate the facility for 30 years. Through monthly service fees, the city is to repay Santa Paula Water for its investment in the plant and pay for operations, maintenance, repair, replacement, and a profit margin. PERC Water owns the treatment facility over the 30-year contract term, after which ownership reverts to the city. Fairbanks, Alaska, entered into a lease partnership in 1997. Fairbanks’ wastewater treatment system faced a multimillion dollar deficit and needed substantial capital improvements. However, according to a city official, Fairbanks city residents were reluctant to approve bond issuances, and local government officials were reluctant to raise rates. In addition, Fairbanks was in a unique situation in that the city owned several other utilities, including a telephone utility and an electric utility. The city was approached by a consortium of companies that proposed to buy or lease all the city’s utilities, and voters approved the action. As part of this deal, Golden Heart Utilities leased the wastewater treatment plant in 1997 for a 30-year term. 
In exchange, the company pays Fairbanks about $33,000 per month in lease payments. Golden Heart Utilities also operates and maintains the treatment plant, and its service fee is paid by ratepayers. Municipal and company officials we spoke with identified several potential advantages of privately financed wastewater PPPs for municipalities as compared with traditional publicly financed, operated, and maintained wastewater facilities. The most commonly cited advantage was the potential for faster or more certain delivery times for new facilities or facility upgrades, as compared with traditional public procurement. Three municipalities cited faster delivery times as a reason they entered into privately financed PPPs; in two cases, the municipalities were facing regulatory deadlines that required them to upgrade their facilities or pay fines. Company and municipal officials told us private procurement may be faster because it is more streamlined than public procurement. This view was echoed in a 1992 publication on wastewater treatment privatization, which stated that wastewater industry officials believe PPPs in which a company designs, builds, and operates a facility can save time because design, construction, and operations are not compartmentalized, so design and construction phases can overlap. Similarly, in a 2000 publication, a chapter discussing PPPs in the wastewater sector points out that, in a privately financed PPP, companies are not bound by the same administrative regulations as federal and state construction projects. In addition, officials from Franklin, Ohio, and Woonsocket, Rhode Island, told us that they believe it took less time to secure private financing than public financing, an advantage specific to privately financed PPPs. The next most commonly cited advantage of privately financed PPPs was access to alternative sources of wastewater infrastructure financing. 
For example, officials from Arvin, California, told us the city did not access the bond market because of its low credit rating, even as the city faced regulatory compliance concerns. Similarly, an official from Fairbanks, Alaska, said it was difficult to convince the public to approve bonds, preventing the city from using municipal bonds to finance wastewater infrastructure. Another advantage cited by company and municipal officials and publications we identified is that privately financed PPPs may bring cost and operational efficiencies to wastewater collection and treatment. Several municipal officials told us companies can take advantage of economies of scale in a privately financed PPP by, for example, buying key supplies, such as chemicals, in bulk. The 2000 chapter that discussed PPPs in the wastewater sector also noted that a primary way companies can reduce costs is through managing their three chief expenses—labor, electricity, and chemicals. By operating a number of plants, a company can spread staff—and costs—more widely. However, other officials we spoke with noted that efficiencies can also be achieved by public utilities without a privately financed PPP. For example, one regional utility said that it achieved economies of scale by constructing regional plants, each of which served multiple municipalities. In addition, according to a 2002 study of privatization of water services by the National Research Council (NRC), neither the private sector nor the public sector is necessarily more efficient than the other. While company officials said a privately financed PPP can operate more efficiently by making better capital investment decisions, this may depend on the terms of the PPP contract. According to officials at one company, municipal governments face political pressure to keep costs down in the short term, which can lead to higher costs in the long run.
Company officials told us that a contract that makes the private partner responsible for both capital upgrades and maintenance can incentivize decisions that save money in the long run. For example, according to PERC Water officials, in its privately financed PPP with the city of Santa Paula, the company invested its own funds above the signed contract price for energy-efficient equipment expected to reduce energy consumption and operating costs over the 30-year term of the contract. In contrast, if a contract passes capital repair costs through to municipalities, one municipal official told us that companies may have an incentive to underinvest in maintenance. In such circumstances, delaying maintenance could result in savings for the private partner but impose higher costs on the municipality by hastening the need for capital repairs. Another commonly cited advantage of privately financed wastewater PPPs is that the private partner may have greater access to expertise and technology than some municipalities. For example, officials from one company told us it spends $200 million a year on research and development and can draw on this research to solve problems municipalities have not been able to solve on their own. Similarly, according to a 2000 publication on municipal wastewater treatment outsourcing, wastewater treatment companies may have more experienced personnel and better access to the latest technologies if wastewater treatment is the company's core business. For example, an official from Fairbanks, Alaska, told us that prior to entering into a privately financed PPP, his city had been unable to process the sludge from its wastewater treatment plant into a useful form. Golden Heart Utilities used a technology to convert the sludge into compost, which is now sold to the public.
This access to expertise and technology may be particularly important for small- and medium-sized communities, which may lack the expertise to upgrade or operate plants to meet regulatory standards, according to the 2002 NRC study. Several municipal and company officials also cited up-front payments to municipalities as an advantage of privately financed PPPs. Up-front payments to municipalities could be used to finance wastewater infrastructure improvements, but company and municipal officials told us these payments could also be used to finance other priorities, such as a pension fund or municipal budget gap. Although six of the seven municipalities that entered into privately financed PPPs received up-front payments from their private partners, at least three used part of the payment for nonwastewater-related activities. One municipal official told us his municipality was motivated to enter into a privately financed PPP so that it could use the up-front payment to supplement its general fund and scale back a planned property tax increase. Similarly, the mayor of Akron, Ohio, proposed that the city lease its wastewater assets and use the up-front payment to fund a scholarship program that would allow all Akron students to attend the University of Akron. Voters ultimately rejected this proposal. In a 1997 response to congressional questions about wastewater PPPs, EPA pointed out that up-front payments can be viewed as loans from the company to the municipality and will require wastewater users to repay the company, with interest. According to EPA, an increase in user fees can result when an up-front payment exceeds the previously outstanding local debt on the wastewater treatment facilities. We have highlighted similar considerations about the use of up-front payments in the transportation sector.
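EPA's point that an up-front payment functions as a loan repaid by ratepayers can be illustrated with a back-of-the-envelope calculation using the standard level-payment annuity formula. The sketch below is illustrative only: the dollar amounts, interest rates, and terms are hypothetical assumptions, not figures from any municipality discussed in this report.

```python
def annual_debt_service(principal, rate, years):
    """Level annual payment that retires a loan: standard annuity formula."""
    return principal * rate / (1 - (1 + rate) ** -years)

# Hypothetical figures: a municipality with $10 million of outstanding
# tax-exempt debt (4 percent, 20 years remaining) accepts a $15 million
# up-front payment, which ratepayers must then repay to the company over
# 20 years at an assumed 7 percent private cost of capital.
prior_service = annual_debt_service(10_000_000, 0.04, 20)
new_service = annual_debt_service(15_000_000, 0.07, 20)

print(f"Prior annual debt service: ${prior_service:,.0f}")
print(f"New annual repayment:      ${new_service:,.0f}")
print(f"Annual user-fee increase:  ${new_service - prior_service:,.0f}")
```

In this illustration, the up-front payment exceeds the previously outstanding debt and carries a higher rate, so the annual charge passed to ratepayers roughly doubles, which is the mechanism EPA described.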
Finally, company and municipal officials said that privately financed PPPs may allow local governments to increase their focus on other functions, such as police and fire services. In contrast, however, some municipal officials told us they would not consider entering into a privately financed wastewater PPP because they believe wastewater treatment is a core municipal duty. According to the 2002 NRC study, local officials are in part drawn to private participation in their wastewater utilities because of the need to focus civic energies and resources on more immediate social problems. Although the role of a municipal government in a privately financed PPP may change, it is still important. For example, according to the NRC study, if a utility’s operations are transferred to the private sector, the public sector’s importance does not diminish but rather changes from that of operator to contract manager—a role that can require new talents and skills. Similarly, an official in Woonsocket, Rhode Island, told us that carrying out a privately financed PPP contract on a daily basis takes more time and expertise than he expected, because even simple questions can require a review of the city’s 1,000-page contract with its private partner. Municipal and company officials also identified a number of potential challenges to considering and developing privately financed wastewater PPPs. The challenge cited most often by municipal and company officials was public and political opposition. These officials told us that the public is sometimes concerned about the possibility that a company would not be as responsive to ratepayers as a municipality, about job losses for municipal employees, and about sewer rate increases. For example, North Brunswick, New Jersey, entered into a privately financed PPP in 1995, but terminated that agreement in 2002, in part because of public reaction to rate increases. 
An official from Fairbanks, Alaska, told us some residents feel the city “gave away” its wastewater utility in its privately financed PPP deal, and they object to a company profiting from running the utility. In at least one case, opposition from citizens as well as interest groups derailed the development of a privately financed PPP in Akron, Ohio. Municipal and company officials said that making private financing attractive to municipalities may be a challenge for a variety of reasons:

Private financing generally costs more than public financing. Municipal and company officials told us that private financing typically costs more than tax-exempt municipal bonds. In its 2002 study, the NRC reported that the federal tax exemption on municipal bonds gave municipal borrowers a 2.5 percent to 3 percent cost advantage over private bonds. The NRC study also reported that, for municipalities, private financing is roughly 20 to 40 percent more expensive than public financing. Municipal officials told us the profit motive of companies may also drive up the cost of a privately financed PPP. However, one municipal official in Woonsocket, Rhode Island, noted that the speed at which private financing can be obtained could still result in a lower overall cost, due to the time saved. Similarly, company officials told us they are able to compensate for the higher cost of financing over the course of a contract term. For example, officials cited tax rules generally allowing companies to depreciate capital, and their ability to find cost savings through efficiencies, as ways to offset their costs over the contract term.

Combining private financing with public financing is difficult. In writing the contract for a privately financed PPP, the parties must carefully follow IRS tax rules to avoid changing the status of existing tax-exempt municipal bonds to taxable bonds.
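The NRC cost figures cited above can be reproduced with simple annuity arithmetic. The sketch below is illustrative only: the principal, term, and rates are assumed values chosen so that the rate spread matches the upper end of the tax-exemption advantage the NRC reported; they are not data from the study.

```python
def annual_payment(principal, rate, years):
    """Level annual debt-service payment on a loan (annuity formula)."""
    return principal * rate / (1 - (1 + rate) ** -years)

# Assumed values: a $50 million facility financed over 25 years, with a
# 3-percentage-point spread between public and private borrowing rates.
public = annual_payment(50_000_000, 0.04, 25)   # assumed tax-exempt municipal rate
private = annual_payment(50_000_000, 0.07, 25)  # assumed private rate, 3 points higher

premium = private / public - 1
print(f"Public annual payment:  ${public:,.0f}")
print(f"Private annual payment: ${private:,.0f}")
print(f"Lifetime cost premium:  {premium:.0%}")
```

Under these assumptions, the lifetime cost premium lands within the roughly 20 to 40 percent range the NRC reported for private relative to public financing.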
IRS officials told us that, under the tax code, a municipality in such a partnership could continue to issue tax-exempt general obligation bonds to finance wastewater infrastructure only under certain circumstances. For example, a sewage facility could be financed with 50 percent private financing and 50 percent tax-exempt general obligation bonds, if no payments from the private partner or ratepayers secure the public debt or are used to pay the public debt service. Under these rules, it is especially difficult for a municipality in a privately financed PPP to issue tax-exempt revenue bonds—often the preferred type of bond for wastewater facilities—because the revenue bonds are secured by payments from ratepayers. According to an official from the Office of Chief Counsel, which advises the IRS, a privately financed PPP can be financed with tax-exempt qualified private activity bonds if it meets criteria in applicable statutes and regulations. However, one company official said that the volume caps imposed on the issuance of private activity bonds in each state limit their availability for wastewater projects; he advocated lifting the state volume caps. Several municipal officials told us another challenge is their concern about the loss of control over municipal wastewater facilities and rates. Officials at one municipality told us they chose not to pursue a privately financed wastewater PPP in part because they believed they would lose some control over rate setting and system growth. According to a 2000 chapter that discussed PPPs in the wastewater sector, in a privately financed PPP, a local government’s control over a facility’s operations depends on the contract’s terms. For example, officials in Santa Paula, California, told us they experienced a loss of control over plant design, choice of equipment, and construction oversight after entering into their DBFO. 
The officials explained that, while the city’s contract with its private partner includes performance specifications, the city has no control over the methods the company uses to achieve those specifications. Further, because the city does not have detailed knowledge of the facility or its operations, it may not be able to pass on such details to other operators when its current contract ends. Municipal and company officials also cited their lack of experience with privately financed wastewater PPPs as a challenge to the development of such partnerships. For example, one municipal official commented that few municipalities will want to be the first to try something new and potentially risky. Another municipal official echoed that concern, commenting that there are few examples showing this model can work effectively in the United States. A company official told us that municipal officials are concerned about being locked into a relationship with a private partner for a long-term contract and the difficulties of maintaining a good relationship during that time. Company officials also cited the need for more education about privately financed PPPs to explain their advantages. Municipal and company officials also told us that developing a contract for a privately financed wastewater PPP can be costly and difficult, in part, because of the lack of experience of companies and municipalities with these contracts and, in part, because of their complexity. For example, an official from Santa Paula, California, told us the city’s attorneys did not have experience with DBFO contracts, so the city hired specialized counsel to develop the DBFO, resulting in legal fees three times greater than for a traditional procurement. A company official told us the complexity of privately financed PPPs and the differences between this type of procurement and traditional procurement can result in slower transactions. 
One municipal official noted that part of the complexity associated with developing a privately financed PPP contract is transitioning employees from the public to the private sector. In addition, the 2002 NRC study noted that the preparation of adequate contracts is expensive and time-consuming, and outside legal and engineering expertise is usually needed. We have cited similar concerns for highway PPPs. One municipal official noted that communities often look to privately financed PPPs when they are financially stressed, but this might make it difficult to hire experienced contractors and consultants to protect the interests of the community. Finally, municipalities may encounter difficulties entering into privately financed PPPs due to state and federal laws as follows:

State laws. Municipal officials cited as a challenge state laws that, in some cases, prohibit using the same contractor to design and build a wastewater treatment facility; such laws would rule out DBFOs, as well as other design-build PPPs. Specifically, a municipal official in Ohio told us he would like to pursue a DBFO, but state law requires design and construction to be bid separately from one another, and also requires that different trades, such as electrical and plumbing, be bid separately. Ultimately, he told us, this prevents design-build contracts, with or without private financing. Echoing this point, a company official told us that developing privately financed PPP contracts is complicated by the fact that every state has its own procurement rules.

Federal financial interest. According to EPA officials, prior to accepting private financing, municipalities must repay any remaining federal investment for facilities built under the construction grants program of the 1970s and 1980s, as well as any other federal grants.
Officials from Franklin, Ohio, told us some of the up-front payment from the private partner was used to repay the existing federal interest in the wastewater plant, since it was built with federal grants in 1972. EPA officials told us that, although most facilities that received funds through the construction grants program are now fully depreciated with no remaining federal financial interest, some other more recent grants, including construction grants that are still awarded to the District of Columbia and U.S. Territories, congressionally directed grants for particular wastewater facilities, and direct grants through states under the American Recovery and Reinvestment Act, would also be subject to early payback. We provided a draft of this report to EPA, IRS, the Office of Management and Budget, and the U.S. Department of the Treasury for review and comment. These agencies did not provide written comments to us. EPA and IRS provided technical comments, which we have incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of the Treasury, the Administrator of EPA, the Director of the Office of Management and Budget, the Commissioner of IRS, and other interested parties. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions regarding this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V.
To determine stakeholders’ views on the issues to be considered in designing and establishing a national infrastructure bank (NIB), we reviewed past legislative proposals and wastewater industry position papers on establishing an NIB. In addition, we interviewed stakeholders with knowledge of a variety of wastewater infrastructure issues, including individuals and organizations from the water and wastewater industry; financial sector; and federal, state, and local government; and obtained their views on establishing and designing an NIB. Based on the information obtained through these interviews, and our review of reports and legislative proposals, we developed a questionnaire to gather information about stakeholder views on an NIB’s mission and administrative structure, financing authorities, and project eligibility and prioritization. We pretested the questionnaire with four stakeholders from a variety of backgrounds and made changes based on their input. In addition to developing the questionnaire, we identified organizational and individual stakeholders familiar with wastewater infrastructure financing issues and existing NIB proposals. We developed this list based on our preliminary interviews and prior GAO work on wastewater infrastructure financing. We sent the questionnaire to 23 national organizations with expertise in the wastewater industry in one of the following areas: financing and operating wastewater projects, constructing and maintaining wastewater infrastructure, local and state wastewater infrastructure needs, and environmental protection. In addition, we identified individuals involved in wastewater infrastructure financing to provide additional perspective on the creation and design of an NIB. 
We sent the questionnaire to 14 individuals with expertise in financing wastewater infrastructure, including consultants who provide advice to municipalities; state financing officials; officials from private investment firms; and policy consultants who have studied an NIB or wastewater infrastructure financing. Although we sought to include stakeholders with a variety of perspectives about an NIB, the views of stakeholders consulted should not be considered to represent all perspectives about an NIB. In addition, although an NIB could potentially finance many types of infrastructure, we limited our stakeholders to those familiar with the wastewater sector. We received responses from 18 organizational stakeholders. Of the 5 organizational stakeholders that did not respond, 2 told us they could not come to a consensus on behalf of their organizations. We also received responses from 11 individuals. Our overall response rate was 78 percent (29 of 37 questionnaires). Some stakeholders did not answer all of the questions on the questionnaire, so the number of responses for each question varies. For a list of the organizational and individual stakeholders that responded to the questionnaire, see appendix II. Appendix III provides the responses that stakeholders gave regarding design issues to be considered in creating an NIB. To provide additional context about the potential implications of an NIB's design for the federal budget, and its risk to U.S. taxpayers, we reviewed prior GAO reports, as well as reports by the Congressional Budget Office. We also spoke with officials at the U.S. Department of the Treasury, the Internal Revenue Service, and the Environmental Protection Agency (EPA). In addition, after analyzing the results from our questionnaire, we interviewed staff from the Office of Management and Budget to discuss how an NIB might affect the federal budget and U.S. taxpayers.
We conducted a similar interview with officials at the Department of the Treasury; however, because the current administration is still deliberating issues related to an NIB, Treasury officials could not comment on specific issues discussed by stakeholders responding to our NIB questionnaire. To determine the extent to which wastewater public-private partnerships (PPPs) have been privately financed, we conducted a literature search of online databases to identify academic and news articles discussing privately financed wastewater PPPs initiated since 1992, when President Bush signed an Executive Order encouraging such partnerships. Despite these efforts, it is possible that we did not identify all privately financed wastewater PPPs initiated since 1992. For purposes of this report, a privately financed wastewater PPP is a partnership between a municipality (or other public entity) and one or more private partners that involves the core business of collecting and treating municipal wastewater and in which the private partner(s) contribute private funds to the partnership. For our report, the public partner must retain a long-term interest in the facility. This means that, if the private partner acquires an ownership stake in any of the wastewater assets, the public partner must be able to reacquire the assets on preferential terms at the end of the contract. To determine the potential advantages and challenges of privately financed wastewater PPPs, we conducted interviews with officials from six of the seven municipalities we identified that entered into a privately financed wastewater PPP since 1992; officials from Cranston, Rhode Island, declined to speak with us. In addition, we conducted case studies in four of the states in which privately financed wastewater PPPs have occurred: Alaska, California, New Jersey, and Ohio.
As part of our case studies, we spoke with numerous municipalities in each state about their wastewater financing choices to get additional context about why few municipalities have entered into privately financed PPPs. We selected these municipalities to include those of varying sizes, as well as those that are not involved in privately financed wastewater PPPs but have considered the option in the past. We also spoke with state officials as needed to understand more about the legal context within each state. Table 5 includes a list of the municipalities and state agencies we spoke with as part of our case studies. To obtain additional information about private sector views on the advantages and challenges of privately financed wastewater PPPs, we interviewed officials at the six largest water and wastewater services companies in the United States: American Water, CH2M Hill, Severn Trent Environmental Services, South West Water Company, United Water, and Veolia Water. We also interviewed officials from PERC Water, a water recycling company involved in the privately financed wastewater PPP in Santa Paula, California. In addition, we interviewed officials from EPA and numerous stakeholders in the water and wastewater industry, including national associations representing wastewater utilities, consultants that advise municipalities on wastewater financing decisions, and representatives from the financial sector involved in water and wastewater infrastructure financing. Finally, we conducted a literature search to identify publications that discuss the advantages and challenges of privately financed wastewater PPPs in the United States. After reviewing various publications, we included the 10 publications that (1) focused on the wastewater industry in the United States; (2) discussed the advantages and challenges of wastewater PPPs; and (3) specifically addressed the use of private financing in the context of a PPP.
Throughout the report, we cite the advantages and challenges identified in these 10 publications to provide additional context to the information gathered in our interviews. See appendix IV for a complete list of the publications we identified. We conducted our work from June 2009 to June 2010 in accordance with all sections of GAO's Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions.

The following stakeholders responded to our questionnaire regarding design issues to be considered in creating a national infrastructure bank. The individuals who responded to our questionnaire presented their personal views and not the views of the organizations for which they work.

This appendix provides information on stakeholders' responses to our questionnaire addressing design issues to be considered in creating an NIB. The questions asked in the questionnaire are reproduced below, along with a tally of stakeholder responses for each closed-ended question.

1. What types of infrastructure should an NIB provide financing for?

2. What should be the mission of an NIB? Stakeholders provided a variety of open-ended responses to this question, which are discussed in the report as appropriate.

3. If an NIB is created, how should it be structured?

4. What relationship, if any, should an NIB have with the existing state-level Clean Water State Revolving Fund programs? Stakeholders provided a variety of open-ended responses to this question, which are discussed in the report as appropriate.

5. How should an NIB initially be capitalized? Stakeholders provided a variety of open-ended responses to this question, which are discussed in the report as appropriate.
6. Should an NIB have the authority to generate its own funds for operating expenses and lending using different financing mechanisms? If you answered “yes” to question 6, which mechanisms should an NIB have the authority to use to generate its own funds?

7. Should an NIB become self-sustaining after its initial capitalization? By self-sustaining, we mean an NIB that is fully reliant on funds that it generates, rather than on continued federal funding.

8. How important is it that an NIB has the authority to provide each of the following financing mechanisms?

9. If an NIB suffers from financial losses due to municipalities defaulting on loans or commercial paper, taxpayers may be at risk to cover those financial losses. How should an NIB mitigate this potential risk to taxpayers? Stakeholders provided a variety of open-ended responses to this question, which are discussed in the report as appropriate.

10. How should an NIB distribute financing to qualified projects?

11. What types of wastewater utilities, if any, should an NIB have the authority to assist? Please check all that apply.

12. Assuming constrained resources, by what method should an NIB prioritize eligible projects for financing?

13. What should be the level of priority for the following criteria that an NIB could use to evaluate projects and select those that should be financed?

14. Should an NIB exclusively finance large infrastructure projects?

15. Should there be a limit on the amount of financing that one project can receive from an NIB?

16. In your opinion, which of the following wastewater infrastructure activities should an NIB finance?

17. In addition to design issues discussed above related to administration, authorities, financing prioritization, and financing eligibility (questions 1 through 16), what other design issues should be considered in designing and establishing an NIB, if any? Stakeholders provided a variety of open-ended responses to this question.

18.
Please provide any additional information that would be helpful to GAO in better understanding potential issues related to establishing an NIB. Stakeholders provided a variety of open-ended responses to this question.

We identified the following published works, which address privately financed wastewater PPPs and were published since 1992:

Haarmeyer, David. “Environmental Infrastructure: An Evolving Public-Private Partnership.” In Seidenstat, P., Nadol, M., & Hakim, S. America’s Water and Wastewater Industries: Competition and Privatization. Vienna, VA: Public Utilities Reports, 2000.

Heilman, John, and Gerald Johnson. The Politics and Economics of Privatization: The Case of Wastewater Treatment. Tuscaloosa, AL: University of Alabama Press, 1992.

Landow-Esser, Janine, and Melissa Manuel. “Environmental and Contracting Issues in Municipal Wastewater Treatment Outsourcing.” In Seidenstat, P., Nadol, M., & Hakim, S. America’s Water and Wastewater Industries: Competition and Privatization. Vienna, VA: Public Utilities Reports, 2000.

Matacera, Paul J., and Frank J. Mangravite. In Seidenstat, P., Haarmeyer, D., & Hakim, S. Reinventing Water and Wastewater Systems: Global Lessons for Improving Water Management. New York: J. Wiley, 2002.

National Research Council. Privatization of Water Services in the United States: An Assessment of Issues and Experience. Washington, D.C.: National Academy Press, 2002.

Seidenstat, Paul, Michael Nadol, and Simon Hakim. “Competition and Privatization in the Water and Wastewater Industries.” In Seidenstat, P., Nadol, M., & Hakim, S. America’s Water and Wastewater Industries: Competition and Privatization. Vienna, VA: Public Utilities Reports, 2000.

Seidenstat, Paul. “Organizing Water and Wastewater Industries to Meet the Challenges of the 21st Century.” Public Administration and Management 8, no. 2 (2003): 69-99.

Seidenstat, Paul. “Global Lessons: Options for Improving Water and Wastewater Systems.” In Seidenstat, P., Haarmeyer, D., & Hakim, S. Reinventing Water and Wastewater Systems: Global Lessons for Improving Water Management. New York: J. Wiley, 2002.

Sills Jr., James H. “The Challenges and Benefits of Privatizing Wilmington’s Wastewater Treatment Plant.” In Seidenstat, P., Haarmeyer, D., & Hakim, S. Reinventing Water and Wastewater Systems: Global Lessons for Improving Water Management. New York: J. Wiley, 2002.

Traficante, Michael A., and Peter Alviti, Jr. “A New Standard for a Long-Term Lease and Service Agreement.” In Seidenstat, P., Haarmeyer, D., & Hakim, S. Reinventing Water and Wastewater Systems: Global Lessons for Improving Water Management. New York: J. Wiley, 2002.

In addition to the individual named above, Sherry L. McDonald, Assistant Director; Hiwotte Amare; Elizabeth Beardsley; Janice Ceperich; Philip Farah; Cindy Gilbert; Maylin Jue; Corissa Kiyan; Carol Kolarik; Anu Mittal; Marietta Mayfield Revesz; Janice M. Poling; and Ben Shouse made significant contributions to this report. Also contributing to this report were Carol Henn, William B. Shear, and James Wozny.
Communities will need hundreds of billions of dollars in coming years to construct and upgrade wastewater infrastructure. Policymakers have proposed a variety of approaches to finance this infrastructure, including the creation of a national infrastructure bank (NIB) and the increased use of privately financed public-private partnerships (PPP). In this context, GAO was asked to identify (1) stakeholder views on issues to be considered in the design of an NIB and (2) the extent to which private financing has been used in wastewater PPPs and its reported advantages and challenges.

In conducting this work, GAO developed a questionnaire based on existing NIB proposals and administered it to 37 stakeholders with expertise in wastewater utilities, infrastructure needs, and financing; GAO received 29 responses from stakeholders with a variety of perspectives about an NIB. To determine the extent to which wastewater PPPs have been privately financed and their advantages and challenges, GAO identified and interviewed municipalities involved in privately financed PPPs and wastewater services companies, conducted case studies in states with privately financed PPPs, and conducted a literature review.

GAO is not making any recommendations. While this report discusses a number of funding approaches, GAO is not endorsing any option and does not have a position on whether an NIB should be established.

Stakeholders who responded to GAO's questionnaire discussed issues in the following three key areas that should be considered in designing an NIB:

1) Mission and administrative structure. While a majority of stakeholders supported the creation of an NIB, their views varied on its mission and administrative structure. One-third supported an NIB to fund only water and wastewater infrastructure, while two-thirds responded that it should also fund transportation and energy projects.
There was no consensus among stakeholders on whether an NIB should be administered by an existing federal agency, structured as a government corporation, or structured as a government-sponsored enterprise. GAO has previously reported that an entity's administrative structure affects the extent to which it is under federal control, how its activities are reflected in the federal budget, and the risk exposure of U.S. taxpayers.

2) Financing authorities. A majority of stakeholders agreed on an NIB's financing authorities. Specifically, a majority said the federal government should provide the initial capital; an NIB should be authorized to use a variety of options to generate funds for operating expenses and lending; and an NIB should offer a variety of mechanisms for financing projects, such as providing direct loans, loan guarantees, and funding for the Environmental Protection Agency's existing wastewater funding program--the Clean Water State Revolving Fund.

3) Project eligibility and prioritization. Stakeholders' views varied on which types of projects should be eligible for NIB financing, such as whether it should exclusively finance large projects. In addition, a majority agreed an NIB should prioritize projects that address the greatest infrastructure need and generate the greatest environmental and public health benefits.

GAO identified seven municipalities that have entered into privately financed PPPs--contractual agreements in which the private partner invests funds in the wastewater infrastructure--since 1992: Arvin, California; Cranston, Rhode Island; Fairbanks, Alaska; Franklin, Ohio; North Brunswick, New Jersey; Santa Paula, California; and Woonsocket, Rhode Island.

Municipal and wastewater company officials GAO interviewed identified the following examples of advantages of privately financed PPPs:

1) Provide access to financing for municipalities that have difficulty using traditional financing sources, such as municipal bond markets.
2) May make operations more efficient, for example, by taking advantage of economies of scale by buying key supplies, like chemicals, in bulk.

3) May bring new infrastructure online faster than traditional public procurement because companies have more flexibility.

These officials identified challenges of privately financed PPPs, including:

1) Local opposition may arise out of concerns about higher wastewater rates and the potential loss of municipal wastewater jobs.

2) Private financing is generally more costly than tax-exempt municipal bonds because of higher interest rates; a 2002 National Research Council study reported that private financing is 20 to 40 percent more expensive.

3) Contracts can be costly and difficult to develop because they are complex, and municipalities and companies are unfamiliar with this type of PPP.
DHS acquisitions support a wide range of missions and investments, including ships and aircraft, border surveillance and screening equipment, nuclear detection equipment, and systems to track the department's financial and human resources. In support of these investments, DHS, in 2003, established an investment review process to help reduce risk and increase the chances for successful acquisition outcomes by providing departmental oversight of major investments throughout their life cycles, and to help ensure that funds allocated for investments through the budget process are being spent wisely, efficiently, and effectively. Our work over the past several years has consistently pointed to the challenges DHS has faced in effectively managing and overseeing its acquisition of programs and technologies. In November 2008, we reported that DHS had not effectively implemented its investment review process, and as a result, the department had not provided the oversight needed to identify and address cost, schedule, and performance problems for its major acquisitions. Specifically, we reported that of the 48 major investments we reviewed that required milestone or annual reviews, 45 were not reviewed in accordance with the department's investment review policy, and 18 were not reviewed at all. Four of these investments had transitioned into a late acquisition phase—production and deployment—without any required reviews. We recommended, and DHS concurred, that DHS identify and align sufficient management resources to implement oversight reviews in a timely manner throughout the investment life cycle. In June 2010, we reported that over half of the 15 DHS programs we reviewed awarded contracts to initiate acquisition activities without component or department approval of documents essential to planning acquisitions, setting operational requirements, and establishing acquisition program baselines.
Our work noted that without the development, review, and approval of these key acquisition documents, agencies are at risk of having poorly defined requirements that can negatively affect program performance and contribute to increased costs. In January 2011, DHS reported that it had begun to implement an initiative to assist programs with completing departmental approval of acquisition program baselines. In our February 2011 biennial update of the status of high-risk areas needing attention by Congress and the executive branch, we continued to designate DHS's implementation and transformation, which includes the department's management functions, as a high-risk area. For example, because of acquisition management weaknesses, major programs, such as SBInet, have not met capability, benefit, cost, and schedule expectations. Further, DHS had not fully planned for or acquired the workforce needed to implement its acquisition oversight policies, as we previously recommended. As of January 2011, DHS reported that it had increased its acquisitions management staffing and planned to hire more staff to develop cost estimates. DHS has taken several actions to address these recommendations and implement more discipline and rigor in its acquisition processes. Specifically, DHS created the Acquisition Program Management Division in 2007 to develop and maintain acquisition policies, procedures, and guidance as a part of the system acquisition process. DHS also issued an interim acquisition directive and guidebook in November 2008 for programs to use in preparing key documentation to support component and departmental decision making. In January 2010, DHS finalized the acquisition directive, which established acquisition life-cycle phases and senior-level approval of each major acquisition program at least three times at key acquisition decision events during a program's acquisition life cycle.
This directive established the acquisition life-cycle framework with four phases: (1) identify a capability need (need phase); (2) analyze and select the means to provide that capability (analyze/select phase); (3) obtain the capability (obtain phase); and (4) produce, deploy, and support the capability (produce/deploy/support phase). Each acquisition phase culminates in a presentation to the Acquisition Review Board (ARB), which is to review each major acquisition (that is, those designated as level 1 or level 2 programs) at least three times at key acquisition decision events during a program's acquisition life cycle. The acquisition decision authority—the Chief Acquisition Officer or other designated senior-level official—is to chair the ARB and decide whether the proposed acquisition meets certain requirements necessary to move on to the next phase and eventually to full production. The directive outlines the extent and scope of required program, project, and service management; the level of reporting requirements; and the acquisition decision authority based on whether the acquisition is classified as level 1, 2, or 3. The acquisition decision authority for major acquisitions—level 1 and level 2—is to be at the department or component level, and the acquisition decision authority for nonmajor acquisitions—level 3—is to be at the component level. An acquisition may be raised to a higher acquisition level by the ARB. The ARB supports the acquisition decision authority in determining the appropriate direction for an acquisition at key acquisition decision events. Following an ARB meeting, the Acquisition Program Management Division is to prepare an acquisition decision memorandum as the official record of the meeting to be signed by the acquisition decision authority. This memo is to describe the approval or other decisions made at the ARB and any action items to be satisfied as conditions of the decision.
The ARB reviews are to provide an opportunity to determine a program's readiness to proceed to the following life-cycle phase. However, we reported in March 2011 that the ARB had not reviewed most of DHS's major acquisition programs by the end of fiscal year 2009, and programs that were reviewed had not consistently implemented action items identified as part of the review by established deadlines. Our prior work has shown that when these types of reviews are skipped or not fully implemented, programs move forward with little, if any, early department-level assessment of the programs' costs and feasibility, which contributes to poor cost, schedule, and performance outcomes. As a part of its responsibilities, the Acquisition Program Management Division has identified major DHS acquisition programs, projects, or services for oversight through the ARB process. According to Acquisition Program Management Division officials, beginning in fiscal year 2009, the list was to be updated on a yearly basis through interviews with and documentation from component program offices. In May 2010, the Under Secretary for Management identified 86 programs on DHS's major oversight list for fiscal year 2010, 62 of which TES and component officials determined required T&E oversight—that is, programs that were in an acquisition phase where T&E was being planned or conducted. Several of the 62 programs consisted of multiple subprojects, such as TSA's Passenger Screening Program. For more information on these 86 major acquisition programs, see appendix II. DHS's 2010 acquisition directive also includes guidance for preparing documentation to support component and departmental decision making and specifies requirements for developmental and operational T&E as a part of the acquisition review process. Developmental T&E may include a variety of tests, such as system qualification testing, system acceptance testing, and software testing.
Developmental testing may be carried out by the user and may be conducted in simulated environments, such as laboratories, test facilities, or engineering centers that might or might not be representative of the complex operational environment. Operational T&E is a field test, performed under realistic conditions by actual users in order to determine the operational effectiveness and suitability of a system, and the corresponding evaluation of the data resulting from the test. To carry out its responsibilities for overseeing T&E, S&T established TES in 2006 and created the position of Director of TES in June 2007. TES's mission is to establish and manage DHS T&E policies and procedures and to oversee and coordinate T&E resources to verify attainment of technical performance specifications and operational effectiveness and suitability. To carry out its T&E oversight, in fiscal year 2010, TES had a budget of about $23 million and as of February 2011 had a staff of 26, which includes the TES Director, 19 staff dedicated to T&E activities, and 6 dedicated to developing standards. In May 2009, DHS issued a delegation that specified the responsibilities and duties of the Director of Operational Test and Evaluation. The TES Director and Director of Operational Test and Evaluation, while distinct positions in the T&E directive, share some advisory, review, and oversight responsibilities. For example, both are responsible for advising program managers in developing T&E documentation and approving test and evaluation master plans. The TES Director is responsible for developing DHS T&E policy, and the Director of Operational Test and Evaluation is to approve operational test plans and report to the ARB after assessing operational test reports. Since May 2009, the Director of Operational Test and Evaluation position has not been continuously filled, according to the current TES Director.
In a November 2010 memo, the Under Secretary for Science and Technology designated one person as both the Director of TES and the Director of Operational Test and Evaluation until further notice. The T&E directive outlines the responsibilities of the TES Director and the Director of Operational Test and Evaluation. According to the directive, the TES Director is to establish the department's testing and evaluation policies and processes, and the Director of Operational Test and Evaluation is to administer those policies and processes. The directive also outlines TES's responsibilities in overseeing T&E across DHS components and its role in the acquisition review process. Table 1 describes TES's T&E responsibilities as outlined in the T&E directive for all level 1, level 2, and special oversight acquisition programs. The T&E directive requires TES to review and approve required component acquisition documentation before an ARB meets for an acquisition decision event. These documents are meant to be reviewed and, if required, approved in a sequential order associated with the acquisition phase, because these documents build upon one another. Figure 1 presents TES's responsibilities throughout the four DHS acquisition phases as defined in the acquisition directive. To carry out these responsibilities for the 62 acquisition programs under its oversight in fiscal year 2010, TES has test area managers who assist component officials in fulfilling their T&E responsibilities and provide guidance and clarification regarding the requirements in the T&E directive. According to TES, each major acquisition program is assigned a test area manager, and as of February 2011, TES employed nine test area managers. TES met its oversight requirements when approving test plans and test reports in accordance with DHS acquisition and T&E directives for the 11 major acquisition programs we selected for review.
However, TES did not consistently document its review and approval of operational test agents or its review of other required acquisition documentation; such documentation could provide more assurance that components were meeting T&E directives when TES reviewed these documents. Further, TES does not plan to conduct an independent assessment of TSA's Advanced Spectroscopic Portal's operational test results, as required by the T&E directive. TES is to oversee T&E of major DHS acquisition programs by ensuring that the requirements set forth in the T&E directive are met and by working with component program officials to develop T&E documentation, such as test and evaluation master plans, as required by DHS's acquisition directive. TES's T&E oversight responsibilities set forth in the T&E and acquisition directives pertain to programs primarily in the analyze/select and obtain phases of the acquisition process because most testing and evaluation efforts occur in these phases. As a result, the requirements of the T&E directive and TES's oversight vary depending on when a program progresses through certain phases of the acquisition process. For example, when a program is in the produce/deploy/support phase, there is usually little to no T&E activity, so TES's involvement is limited. We reviewed TES's T&E oversight efforts for 11 DHS programs and found that, as it asserted, TES had conducted oversight of components' test plans and test reports as set forth in the acquisition and T&E directives. The 11 programs, each managed by different DHS components, were in one phase of the acquisition process or had two or more subprojects simultaneously in different phases of the acquisition process. For example, Coast Guard's H-65 helicopter program has 6 discrete subprojects, each with its own completion schedule, including 4 subprojects in the produce/deploy/support phase and 2 subprojects in the obtain phase.
Acquisition Program Management Division, TES, and component officials determine if subprojects need to develop separate sets of acquisition documents as they progress through the acquisition process. Figure 2 provides an overview of these programs and their associated acquisition phases. Additional details on these programs can be found in appendix I. As shown in figure 3, for the 11 selected DHS programs, TES reviewed and approved test and evaluation master plans for 6 of the 7 programs that were required to develop such plans by the T&E and acquisition directives and had documented its approval of these plans. For the seventh program that was required to develop such a plan—ATLAS Tactical Communications—the test and evaluation master plan had not yet been drafted. The remaining 4 programs had plans in draft form that had not yet been submitted to TES for review. As a result, TES was not yet required to review these plans. Component officials from each of these six programs stated that TES provided input to the development of the test and evaluation master plans. For example, Office of Health Affairs officials stated that TES officials suggested that the BioWatch Gen-3 program office incorporate an additional test event to ensure that the program was tested under specific environmental conditions described in the program's operational requirements document, which resulted in more tests. In addition, U.S. Customs and Border Protection (CBP) officials stated that TES participated in a line-by-line review of the SBInet test plan and provided detailed suggestions. Further, TES suggested that the criteria used for operational testing in the test and evaluation master plan needed to be expanded, and that an update may be required for SBInet to progress to the next acquisition phase.
All of the component program officials who had undergone TES review or approval told us that TES test area managers provided their input in a variety of ways, including participating in T&E working groups, in specific meetings to discuss T&E issues, or by providing written comments culminating in TES's approval of the plan. After the test and evaluation master plan is developed, the test agent is to develop operational test plans, which detail field testing of the system under realistic conditions to determine the system's overall effectiveness and suitability for use before deployment of the system. As shown in figure 4, of the 11 selected acquisition programs, TES reviewed and approved operational test plans for the 4 programs that were required to develop such plans by the acquisition directive and documented its approval of these plans. Component officials from these 4 programs said that TES provided input into their test plans. For example, National Protection and Programs Directorate officials from the National Cybersecurity Protection System program stated that TES had significant comments on their operational test plan, such as including confidence levels associated with the system's key performance requirements and helping program officials select a sample size necessary to measure statistically significant results. In addition, TES officials requested that the plan include different testing scenarios in order to demonstrate a varied use of the system. In addition, officials from the Transportation Security Administration's (TSA) Advanced Technology-2 program indicated that TES provided significant input to their plan through a working group. The remaining 7 of the 11 programs had not yet begun to develop their operational test plans. At the conclusion of operational testing, the test agent is to write a report on the results of the test.
The T&E directive specifies that TES is to receive the operational test report, which is to address all the critical issues and provide an evaluation of the operational suitability and operational effectiveness of the system. After reviewing the operational test report, TES then is to write a letter of assessment—which is an independent assessment of the adequacy of the operational test and provides TES's concurrence or nonconcurrence on the test agent's evaluation of operational suitability and operational effectiveness. TES is to provide the letter of assessment to the ARB as it is determining whether a program should progress to the production and deployment phase. Of the 11 programs we selected to review, TES developed a letter of assessment for the 1 program—TSA's Advanced Technology-2—that had completed operational testing and had a written operational T&E report on the results. The assessment concluded that while the T&E activities were adequate to inform the ARB as to system performance, TES did not concur with TSA's test agent's assessment as to system effectiveness because the system did not achieve a key performance parameter during testing. The ARB considered the letter of assessment and TES's input and granted TSA permission to procure and deploy a limited number of screening machines. TSA will have to go before the ARB again to determine if full-scale production can proceed after TSA has provided the ARB with a business case and risk mitigation plan related to testing issues. The remaining 10 selected programs had not completed operational testing and thus were not ready for letters of assessment. In addition to letters of assessment, TES officials told us that they regularly discuss T&E issues and concerns either verbally or through e-mails with Acquisition Program Management Division officials, who are responsible for organizing ARB meetings.
For example, Acquisition Program Management Division officials stated that they rely on TES to provide candid information about the suitability of various programs' T&E and whether these issues impact a program's readiness to go before the ARB. Further, the officials told us that TES's input at the ARBs, if any, is to be documented in acquisition decision memorandums. Acquisition Program Management Division officials also noted that TES's input may be used in making the decision about when to hold an ARB for a particular program. T&E input from TES is one of many factors the ARB uses in overseeing acquisitions. For example, according to S&T officials, the ARB considers the current threat assessments and the extent to which the program, if implemented sooner, would help to address that threat. The ARB also considers factors such as the cost of the program and potential costs of conducting more testing and whether the results of operational testing were sufficient to achieve the intended benefits of the program. As a result, the ARB may accept a higher level of risk and allow a program to proceed even if testing concerns have been raised, if it determines that other reasons for quicker implementation outweigh these concerns. TES officials also stated that they work extensively with components prior to ARB meetings to ensure that T&E issues are identified, with the goal of resolving these issues before a program goes before the ARB. TES meets with component officials during regular acquisition review team meetings to resolve various issues before ARB meetings are convened. For example, due to concerns about the results of system qualification tests, TES recommended to SBInet program and ARB officials that the program should not proceed to the next milestone—site preparation, tower construction, and sensor and communication equipment installation at the Ajo, Arizona test site—until after operational testing was completed at the Tucson, Arizona test site.
In May 2009, the ARB authorized SBInet to proceed with plans for the Ajo, Arizona site despite TES's advice to the contrary, and directed TES to work with component officials to revise test plans, among other things. While TES's oversight of the test plans and reports for major acquisition programs selected for review is in accordance with provisions in the T&E directive, it did not consistently document its review and approval of certain acquisition documentation or document the extent to which certain requirements in the T&E directive were met. The T&E directive requires that an operational test agent—a government agency or independent contractor carrying out independent operational testing for major acquisition programs—meet certain requirements to be qualified and approved by TES, but does not specify how TES's approval is to be documented. According to the T&E directive, the test agent may be within the same component, another government agency, or a contractor, but is to be independent of the developer and the development contractor. Because the responsibilities of a test agent are significant throughout the T&E process, this independence is to allow the agent to present objective and unbiased conclusions regarding the system's operational effectiveness and suitability to DHS decision makers, such as the ARB. For example, some of the test agent's responsibilities in the T&E directive include: being involved early in the acquisition cycle by reviewing draft requirements documents to help ensure that requirements are testable and measurable; assisting the component program manager in the preparation of the test and evaluation master plan; planning, coordinating, and conducting operational tests and preparing the operational T&E report; and reporting operational test results to the program manager and TES.
According to TES officials, the test agent is also to meet other requirements in order to be approved by TES, such as having expertise or knowledge about the product being tested and having the capacity and resources to execute the operational tests. To ensure that criteria for test agents are met, the T&E directive requires TES to approve all agents for major acquisition programs. As shown in figure 5, of the 11 programs we reviewed, 8 programs had selected a test agent and the others were in the process of selecting a test agent. TES provided documentation, such as memoranda, of its approval for 3 of these 8 programs. For the remaining 5 programs, there was no documentation of the extent to which these test agents had met the criteria or that TES had approved them. According to TES officials, they did not have a mechanism in place requiring a consistent method for documenting their review and approval of component agents or the extent to which criteria used in reviewing these agents were met. In the absence of such a mechanism in fiscal year 2010, TES's approval of test agents was not consistently documented. TES and component officials stated that the approval for the five programs was implicit or provided verbally without documentation regarding whether the test agent met the T&E directive requirements. The T&E directive states that the test agent is to be identified and approved as early as possible in the acquisition process to, among other things, assist the component program officials in developing the test and evaluation master plan and review draft requirements documents to provide feedback regarding the testability of proposed requirements. TES and component officials stated that they assumed that test agents were approved through various approaches. Specifically, of the five programs that had test agents sign the test and evaluation master plan, one program had documented approval from TES.
For example, Coast Guard and Office of Health Affairs officials stated that they did not have explicit documentation of TES's approval of their agents; however, they believed that TES's approval was implicit when TES approved their test and evaluation master plan since the test agent and TES are both signatories on the plan. CBP and National Protection and Programs Directorate officials told us that TES provided verbal approval for their test agents. Since there is no mechanism requiring TES to document its approval of the agent, and approval was granted verbally, there is no institutional record for DHS or an independent third party to validate whether TES followed its criteria when approving these test agents and whether the test agent was identified and approved before the test and evaluation master plan and requirements documents were finalized, as outlined in the T&E directive. With regard to the three programs in which TES had documented its approval in memoranda, these memoranda detailed TES's agreement or nonagreement with a particular agent and highlighted whether the agent met the criteria outlined in the T&E directive. In each of these cases, TES provided interim approval on the condition that the programs demonstrate at a later date that their test agents met all the requirements. For example: In April 2010, TES wrote a memo and granted interim approval with "serious reservations" for 1 year to TSA's test agent for the Passenger Screening program. In the memo, TES cited concerns about the organizational structure and the lack of independence of the test agent since the test agent was part of the same TSA office responsible for managing the program. The memo outlined several steps that TSA should take, including the implementation of interim measures, such as new procedures, to ensure the necessary independence critical to testing and evaluation efforts as required by DHS directives.
TES officials told us that by documenting TES's interim approval in a memo, they were able to communicate their concerns about the test agent's independence to TSA and DHS decision makers and set forth interim measures that TSA needed to take to address those concerns. In July 2010, TES granted conditional approval to the test agent for the U.S. Citizenship and Immigration Services' (USCIS) Transformation program. TES made its approval contingent on the program developing a plan to ensure that the test agent was familiar with the component's business practices. According to TES officials, after component officials gave a briefing to TES, TES determined that the test agent met the requirements and approved it. In January 2011, TES granted conditional approval for the U.S. Secret Service's Information Integration and Transformation program to bring its selected test agent on board. TES's final approval will be given after program officials brief TES on the test agent's operational testing approach, which is to demonstrate that the test agent has knowledge of the product and the capacity to execute the tests. TES officials told us that they do not have approval memos for all of the test agents that have been hired by program offices since the T&E directive was implemented in May 2009. Because TES did not consistently document its approvals of test agents, it is unclear whether TES has ever disapproved a test agent. TES officials acknowledged that they did not consistently document that test agents met T&E requirements or their approval of test agents, and said that it would be beneficial to do so to ensure that agents meet the criteria required in the T&E directive.
In addition, Standards for Internal Control in the Federal Government and associated guidance state that agencies should document key decisions in a way that is complete and accurate, and that allows decisions to be traced from initiation, through processing, to after completion. These standards further state that documentation of key decisions should be readily available for review. Without a mechanism for documenting its review and approval of test agents for major acquisition programs, it will be difficult for DHS or an independent third party to validate TES's decision-making process to ensure that it is effectively overseeing component testing. Moreover, it will be difficult for TES to provide reasonable assurance that these agents met the criteria outlined in the T&E directive, such as the requirement that they be independent of the program being tested. In addition to reviewing and approving test plans, under the T&E directive, TES is required to review certain component acquisition documents, including the mission need statement, operational requirements document, concept of operations, and developmental test reports, among others. These documents, which are required at the Need, Analyze/Select, and Obtain phases of the acquisition process, are to be reviewed by TES to assist component program managers in identifying and resolving technical, logistical, and operational issues early in the acquisition process and to ensure that these documents meet relevant criteria. Specifically, as outlined in the T&E directive, TES is to review the mission need statement to establish awareness of the program and help ensure that the required standards are developed and that the component has identified the appropriate resources and support needed to conduct testing.
TES is also to review the operational requirements document, including the key performance parameters and critical operational issues that specify the operational effectiveness and operational suitability issues that the test agent is to examine in order to assess the system's capability to perform the mission. Further, TES is to review the concept of operations, since this document describes how the technology or equipment will be used in an operating environment. TES is to review the developmental test reports to maintain knowledge of contractor testing and to assist in its determination of the program's readiness to progress to operational testing. We have previously reported that inadequate attention to developing requirements results in requirements instability, which can ultimately cause cost escalation, schedule delays, and fewer end items. Further, we reported that without the required development and review of key acquisition data, DHS cannot provide reasonable assurance that programs have mitigated risks to better ensure program outcomes. TES officials stated that they do not have a mechanism to document or track which acquisition documents they did review, the criteria they used when reviewing those documents, or the extent to which the documents met those criteria. For the 11 DHS programs that we reviewed, 8 programs had component-approved mission need statements; 2 programs, Atlas Tactical Communications and Transformation, had not yet completed such statements; and 1 program, the initial SBInet program, had completed a mission need statement in October 2006, before the T&E directive was issued, and did not develop a separate mission need statement for the Block 1 increment of the program. Of the 8 programs that had mission need statements, officials from 6 components told us that they did not have evidence that TES reviewed the mission need statement in accordance with the T&E directive. Further, TES could not demonstrate that it had received or reviewed these documents.
Since TES did not have documentation of its review, it is difficult to determine the extent to which the documents were reviewed and the extent to which they met the review criteria. TES officials told us that they do not usually provide substantial input into the mission need statements and that they receive these documents to establish awareness of a new program. Further, while one TES test area manager told us that he reviews all developmental test reports, another test area manager told us that some programs do not routinely send him developmental test reports. For example, Secret Service officials said that for the Information Integration and Transformation program they provided the operational requirements document, concept of operations, and integrated logistics support plan to TES. Specifically, the officials said that TES officials were very helpful in providing input on draft documents and improved the documents by suggesting, for example, that the tests be made more realistic by including personnel from field offices, headquarters, and external agencies in the live/production test environment. In contrast, officials from TSA stated that while they provided their mission need statement, concept of operations, integrated logistics support plan, and acquisition program baseline documents for the Advanced Technology 2 (AT-2) program to TES, TES officials did not provide input or comments on any of those documents. TES officials told us that the AT-2 program was initiated and developed some acquisition documentation prior to May 2009, when the T&E directive was issued. Specifically, the operational requirements document was approved and finalized by TSA in June 2008, prior to the T&E directive, and was provided to TES in February 2010 when the program was being reviewed.
When TES reviewed the operational requirements document along with other documents, such as the test and evaluation master plan, TES wrote a memo to TSA in March 2010 requesting that detection performance requirements be clarified and that users concur with the requirements. After several months of discussion, TSA and TES agreed on an approach, which was used as the basis for initial operational T&E. Standards for Internal Control in the Federal Government, as outlined earlier, state that agencies should document key decisions and that documentation of key decisions should be readily available for review. TES officials stated that they do not have a mechanism requiring that they document their review of certain acquisition documentation or the extent to which the documents met the criteria used in reviewing them, and recognized that doing so would be beneficial. Developing a mechanism for TES to document its review of key acquisition documents could better position TES to provide reasonable assurance that it is reviewing key documentation and providing input that is important for determining the outcome of future testing and evaluation efforts, as required by the T&E directive. Moreover, such a policy could help ensure that an institutional record exists for DHS or an independent third party to use in determining whether TES is effectively overseeing component T&E efforts and assisting in managing DHS major acquisition programs. According to the T&E directive, TES is to conduct an independent assessment of the adequacy of an operational test, provide a concurrence or nonconcurrence on the test agent's evaluation of operational suitability and operational effectiveness, and provide any further independent analysis it deems necessary for all major DHS acquisition programs.
TES is to document this independent assessment by writing a letter of assessment within 30 days of receiving the operational test report from the component's test agent and provide the letter of assessment to the ARB, which then uses the assessment in determining whether the program can proceed to purchase and implementation. While TES has developed a letter of assessment for the two other programs that have undergone an ARB decision to enter the production and deployment phase since the T&E directive was issued in May 2009, TES officials told us that they do not plan to write such an assessment for the Advanced Spectroscopic Portal (ASP) program because they are the test agent for ASP and thus are not in a position to independently assess the results of testing that they conducted. In April 2008, over a year before the T&E directive was issued, senior level executives from DHS, S&T, CBP, and the Domestic Nuclear Detection Office (DNDO) signed a memorandum of understanding regarding arrangements for ASP operational testing. The memo designated Pacific Northwest National Laboratory, a U.S. Department of Energy laboratory, as the test agent. However, the memo also outlined the roles and responsibilities of TES, many of which reflected the duties of a test agent, such as developing and approving all operational test plans, managing testing and field validation, and developing and approving operational test reports. TES officials told us that they were using Pacific Northwest National Laboratory staff to carry out the operational tests, but are acting, for all intents and purposes, as the test agent for ASP. TES and DNDO officials told us that this arrangement was made after repeated testing issues arose with the ASP program.
In September 2008, we reported that ASP Phase 3 testing by DNDO provided little information about the actual performance capabilities of ASP and that the resulting test report should not be used in determining whether ASP was a significant improvement over currently deployed equipment. Specifically, we found that the ASP Phase 3 test results did not help determine an ASP's "true" level of performance because DNDO did not design the tests to assess ASP performance with a high degree of statistical confidence. In response to our report, DHS convened an independent review team to assist the Secretary in determining whether he should certify that there would be a significant increase in operational effectiveness with the procurement of the ASP system. The independent review team found that the test results and measures of effectiveness were not properly linked to operational outcomes. In May 2009, we reported that DHS had increased the rigor of ASP testing in comparison with previous tests. For example, DNDO mitigated the potential for bias in performance testing (a concern we raised about prior testing) by stipulating that there would be no ASP contractor involvement in test execution. However, the testing still had limitations, such as a limited set of scenarios used in performance testing to conceal test objects from detection. We also reported that TES was to have the lead role in the final phase of ASP testing. As of February 2011, TES officials told us that the final phase of testing, consisting of 21 days of continuous operation, had not yet been scheduled. With TES acting as the test agent, it is not in a position to exercise its responsibilities during the operational testing phase, such as approving the operational test plan or writing a letter of assessment of the final results of operational testing.
As it has done for two other recent DHS acquisition programs, TES was able to confirm through its independent assessment whether the test agent conducted operational testing as described in the test and evaluation master plan and operational test plan. For example, TES outlined concerns in its letter of assessment to the ARB that the AT-2 system did not meet a stated operational requirement key performance parameter—a throughput measure of bags per hour—for the majority of the time under test, which resulted in a "not effective" determination by TES. TES officials recognized that, as the test agent, they are not in a position to conduct an independent assessment of operational test results and write a letter of assessment for ASP, and that they are the highest-level organization within DHS for both T&E oversight and operational test expertise. They further stated that the decision to have TES serve as the test agent was made prior to the issuance of the T&E directive and that it was too late in the program's development to go back and select another agent. Nevertheless, TES officials recognized that this one-time situation would result in the lack of an independent assessment of ASP test results, and there were no plans to conduct or contract for such an independent assessment. While we acknowledge that this decision was made prior to the T&E directive and the requirement that TES write a letter of assessment for all major acquisition programs, it is nonetheless important that ASP undergo an independent assessment of its test results, since its operational test plan, which was developed by TES, was not subject to independent oversight. Because ASP has faced testing issues, many of which we have reported on in past years, it is important that this program undergo oversight to help prevent similar problems from recurring.
Without an independent assessment of ASP's operational test results, it will be difficult to ensure that operational testing was properly planned and conducted and that the performance results are useful. In addition, arranging for an independent assessment of operational test results could provide the ARB with critical information on testing and evaluation efforts to help it determine whether ASP should be approved for purchase and implementation. TES and component officials reported challenges in coordinating and overseeing T&E across DHS components that fell into four primary categories: (1) ensuring that a program's operational requirements—the key requirements that must be met for a program to achieve its intended goals—can be effectively tested; (2) working with DHS component program staff that have limited T&E expertise and experience; (3) using existing T&E directives and guidance to oversee complex information technology acquisitions; and (4) ensuring that components allow sufficient time and resources for T&E while remaining within program cost and schedule estimates. Both TES and DHS, more broadly, have begun initiatives to address some of these challenges, but it is too early to determine their effectiveness. Both TES and component officials stated that one of their challenges is developing requirements that are testable, consistent, accurate, and complete. Specifically, six of the nine TES test area managers told us that working with DHS components to ensure that operational requirements can be tested and are suitable to meet mission needs is important because requirements development is one of the biggest challenges facing DHS. For example, one TES test area manager described the difficulty of drafting a test and evaluation master plan when operational requirements are not testable and measurable.
Another TES test area manager indicated that programs' operational requirements documents often do not contain user needs or operational requirements for system performance, which leads to difficulties in testing those requirements later. Further, six of the nine TES test area managers said that some components' operational requirements are difficult to test as written, which results in delays in drafting T&E documents and affects program cost and schedule parameters. Our prior work has found that program performance cannot be accurately assessed without valid baseline requirements established at program start. According to DHS guidance, the baseline requirements must include a threshold value, that is, the minimum acceptable value that, in the user's judgment, is necessary to satisfy the need. In June 2010, we reported that if threshold values are not achieved, program performance is seriously degraded, the program may be too costly, or the program may no longer be timely. In addition, we reported that inadequate knowledge of program requirements is a key cause of poor acquisition outcomes and that, as programs move into the produce and deploy phase of the acquisition process, problems become much more costly to fix. We have made a number of recommendations to address these issues, and DHS has generally agreed with them and, to varying degrees, has taken actions to address them. For example, in May 2010, we reported that not all of the SBInet operational requirements that pertain to Block 1—a surveillance, command, control, communications, and intelligence system being fielded in two portions of the international border in Arizona—were achievable, verifiable, unambiguous, and complete. Specifically, a November 2007 DHS assessment determined that 19 operational requirements, which form the basis for the lower-level requirements used to design and build the system, were not complete, achievable, verifiable, or affordable.
Further, the DHS assessment noted that a requirement that the system provide complete coverage of the border was determined to be unverifiable and unaffordable because defining what complete coverage meant was too difficult and ensuring complete coverage, given the varied and difficult terrain along the border, was cost prohibitive. To address these issues, we recommended that the currently defined Block 1 requirements, including key performance parameters, be independently validated as complete, verifiable, and affordable and that any limitations found in the requirements be addressed. Furthermore, CBP program officials told us that they recognized the difficulties they experienced with requirements development practices on the SBInet program. Within CBP, the Office of Technology, Innovation, and Acquisition has responsibility for managing the SBInet program. Office of Technology, Innovation, and Acquisition officials told us that their office was created to strengthen expertise in acquisition and program management of SBInet. In May 2009, we reported that ASP testing uncovered multiple problems in meeting the requirements for successful integration into operations at ports of entry. As a result, we recommended that DHS assess ASPs against the full potential of current equipment and revise the program schedule to allow time to conduct computer simulations of ASP's capabilities and to uncover and resolve problems with ASPs before full-scale deployment. We also reported that other TSA technology projects were delayed because TSA had not consistently communicated clear requirements in order to test the technologies. We recommended that TSA evaluate whether current passenger screening procedures should be revised to require the use of appropriate screening procedures until it is determined that existing emerging technologies meet their functional requirements in an operational environment.
In March 2011 testimony, the Under Secretary for S&T stated that S&T had begun working with the DHS Under Secretary for Management to use their collective expertise and resources to better address the "front end" of the acquisition cycle, namely, the translation of mission needs into testable requirements. Further, in response to this challenge, S&T has reorganized and established an Acquisition Support and Operations Analysis Group, which is to provide a full range of coordinated operations analysis, systems engineering, T&E, and standards development support for DHS components. In addition, TES's T&E Council is currently focusing on the challenges related to requirements development. Specifically, TES test area managers have given briefings to component officials at council meetings that provide information on how to better generate requirements. Further, in response to our previously mentioned report designating DHS on the high-risk list, DHS developed a strategy to, among other things, strengthen its requirements development process. DHS's January 2011 strategy describes the establishment of a capabilities and requirements council to evaluate and approve operational requirements early in the acquisition process. Specifically, the capabilities and requirements council is to, among other things, reconcile disagreements across program offices and approve analyses of alternatives and operational requirements documents. We stated in a March 2011 response to DHS on its strategy that it was unclear how the introduction of new governance groups would streamline the process and address previously identified issues because it appeared that the governance groups are chaired by the Deputy Secretary and have many of the same participants.
Since the S&T reorganization has only recently taken place and the T&E Council and the department's strategy have only recently begun to address the challenge of requirements generation, it is too soon to determine the effectiveness of these actions. TES officials told us that T&E experience and expertise within DHS components varies, with some components possessing staff with extensive T&E experience and expertise and others having relatively little. For example, TES officials noted that the Coast Guard and TSA have T&E policies and procedures in place, as well as staff with extensive T&E experience, which limited their dependence on TES for T&E expertise. Other components in DHS told us they rely more on TES or contractors for T&E expertise. For the 11 DHS programs we reviewed, officials from components that do not have many acquisition programs, such as the Office of Intelligence and Analysis, reported needing more assistance from TES in, for example, identifying and selecting appropriate and qualified test agents. Conversely, components with more acquisition programs, such as the Coast Guard, told us that they have well-established test agents and procedures in place and require little guidance from TES. For example, we reported in April 2011 that most Coast Guard major acquisition programs leverage Navy expertise, in some way, to support a range of testing, engineering, and other program activities. Furthermore, CBP recently established a new office whose goal is to strengthen expertise in acquisition and program management, including T&E, and ensure that CBP's technology efforts are focused on its mission and integrated across the agency. In response to this challenge, TES has worked with DHS's Acquisition Workforce Office to develop T&E certification requirements and training for components.
TES officials told us that they have worked with the Acquisition Workforce Branch and developed pilot courses on T&E for component T&E staff, including Fundamentals of Test and Evaluation, Intermediate Test and Evaluation, and Advanced Test and Evaluation. In April 2010, DHS issued an acquisition workforce policy that establishes the requirements and procedures for certification of DHS T&E managers. The policy allows T&E managers to be certified at a level that is commensurate with their education, training, and experience. Component staff from 6 of the 11 programs we reviewed said they participated in TES's certification training program and believed that the training would assist them in carrying out their T&E responsibilities. In addition, TES is in the process of hiring four additional staff to assist the test area managers in their T&E oversight responsibilities and hopes to have the additional staff hired by the end of fiscal year 2011. Lack of DHS staff to conduct acquisition oversight, including T&E, is a departmentwide challenge. In our previous reports, DHS acquisition oversight officials said that funding and staffing levels have limited the number of programs they can review. We recommended that DHS identify and align sufficient management resources to implement oversight reviews in a timely manner. DHS generally concurred with the recommendation and, as of January 2011, has reported taking action to address it by identifying needed capabilities and hiring staff to fill identified gaps. Further, to address this challenge, in 2009 and 2010, T&E Council representatives from the Acquisition Workforce Branch made presentations at council meetings to update members on the status of various acquisition workforce issues, including T&E certification. For example, presenters asked T&E Council members to inform their respective components about new T&E certification courses and to provide information on how to sign up for the courses.
In 2010, DHS implemented the Acquisition Workforce Policy, which allowed the department to begin certifying T&E acquisition personnel. While DHS has undertaken efforts to help address these challenges, it is too soon to evaluate their impact. Effectively managing IT acquisitions is a governmentwide challenge. TES and component officials we interviewed told us that T&E guidance, such as specific guidance for integrating developmental testing and operational testing, may not be sufficient for the acquisition of complex IT systems. Specifically, component officials stated that the assessment of risks and environmental factors is different for IT programs than for other acquisitions and that conducting testing in an operational environment may not be necessary for IT programs because the operational environment is no different than the test environment. In addition, four of the nine test area managers told us that aspects of the existing T&E guidance may not directly apply to IT acquisitions. The department is in the process of modifying its acquisition process to better accommodate information technology acquisitions. According to the previously mentioned January 2011 strategy submitted to GAO, DHS is piloting a new model for IT acquisitions. Under this model, which is to be consistent with the department's overall acquisition governance process, many of the steps would be similar or identical to those in the current process, but time frames for different types of acquisitions would be instituted. For example, acquisition programs designated as IT programs may go through a more streamlined acquisition process that may better fit the rapidly changing IT environment, and the ARB would have the option to delegate oversight responsibilities to an executive steering committee.
In other cases, TES and component officials are investigating the possibility of conducting integrated testing—the combination of developmental and operational testing—for some programs, although this process may take longer to plan and pose greater risks because testing is being done simultaneously. Further, the T&E Best Practices Integrated Working Group, a subgroup of the T&E Council that includes TES, Acquisition Program Management Division, and Office of the Chief Information Officer officials, was working to identify and promote T&E best practices for IT system acquisition. This group drafted an operational test agent risk assessment process to validate the streamlining process approach while adhering to acquisition and T&E policy and directives, and as of March 2011, one component, USCIS, had made use of this process. Additionally, three other programs are investigating the possible use of this process and the possibility of tailoring or eliminating T&E deliverables or operational T&E requirements for IT programs, with the approval of TES. The group has identified three IT acquisition programs to serve as a pilot for this effort. As DHS considers modifications to its T&E process for IT programs, it must also consider the effect such a change could have on determining a system's technical performance and evaluating the system's operational effectiveness and suitability. For example, we have previously reported on testing problems with SBInet, a CBP program designated as an IT program. We found that SBInet testing was not performed in a manner that would adequately ensure that the system would perform as intended. Among the factors contributing to these problems was insufficient time for reviewing and approving test documentation, which, in part, led to test plans and test cases not being well defined.
As a result, we recommended that test schedules, plans, cases, and procedures be adequately reviewed and approved consistent with the revised test and evaluation master plan. Since the efforts DHS is taking to address this challenge have only recently been initiated, it is too early to tell what impact they will have on the overall challenges of T&E for IT programs. Both TES and component officials stated that balancing the need to conduct adequate T&E within the confines of a program's cost and schedule is a recurring challenge, and one that is difficult to solve. We have previously reported on the challenges associated with balancing the need to conduct testing within program cost and schedules. Our past review of the Department of Defense's (DOD) Director of Operational Test and Evaluation found that while the acquisition community has three central objectives—performance, cost, and schedule—the Director of Operational Test and Evaluation has but one: operational testing of performance. We reported that these distinct priorities can lead to testing disputes. These disputes encompassed issues such as (1) how many and what types of tests to conduct; (2) when testing should occur; (3) what data to collect, how to collect them, and how to analyze them; and (4) what conclusions were supportable, given the analysis and limitations of the test program. The foundation of most of these disputes lay in different notions of the costs and benefits of testing and the levels of risk that were acceptable when making full-rate production decisions. The DOD Director of Operational Test and Evaluation consistently urged more testing (and consequently more time, resources, and cost) to reduce the level of risk and number of unknowns before the decision to proceed to full-rate production, while the services consistently sought less testing and accepted more risk when making production decisions.
These divergent dispositions frequently led to healthy debates about the optimal test program, and in a small number of cases, the differences led to contentious working relations. TES and DHS component officials expressed views similar to those expressed in our past work at DOD. Of the nine TES test area managers we talked with, four told us that allowing appropriate time and resources for T&E within program cost and schedule is a challenge. According to these test area managers, component program management officials often do not incorporate sufficient time within their schedules for T&E or reduce the time allowed for T&E to save time and money. In one test area manager's view, doing so can reduce the effectiveness of testing or negatively impact the results of the tests. However, TSA officials told us that TES wanted to insert new test requirements for the AT-2 program—including the involvement of more TSA staff in the tests—after the program schedule was established, which made it difficult to accommodate the changes and resulted in some delays. TES officials told us that these test requirements were in lieu of other planned field testing, which was not consistent with the program's concept of operations, and that TSA officials agreed with the new test requirements. According to TES and component officials we spoke with, both the program officials and TES understand the views and perspectives of one another and recognize that a balance must be struck between effective T&E and managing programs within cost and schedule. As a result, TES is working with program officials through the T&E Council or T&E working groups to discuss these issues early in the acquisition cycle (before it is too late), particularly while developing the test and evaluation master plan, which outlines the time allowed for testing and evaluation.
Timely and accurate T&E results for major acquisitions early in the acquisition process can help DHS’s senior level managers make informed decisions about the development, procurement, deployment, and operation of DHS’s multibillion dollar portfolio of systems and services. Improving the oversight of component T&E activities is but one part of the significant challenges DHS faces in managing its acquisitions. Components themselves are ultimately responsible for the management and implementation of their programs, and DHS senior level officials are responsible for making key acquisition decisions that lead to production and deployment. TES helps support acquisition decisions by providing oversight of T&E for major acquisitions, which can help reduce, but not eliminate, the risk that new systems will not be operationally effective and suitable. Since the Homeland Security Act creating DHS was enacted in 2002, S&T has had the responsibility for overseeing T&E activities across the department. However, S&T did not have the staff or the acquisition and T&E directives in place to conduct such oversight across DHS components until May 2009, when DHS issued its T&E directive. Since then, TES has implemented some of the requirements and overseen T&E of the major acquisitions we reviewed, as well as provided independent assessments of operational test results to the ARB. However, TES has not consistently documented its compliance with the directives. Documenting that TES is fulfilling the requirements of the DHS acquisition and T&E directives, including approving operational test agents and reviewing key acquisition documentation, and the extent to which its review and approval criteria are met, would help TES demonstrate that it is conducting T&E oversight and meeting the requirements in these directives. 
Furthermore, without an independent assessment of operational test results for the Advanced Spectroscopic Portal (ASP) program, a key T&E oversight requirement in the T&E directive, the ARB will lack the T&E oversight and input it needs to determine whether ASP is ready to progress toward production and deployment. This is especially important, given that program’s troubled history, which we have highlighted in a series of prior reports. To better ensure that testing and evaluation requirements are met, we recommend that the Secretary of Homeland Security direct the Under Secretary for Science & Technology to take the following two actions: Develop a mechanism to ensure that TES documents its approval of operational test agents, the extent that the test agents meet the requirements in the T&E directive, and the criteria that TES uses in reviewing these test agents for major acquisition programs. Develop a mechanism to ensure that TES documents its required review of component acquisition documents, including the mission need statements, concept of operations, operational requirements documents, developmental test reports, test plans, and other documentation required by the T&E directive; the extent that these documents meet the requirements in the T&E directive; and the criteria that TES uses in reviewing these documents. To ensure that the ARB is provided with an independent assessment of the operational test results of the ASP program to help determine whether the program should be approved for purchase and implementation, we recommend that the Secretary of Homeland Security take the following action: Arrange for an independent assessment, as required by the T&E directive, of ASP’s operational test results, to include an assessment of the adequacy of the operational test and a concurrence or nonconcurrence on the operational test agent’s evaluation of operational suitability and operational effectiveness. 
We received written comments on a draft of this report from DHS on June 10, 2011, which are reproduced in full in appendix III. DHS concurred with all three of our recommendations. DHS concurred with our first recommendation that S&T develop a mechanism to ensure that TES documents (1) its approval of operational test agents, (2) the extent that the test agents meet the requirements in the T&E directive, and (3) the criteria that TES uses in reviewing these test agents for major acquisition programs. Specifically, DHS stated that the Director of TES issued a memorandum to test area managers and TES staff regarding the operational test agent approval process, which describes the responsibilities, considerations for selection, and the process necessary to select an operational test agent. In addition, DHS stated that TES is drafting memos approving operational test agents using the new test agent approval process. DHS also concurred with our second recommendation that S&T develop a mechanism to ensure that TES documents (1) its required review of component acquisition documents required by the T&E directive, (2) the extent that these documents meet the requirements in the T&E directive, and (3) the criteria that TES uses in reviewing these documents. DHS stated that the Director of TES issued a memorandum to test area managers and TES staff detailing the role of TES in the document review process and the process that TES staff should follow for submitting their comments on these documents. Finally, DHS concurred with our third recommendation that S&T arrange for an independent assessment of ASP’s operational test results. DHS stated that the ASP program is under review and does not have an operational test scheduled. However, TES is investigating the option of using a separate test agent to conduct operational testing of ASP, which would allow TES to perform the independent assessment and fulfill its independent oversight role as outlined in DHS policy. 
Such actions, if taken, will fulfill the intent of this recommendation. DHS also provided technical comments on the report, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees and the Secretary of Homeland Security. The report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have questions regarding this report, please contact me at (202) 512-9627 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. An effort to develop and deploy technologies to allow Customs and Border Protection to detect nuclear or radiological materials from conveyances, such as trucks, entering the United States at land and sea ports of entry. U.S. Immigration and Customs Enforcement (ICE) An effort to modernize ICE’s tactical communications systems and equipment, which ICE agents and officers use to support mission-critical communications, from outdated analog systems to modern and standardized digital systems. Project 25 upgrades will modernize tactical communications and deploy site infrastructure and end-user subscriber radios. The Interoperable Rapid Deployment System (IRDS) will outfit ICE with transportable communications systems to support rapid deployment requirements for routine, emergency and disaster response, and special operations. The program is divided into six segments, including: (1) P25 upgrades for the Atlanta Region, (2) P25 upgrades for the Boston Region, (3) P25 upgrades for the Denver Region, (4) P25 upgrades for the central hub infrastructure, (5) IRDS mobile radio communications kits that support disaster and emergency response operations, and (6) IRDS Mobile Communications System (MCS) mobile communications vehicles that support disaster and emergency response operations. 
In March 2011, the Component Acquisition Executive determined that the TACCOM program would be consolidated with other ICE infrastructure programs and that the program would be required to submit acquisition documentation to the ARB prior to August 2011. Segment 1: Produce/Deploy/Support; Segments 2-6: In the process of being updated. A nationwide, interoperating network of detectors/identifiers that is to provide autonomous air-sampling analysis of the environment for biological agents of concern. The system is to enable detection, identification, and reporting of recognized organisms within a 6-hour period. (4) Obsolete Component Modernization (OCM) – Replace obsolete components and subsystems; (5) Ship Helicopter Secure Traverse System (SHSTS) – Provide the ability to automatically secure the aircraft to the flight deck and traverse it into the hangar; and (6) Automatic Flight Control System (AFCS/Avionics) – Modernize the digital Common Avionics Architecture System (CAAS) common with the H-60T upgrade and the digital Automatic Flight Control System. An effort to modernize the Secret Service’s IT infrastructure, communications systems, applications, and processes. The program is divided into four discrete segments: (1) Enabling Capabilities: IT Infrastructure Modernization/Cyber Security/Database Architecture; (2) Communications Capabilities; (3) Control Capabilities; and (4) Mission Support Capabilities. In February 2011, the ARB granted acquisition decision event 2A and acquisition decision event 2B decisions for the Enabling Capabilities segment. The remaining three segments remained in the Analyze/Select phase. An integrated system of intrusion detection, analytical, intrusion prevention, and information-sharing capabilities that are to be used to defend the federal civilian government’s information technology infrastructure from cyber threats. Includes the hardware, software, supporting processes, training, and services that are to be developed and acquired to support the mission. The initial system, known as Einstein, was renamed Block 1.0 and included capabilities such as centralized data storage. 
(1) Block 2.0 is to add an Intrusion Detection System (IDS), which is to assess network traffic for the presence of malicious activity; (2) Block 2.1 is to provide Security Incident and Event Management, which is to enable data aggregation, correlation, and visualization; and (3) Block 3.0 is to provide an intrusion prevention capability. A joint initiative between Intelligence and Analysis (I&A) and the Office of the Chief Information Officer that is to bring a unified, enterprise approach to the management of all classified information technology infrastructure, including: (1) the Homeland Secure Data Network (HSDN) for secret level communications infrastructure; (2) the Homeland Top Secret Network (HTSN) for top secret communications infrastructure; and (3) Homeland Secure Communications (HSC) for classified voice and video teleconference capabilities. A next generation of x-ray technology that is to complement the traditional x-ray technology and provide new technical capabilities, such as automated detection algorithms, threat image projection, alternate viewing stations, bulk explosive algorithms, and expanded threat lists that incorporate emerging threats to aviation security. A system that is to provide Transportation Security Officers the ability to screen passengers’ carry-on baggage at airports nationwide. A program that is to deliver surveillance and decision-support technologies that create a virtual fence and situational awareness along the U.S. border with Mexico and Canada. The first SBInet deployment of the Block I system took place in the Tucson, Arizona, station. The second deployment of the Block I system took place in the Ajo, Arizona, station. In January 2011, the SBInet program ended as originally conceived; however, a limited deployment of technology, including 15 sensor towers and 10 communications towers, remained deployed and operational in Arizona. The T&E results on these towers were to be reported sometime in April 2011. TASC is to develop and field an integrated financial management, asset management, and procurement management system solution. The program is to use standard business processes and a single line of accounting compliant with the common governmentwide accounting classification structure. 
The TASC Executive Steering Committee determined that the Federal Emergency Management Agency will be the first DHS component to migrate to TASC. U.S. Citizenship and Immigration Services (USCIS) An effort to move immigration services from a paper-based model to an electronic environment. The program is to deliver a simplified, Web-based system for benefit seekers to submit and track their applications. The new, account-based system is to provide customers with improved service. In fiscal year 2010, there were 86 acquisition programs on the Acquisition Program Management Division’s oversight list, which included the acquisition level and designation as an information technology acquisition. Table 2 lists information on these 86 acquisition programs, and in addition, includes information on the acquisition phase for each program as of April 2011 and whether the program was subject to the test and evaluation (T&E) directive. For example, some programs, such as Customs and Border Protection’s acquisition of Border Patrol Facilities, would not involve any T&E activities and therefore would not be subject to the requirements in the T&E directive or DHS Science and Technology Directorate’s Test and Evaluation and Standards office (TES) oversight. In addition to the contact named above, Christopher Currie (Assistant Director), Nancy Kawahara, Bintou Njie, Melissa Bogar, Jessica Drucker, Caitlin White, Richard Hung, Michele Fejfar, Labony Chakraborty, Tracey King, Paula Moore, Dan Gordon, Michele Mackin, Molly Traci, and Sean Seales made significant contributions to this report.
In recent years, GAO has reported on challenges the Department of Homeland Security (DHS) has faced in effectively managing major acquisitions, including programs which were deployed before appropriate testing and evaluation (T&E) was completed. In 2009 and 2010 respectively, DHS issued new T&E and acquisition directives to address these challenges. Under these directives, DHS Science and Technology Directorate's (S&T) Test & Evaluation and Standards Office (TES) is responsible for overseeing T&E of DHS major acquisition programs--that is, those with over $300 million in life-cycle costs--to ensure that T&E and certain acquisitions requirements are met. GAO was asked to identify (1) the extent to which TES oversees T&E of major acquisitions; and (2) what challenges, if any, TES officials report facing in overseeing T&E across DHS components. GAO reviewed DHS directives and test plans, interviewed DHS officials, and reviewed T&E documentation from a sample of 11 major acquisition programs, one from each of 11 different DHS components. The results of the sample cannot be generalized to all DHS programs, but provided insights. TES met some of its oversight requirements for T&E of acquisition programs GAO reviewed, but additional steps are needed to ensure that all requirements are met. Specifically, since DHS issued the T&E directive in May 2009, TES has reviewed or approved T&E documents and plans for programs undergoing testing, and conducted independent assessments for the programs that completed operational testing during this time period. TES officials told GAO that they also provided input and reviewed other T&E documentation, such as components' documents describing the programs' performance requirements, as required by the T&E directive. DHS senior level officials considered TES's T&E assessments and input in deciding whether programs were ready to proceed to the next acquisition phase. 
However, TES did not consistently document its review and approval of components' test agents--a government entity or independent contractor carrying out independent operational testing for a major acquisition--or document its review of other component acquisition documents, such as those establishing programs' operational requirements, as required by the T&E directive. For example, 8 of the 11 acquisition programs GAO reviewed had hired test agents, but documentation of TES approval of these agents existed for only 3 of these 8 programs. Approving test agents is important to ensure that they are independent of the program and that they meet requirements of the T&E directive. TES officials agreed that they did not have a mechanism in place requiring a consistent method for documenting their review or approval and the extent to which the review or approval criteria were met. Without mechanisms in place for documenting its review or approval of acquisition documents and T&E requirements, such as approving test agents, it is difficult for DHS or a third party to review and validate TES's decision-making process and ensure that it is overseeing components' T&E efforts in accordance with acquisition and T&E directives and internal control standards for government entities. TES and DHS component officials stated that they face challenges in overseeing T&E across DHS components, which fell into four categories: (1) ensuring that a program's operational requirements--the key performance requirements that must be met for a program to achieve its intended goals--can be effectively tested; (2) working with DHS component program staff who have limited T&E expertise and experience; (3) using existing T&E directives and guidance to oversee complex information technology acquisitions; and (4) ensuring that components allow sufficient time for T&E while remaining within program cost and schedule estimates. 
Both TES and DHS, more broadly, have begun initiatives to address some of these challenges, such as establishing a T&E council to disseminate best practices to component program managers, and developing specific guidance for testing and evaluating information technology acquisitions. In addition, S&T has reorganized to assist components in developing requirements that can be tested, among other things. However, since these efforts have only recently been initiated to address these DHS-wide challenges, it is too soon to determine their effectiveness. GAO recommends, among other things, that S&T develop mechanisms for TES to document its review or approval of component acquisition documentation and T&E requirements, such as approving operational test agents. DHS agreed with GAO's recommendations.
Canada and Mexico are the United States’ first and third largest trading partners, respectively, and most freight between the United States and these countries is transported by truck and rail. Freight trains include bulk freight and intermodal freight. Bulk freight—such as grain, automobiles and component parts, coal, and chemicals—is transported in rail cars. For example, railroads deliver automotive parts made in the United States to assembly plants in Mexico and return finished automobiles from Mexico by rail. In addition, according to AAR representatives, bulk freight such as grain and lumber enters the United States along the northwestern border with Canada. Further, “intermodal” freight consists of containers carried by rail and transferred to or from other transportation modes, such as ships or trucks. For example, intermodal freight containers arrive at Prince Rupert in western Canada from Asia by ship and are transferred to rail and exported to the United States, entering through Ranier, Minnesota. Intermodal freight generally consists of consumer goods such as furniture and computers and, according to FRA, has been the fastest growing segment of the freight rail industry in the United States since 1980. Inbound international rail traffic has grown over the past 5 years, but the increase is not uniform across U.S. POEs, and traffic is projected to increase further at certain POEs. According to BTS data, the number of inbound trains increased 6 percent on the northern border and 29 percent along the southern border from 2010 through 2014. All international rail traffic enters and exits the continental United States through 30 different rail POEs—23 along the Canadian border and 7 along the Mexican border. 
The top 8 rail POEs on the northern and southern borders carried 68 percent of inbound rail traffic while 14 rail POEs—mainly along the northern border—received less than one inbound train a day on average over the past 5 years, according to BTS data (see fig. 1). Ranier, Minnesota, and Laredo, Texas, have the highest number of inbound trains on the northern and southern borders with an average of 10 and 9 trains per day from 2010 through 2014, or an average of 3,675 and 3,466 inbound trains per year, respectively. Some stakeholders predict growth in international rail traffic in certain POEs. For example, representatives from one railroad noted that intermodal traffic through Ranier, Minnesota, will continue to grow since the port at Prince Rupert in Canada has announced an expansion of its capacity. In addition, carmakers announced that they have added additional plants and increased capacity in Mexico, which is likely to result in additional automotive traffic by rail over the southern border. Train movements can result in blocked highway-rail grade crossings, where vehicular traffic must wait to cross the tracks when trains are slowed or stopped (see fig. 2). The amount of time that highway-rail grade crossings are blocked depends on a number of factors, and is typically a function of the number, speed, and length of trains. Blocked highway-rail grade crossings can contribute to community vehicular congestion, and communities face challenges prioritizing and funding projects to alleviate these impacts. Negative community effects resulting from blocked highway-rail grade crossings include delays to motorists, blocked emergency vehicles, and quality of life impacts. State and local departments of transportation, which have primary responsibility for building, maintaining, and operating roads, can plan and fund projects to alleviate freight-related traffic congestion. 
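As an illustrative cross-check, not part of the report, the annual and daily averages cited above are related by a simple conversion; the 365.25 days-per-year divisor is our assumption for a multiyear average spanning the 2012 leap year:

```python
# Illustrative cross-check of the BTS averages cited above (2010 through 2014).
# The annual train counts come from the report; the divisor is our assumption.

DAYS_PER_YEAR = 365.25  # average days per year across a span including a leap year

def trains_per_day(avg_trains_per_year: float) -> float:
    """Convert an average annual inbound train count to an average daily count."""
    return avg_trains_per_year / DAYS_PER_YEAR

ranier = trains_per_day(3_675)   # Ranier, Minnesota
laredo = trains_per_day(3_466)   # Laredo, Texas

print(f"Ranier: {ranier:.1f} trains/day; Laredo: {laredo:.1f} trains/day")
```

The results, roughly 10.1 and 9.5 trains per day, are consistent with the approximately 10 and 9 trains per day cited above.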
In addition, some MPOs assist state and local governments in planning and prioritizing such projects, including grade separation projects such as overpasses and underpasses to allow vehicular traffic to bypass freight rail movements. The freight rail system operates almost exclusively on infrastructure that is owned, built, maintained, and funded by private railroads, particularly the seven largest freight railroads. Generally, train movements within the United States are dispatched, or controlled, by railroad personnel located in the United States. While DOT has a role in directing federal transportation policy, including freight rail, FRA issues regulations as part of its role to oversee the safety and reliability of the national freight network. In 2012, the Moving Ahead for Progress in the 21st Century Act (MAP-21) transportation reauthorization established a framework for a national freight policy and, among other things, directed DOT to develop a national freight strategic plan. The plan was to be developed in consultation with state departments of transportation and other transportation stakeholders and was to include best practices to mitigate the impacts of freight movements on communities. MAP-21 also required DOT to encourage states to develop freight plans with a description of procedures to guide states’ investment decisions involving freight transportation. FRA issues regulations that set requirements for train crews and equipment operating in the United States. Additionally, FRA manages a National Highway-Rail Crossing Inventory that provides a uniform national database of the nation’s highway-rail grade crossings, which can be used for planning and implementation of crossing safety improvements. According to the FRA, train lengths in general have been increasing in recent years and agency regulations do not place restrictions on the amount of time trains can block highway-rail grade crossings or on train lengths. 
Representatives from two railroads noted that current maximum train lengths are generally 10,000 feet—or about 2 miles. These representatives noted that these maximum train lengths are largely determined based on the capacity of the current rail system infrastructure. As part of its mission to safeguard U.S. borders while enabling legitimate trade and travel, CBP has personnel, including CBP Agricultural Specialists, located at rail POEs who scan inbound trains for security threats. CBP procedures generally include the following, which CBP officials said may vary slightly by POE: Advanced targeting: About 2 hours before the train arrives at the border, CBP electronically obtains the train’s manifest, which provides information on the train’s contents, from the railroad. Using CBP’s Automated Targeting System, CBP officials identify rail cars deemed high-risk for additional inspection. For example, as part of efforts to identify high-risk shipments, CBP Agricultural Specialists check the manifest against U.S. quarantine regulations. Rail Vehicle and Cargo Inspection System (R-VACIS): Inbound trains slow to pass through R-VACIS, a machine that produces an image of the inside of railcars using gamma radiation technology (see fig. 3). CBP officers review the scanned images for anomalies that may indicate the presence of un-manifested goods and contraband, including threats that could pose a risk to national security. Secondary physical inspections: Depending on the outcome of the advanced targeting and R-VACIS scan, CBP conducts secondary physical inspections of rail cars. Both DOT and CBP participate in working groups consisting of representatives from the United States, Canada, and Mexico that seek to improve processes related to the safety and fluidity of international trade, including freight rail. Coordination between the United States and Mexico and Canada is generally framed by larger government-to-government partnerships. 
The U.S.-Canada Beyond the Border Initiative addresses cross-border policies, and the U.S.-Canada Regulatory Cooperation Council coordinates the joint development of regulatory standards between the United States and Canada. The High Level Economic Dialogue between Mexican and U.S. officials is designed, in part, to secure trade flows and cross-border cooperation between the two countries. In addition, the Transportation Border Working Group between the United States and Canada and the U.S.-Mexico Joint Working Committee on Transportation Planning focus on transportation issues. For example, the U.S.-Mexico Joint Working Committee on Transportation Planning led efforts to create border master plans to prioritize transportation needs along the southern border, including at rail POEs. To develop these border master plans, local, regional, state, and federal stakeholders on both sides of the border coordinated to prioritize transportation projects. In all four communities we visited, stakeholders such as railroads, local officials, and BLET representatives identified R-VACIS inspection procedures, which affect inbound trains, as a key source of reduced train speeds. CBP has directed that inbound trains pull through the R-VACIS at a predetermined rate of speed set by CBP in order to obtain and review quality scans. The impacts of R-VACIS inspections on train movements and highway-rail grade crossings can vary by the location of the R-VACIS. According to CBP officials, the machine is typically located right at the international border, with the exception of three locations on the northern border. The R-VACIS in Blaine is located approximately 3 miles inland from the Canadian border. According to a railroad representative in Blaine, the average maximum length of trains at this POE is 6,500 feet. 
Based on our calculations, it would take a train of this length approximately 15 minutes to pass through the R-VACIS at 5 miles per hour, potentially affecting one or two highway-rail grade crossings. In contrast, CBP officials stated that the R-VACIS machines at the Port Huron and Detroit, Michigan, POEs are located in Canada. Trains pass through the R-VACIS in these locations at a predetermined speed and, once scanned, can proceed to enter the United States at a higher speed. CBP officials noted that these placements, which resulted from a Declaration of Principles for the improved security of rail shipments from Canada to the United States, were necessary because the tunnel infrastructure at these POEs requires that trains exit at high speeds. CBP officials also noted that they do not have the authority to physically inspect cargo in Canada. In addition, when secondary physical inspections occur, they may require trains to slow and stop, and CBP officials stated that the location of the inspections varies by POE and by the threat level CBP designates for the shipment. CBP officials also said that higher-risk threats, such as shipments containing suspected unauthorized persons (known as stowaways) or weapons, are inspected immediately and that lower-risk threats, such as paperwork discrepancies, are inspected later, farther from the border. For example, CBP officials stated that CBP does not use R-VACIS to intentionally scan for people; however, CBP officials in Laredo said that if CBP officers do detect a stowaway on the train, the individual must immediately be secured and removed, which could result in the train being stopped for about 45 minutes, during which highway-rail grade crossings on the U.S. side may be blocked. CBP officials in Laredo stated that eight stowaways were inadvertently detected on these trains last year, mostly at night. 
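The roughly 15-minute estimate for Blaine follows directly from the cited train length and scan speed; a minimal sketch of the arithmetic (the 6,500-foot length and 5 mile-per-hour speed are the figures reported above; the function name is ours):

```python
# Time for a full train length to clear the R-VACIS scanner at a fixed scan speed.
# Length (6,500 ft) and speed (5 mph) are the figures cited in the text.

FEET_PER_MILE = 5_280

def transit_minutes(train_length_ft: float, speed_mph: float) -> float:
    """Minutes for the entire train length to pass a fixed point."""
    feet_per_minute = speed_mph * FEET_PER_MILE / 60
    return train_length_ft / feet_per_minute

print(f"{transit_minutes(6_500, 5):.0f} minutes")  # roughly 15 minutes
```

At 5 miles per hour a train covers 440 feet per minute, so a 6,500-foot train needs just under 15 minutes to clear the scanner.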
Meanwhile, more routine secondary physical inspections may involve stopping the train, uncoupling cars, reversing, stopping, and going forward again in order to set aside a rail car for CBP. Depending on the rail infrastructure at the POE, this process may result in trains blocking highway-rail grade crossings. For example, in Blaine, a BLET representative noted that putting a rail car aside for CBP, which generally occurs near the location of the R-VACIS, can take over an hour while blocking highway-rail grade crossings. As previously mentioned, CBP’s primary mission is to maintain national security, and CBP officials report that they operate on risk-based assessments. However, CBP has taken steps to expedite customs inspections at some POEs. CBP officials note that at the POE level, CBP often works together with local communities to develop protocols to expedite rail and minimize the impact on vehicular traffic. In at least two POEs on the northern border, CBP has adjusted the R-VACIS procedures to expedite freight rail. In Blaine, CBP allows empty coal trains through at an increased speed predetermined by CBP during daylight hours unless information received indicates a security risk or there is an operational need, thereby reducing the estimated average blocked highway-rail grade crossing time. In Ranier, a CBP official noted that CBP held meetings to review operations and, as a result, increased the maximum allowable R-VACIS speeds to a predetermined rate of speed set by CBP. One CBP official stated that CBP will not sacrifice security for expediency. In addition, at one POE, the railroad coordinated with CBP to expedite secondary inspections. 
Specifically, in Ranier, railroad officials said that the railroad invested approximately $10 million in equipment, staff, and infrastructure to build a “live lift” system to allow the removal of only the container of interest from intermodal trains for immediate inspection, instead of uncoupling the entire car, which could hold several containers (see fig. 4). CBP officials and representatives from the railroad in Ranier stated that this investment reduced the overall secondary physical inspection process time and train delays, as well as the amount of time trains blocked a nearby highway-rail grade crossing. CBP officials in Laredo and DOT officials stated that trains going into Mexico are also subject to customs inspections, including R-VACIS scans, conducted by Mexican customs officials, which can result in slowed and stopped outbound trains and blocked highway-rail grade crossings in the United States. AAR representatives stated that Mexico is becoming more aware of the need to streamline processes and increase efficiency, particularly now that automobile manufacturing is expanding in Mexico, and U.S. railroads have been working with Mexican officials and other stakeholders to improve processes. For example, AAR representatives said that they meet regularly with customs agencies in the United States, Canada, and Mexico, and that they support a Trans-border Committee composed of member railroads from all three countries to promote simplification and the development of electronic reporting systems to expedite freight rail traffic. At the POE level, CBP officials do not have authority over train movements once trains have crossed the border into Mexico or Canada. Trains entering the United States from Mexico must stop at the border for FRA-required brake inspections, and FRA has waived certain requirements to expedite this process. 
FRA regulations require crews to perform full brake tests on trains at the origin location or at the interchange point, which is generally at the border as the trains enter the United States. An FRA region official stated that full brake tests were previously conducted with the whole train on the U.S. side, which could block highway-rail grade crossings for up to an hour. These brake tests include performing an air leakage test to ensure air brake pressure is maintained throughout the train, as well as a visual inspection of each car’s air brakes. Since the early 2000s, FRA has granted waivers to railroads to conduct abbreviated brake inspections at the border, provided the railroad submits a waiver request that meets certain criteria and is consistent with railroad safety. U.S. railroads on the southern border now have FRA brake inspection waivers at all but one POE, and FRA officials and railroad and BLET representatives said that such waivers to allow abbreviated brake tests have resulted in expedited train movements. The abbreviated brake tests allowed through the waiver can take 20 to 25 minutes, according to BLET representatives in Laredo. An abbreviated brake test requires a visual roll-by inspection and a set-and-release test of the air brakes in which the crew uses an end-of-train device to ensure air pressure is reaching the end of the train. As a condition of the waiver, crews are then required to conduct a full brake inspection at a U.S. rail yard away from the border. Despite FRA’s efforts to expedite brake inspections along the southern border, inbound trains sometimes arrive from Mexico with missing or damaged equipment, which can cause delays. According to BLET and railroad representatives in Laredo, trains from Mexico often arrive in the United States with missing “end-of-train devices” that are required for the abbreviated brake test, which can cause delays of up to an hour as train crews locate a replacement device. 
In addition, railroad and BLET representatives in Laredo noted that it is common for other train equipment to be tampered with, a situation that requires the train to be stopped until repairs can be completed. The Rail Safety Improvement Act of 2008 prohibits FRA from accepting mechanical and brake inspections of rail cars performed in Mexico before entering the United States unless, among other criteria, FRA certifies that the inspections are being performed under regulations and standards equivalent to those applicable in the United States. Moreover, according to DOT officials, FRA officials cannot verify brake inspections conducted in Mexico, in part because FRA officials face challenges coordinating with their counterparts due to security concerns. As a result, brake inspections occur on the border between the United States and Mexico, typically on a bridge. According to DOT officials, greater harmonization between the pertinent U.S. and Mexican regulations could result in the United States’ accepting brake inspections conducted in Mexico. DOT officials noted that although they would like to discuss rail regulatory and safety issues with Mexico and consider rail-related issues on occasion, no rail regulation harmonization efforts are currently under way, in part because Mexico is currently restructuring its rail regulatory body in an effort to increase its rail investments and networks. Furthermore, U.S.-Mexico coordination efforts, such as the U.S.-Mexico Joint Working Committee on Transportation Planning, have had limited initiatives focused specifically on freight rail issues, having instead focused on issues facing passenger vehicles and freight trucks. As we have previously mentioned, 60 percent of the freight that moves between the United States and Canada and Mexico is carried by truck. 
DOT officials told us that inbound and outbound trains on the southern border are required to stop at the border to change crews due to the lack of comparable rail safety regulations between the United States and Mexico. While a BLET representative stated that crew changes can take 3 to 5 minutes, this can vary greatly depending on crew availability. For example, BLET and railroad representatives in Laredo noted that crews, who deliver trains to the rail yard and then are driven by a rail crew van to the border to pick up another train, can get delayed at the yard or on the way back to the border by traffic congestion. Such delays, according to a BLET representative in Laredo, can result in crew changes exceeding 2 or 3 hours. FRA regulations establish minimum federal safety standards for the eligibility, training, testing, certification, and monitoring of all locomotive engineers and conductors. According to DOT officials, the lack of Mexican safety regulations for the qualification and certification of locomotive engineers and conductors that are comparable to FRA regulations prohibits the United States from allowing Mexican crews to operate trains in the United States. In addition, as previously mentioned, while greater regulatory harmonization could result in Mexican crews being able to operate in the United States, DOT officials noted that Mexico is currently focused on creating a rail transport regulatory agency. According to DOT, FRA will invite Mexico to attend the annual North American Rail Safety Working Group Meeting in 2016 in an effort to encourage further harmonization. Two railroads have expressed interest in developing an international pool of crews to eliminate the need for crew changes on the southern border; however, DOT and CBP officials and BLET representatives cited barriers to this initiative. 
Specifically, DOT officials stated that qualification and certification regulations, varying operating rules and hours of service for crews, and labor and union concerns would need to be addressed. Additionally, CBP officials in Laredo stated that they do not currently have the capability needed to process an international crew. BLET representatives also noted concerns about liability for damages and personal injury if U.S. crews were to operate in Mexico, since federal workplace laws do not apply to U.S. citizens injured on the job while working abroad. BLET representatives also noted concerns with the personal security of crew members while on board the train or when returning to the United States by vehicle after delivering the train to its destination in Mexico. These representatives also noted that exceeding the federal maximum allowable hours of service might become an issue given delays re-entering the United States at the vehicle border crossing. CBP and FRA have limited information on the effects of the above factors on rail movements. Although CBP has personnel located at the border, it does not have visibility into all factors affecting train movements. For example, trains are often operated at restricted speeds through POEs, meaning speeds are dictated by factors such as the train’s stopping distance and the train operator’s range of vision. According to BLET representatives in Ranier, speeds can be anywhere from 0.5 to 10 miles per hour through town due to the long stopping distances of heavy trains combined with limited visibility resulting from factors such as inclement weather or track curvature, regardless of factors such as CBP inspections. Meanwhile, FRA, which is primarily focused on the safety of trains operating within the United States, does not have staff located at POEs. 
Instead, FRA officials stated that they rely on voluntary reporting from railroads on any delays occurring and the reasons for these delays. FRA officials noted that it is difficult to obtain data from railroads on the cause and extent of train-related delays in POEs. CBP and FRA officials stated that they rely on communication with stakeholders to inform decisions such as modifying CBP procedures or brake test waiver requirements. As discussed later in this report, FRA has undertaken efforts to improve the availability of data on freight rail movements, including those at POEs. The factors noted above—customs inspections, brake inspections, and crew changes—can slow or stop trains traveling through U.S. POEs and consequently block highway-rail grade crossings in those communities, but different POEs are affected differently. As shown in figure 5, the effect of factors such as customs inspections can vary based on whether the community is located on the southern or northern border. For example, an outbound crew change can result in a train stopped in one or more highway-rail grade crossings on the southern border, but is less likely to occur on the northern border because of greater harmonization between U.S. and Canadian safety regulations, among other factors. In addition, although U.S. customs inspections can block U.S. highway-rail grade crossings for inbound trains on both borders, foreign customs inspections primarily affect outbound trains on the southern border. The extent to which the above factors may result in a train blocking a highway-rail grade crossing and delaying vehicular traffic also varies due to community characteristics, such as the number and location of highway-rail grade crossings and the availability of overpasses. For example, as noted below, in Ranier, railroad representatives estimated that one key highway-rail grade crossing is blocked for about 8 hours per day. 
In contrast, MPO officials in Buffalo and Detroit reported that international freight rail movements have minimal impact on traffic congestion in those cities because the rail lines are largely grade-separated, meaning the rail line rarely intersects with vehicular traffic. Furthermore, we have previously found that although communities may have long-standing concerns with the negative effects of highway-rail grade crossings, they have varying levels of quantified information on impacts such as traffic delay times or costs. Similarly, POE communities we visited provided some estimates of the amount of time highway-rail grade crossings are blocked, but were unable to provide data on the actual extent of blockage. For example, local officials in Blaine noted that hour-long traffic disruptions can result from blocked highway-rail grade crossings, with 30 minutes waiting for the train and another 30 minutes waiting for the vehicle traffic queue to clear. However, local officials reported they did not have information on how regularly such delays occurred due to a lack of data. The following discussion of the rail POE communities we visited illustrates how their characteristics affected highway-rail grade crossings. Ranier, Minnesota: Ranier is a community of 145 people, according to the 2010 Census, and is located about 3 miles northeast of the larger community of International Falls, Minnesota. Within Ranier, there is one highway-rail grade crossing—Spruce Street (see fig. 6). Spruce Street is blocked about 8 hours per day by the 20–22 trains traveling through per day—about 11 in each direction—according to representatives from the railroad. These representatives arrived at this total by estimating that a southbound train takes about 25 minutes to pass the highway-rail grade crossing, and a northbound train takes about 15 minutes, which amounts to over 7 hours a day for 11 trains to pass in each direction. 
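The representatives' daily total can be checked with simple arithmetic; the sketch below uses their reported per-train blockage times, which are estimates rather than measured values:

```python
# Railroad representatives' estimated blockage times for the Spruce Street
# crossing in Ranier, Minnesota (per-train figures are their estimates).
southbound_min = 25        # minutes a southbound train blocks the crossing
northbound_min = 15        # minutes a northbound train blocks the crossing
trains_each_way = 11       # about 11 trains per day in each direction

total_min = trains_each_way * (southbound_min + northbound_min)
print(total_min, "minutes, or about", round(total_min / 60, 1), "hours per day")
# prints: 440 minutes, or about 7.3 hours per day
```

This is consistent with the over-7-hours figure the representatives cited.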
These representatives report that the train traffic is distributed across nighttime and daytime hours because of the railroad’s aim to move traffic over its network evenly, which results in about one train traveling through Spruce Street per hour, including through the night. Speeds are slowed for inbound trains through Spruce Street due to CBP’s R-VACIS, although, as mentioned previously, CBP has taken efforts to expedite R-VACIS and the railroad and CBP have worked together to implement the live lift system to expedite secondary inspections. According to local officials, the blockage of Spruce Street has had a debilitating effect on businesses located north of Spruce Street. These officials report that due to the proximity of the Spruce Street intersection to Rainy Lake, it is impossible to build an overpass at that location. However, an overpass located approximately a mile away helps vehicle traffic reroute to get around the train. According to an FRA region official, the situation in Ranier does not constitute a serious effect on vehicle traffic, particularly compared with POE communities on the southern border and given the presence of the overpass. Blaine, Washington: Blaine, which is 35 miles south of Vancouver, Canada, is bordered on the north by the U.S./Canada border. The community—population 4,684 according to the 2010 Census—includes both Central Blaine to the east and West Blaine, where the Semiahmoo resort and marina are located. The rail line is located close to the waterfront through Central Blaine. Local officials report that two key highway-rail grade crossings are affected by freight rail movements—Hughes Avenue, a sole access point to a neighborhood of approximately 300 residents; and Bell Road, a key route connecting Central Blaine to West Blaine’s resort and marina (see fig. 7). According to railroad representatives, 12 freight trains pass per day—6 in each direction—through Blaine, at both day and nighttime hours. 
Local officials attribute issues related to blocked highway-rail grade crossings in Blaine to the R-VACIS; however, as mentioned previously, CBP has adjusted its procedures to enable certain trains to go through R-VACIS faster. Local officials were unable to provide data on the amount of time Hughes Avenue and Bell Road are blocked, and noted that it is difficult to fund traffic studies that take train traffic into account, in part because the railroad does not contribute funding. Within Blaine there are no overpasses to enable traffic to reroute around trains. Furthermore, local officials reported it is not feasible to construct overpasses over Hughes Avenue and Bell Road due to geographic limitations such as the location of homes and a creek. Laredo, Texas: The 2010 Census reported that Laredo is a city of approximately 236,000, and every day about 22 trains travel through Laredo—11 inbound and 11 outbound, according to CBP officials. Information provided by one of the railroads indicates that this traffic is fairly evenly split between daytime and nighttime hours. According to a 2006 study prepared for the MPO and the city, Laredo has over 80 highway-rail grade crossings, which are split fairly evenly between two rail lines that are operated by two different railroads and carry traffic in different directions through the city. A railroad representative noted that train traffic has recently been evenly split between these two rail lines. One of these rail lines bisects the downtown area, with 13 at-grade highway-rail crossings located at about every block (see fig. 8). According to an MPO official, the majority of complaints regarding blocked highway-rail grade crossings are along this downtown portion of the rail line. CBP officials in Laredo noted that a single stopped train can stretch from the border to near Interstate 35, a distance of approximately 2 miles, blocking all of the highway-rail grade crossings in between, including the 13 located downtown. 
These officials noted that a stopped train can disrupt downtown activity; for example, lawyers can be cut off from the federal courthouse located on the other side of the rail line from their offices. In 2012, the Laredo region developed a Border Master Plan, which convened local, regional, and federal officials on both the U.S. and Mexican sides of the border to prioritize border transportation projects. According to Texas state DOT officials, the Border Master Plan demonstrated the need for accurate data, including on current and future vehicular traffic levels, for analyzing costs and benefits and prioritizing projects. In addition, in 2015, a Laredo MPO-commissioned study gathered data on the number and speed of trains passing through the community from the Highway-Rail Crossing Inventory, as well as vehicular traffic counts. However, since this study was primarily focused on actions to reduce train horn noise, it did not calculate the total amount of time highway-rail grade crossings are blocked. Brownsville, Texas: A community of about 175,000 people according to the 2010 Census, Brownsville currently has about 4 to 8 trains pass through per day, according to a railroad representative. On August 25, 2015, the first new international rail crossing between the United States and Mexico in 105 years was inaugurated in Brownsville. The new rail bridge relocates rail traffic away from the downtown area to the outskirts of Brownsville, with only one highway-rail grade crossing, and eliminates 14 highway-rail grade crossings downtown. Although moving the rail line outside of town has been discussed in other southern rail POE communities such as El Paso and Laredo, only Brownsville has succeeded in moving the rail POE out of the downtown area. 
A Cameron County official noted that project planning began in the 1990s, that much of the data used to prioritize the project was taken from a detailed feasibility study, and that other communities should now have an easier time proposing similar projects given that states are more involved with freight rail planning. According to a county official, the U.S. portion of the project cost over $40 million, and most of the funding came from federal sources, including the American Recovery and Reinvestment Act of 2009. According to a railroad representative, the railroad agreed to transfer a portion of its existing right-of-way land to the county in exchange for the new right of way and infrastructure constructed by the county. Therefore, the railroad’s contribution to the project was the value of the land exchange rather than direct funding for the new construction. In addition, a county official noted that coordinating with officials from Mexico and with CBP posed key challenges. Specifically, this official noted that monitoring the progress of the project on the Mexican side and coordinating with CBP on its requirements for the new bridge, such as the relocation of R-VACIS, posed challenges. CBP officials in Brownsville noted that the project did not begin with good coordination, and cited the need for strong coordination as a “lesson learned.” CBP, FRA region, and Brownsville MPO officials noted that the long-term success of the new rail bridge will largely depend on development of the area. These officials stated that increased development may result in new highway-rail grade crossings, which could result in traffic issues over time. A railroad representative noted that rail traffic through Brownsville is expected to increase in the future. 
The effect that freight rail may have on communities also varies based on the time of day that trains pass through the rail POE communities, as well as efforts made by railroads to prevent trains from blocking certain highway-rail grade crossings. For example, as noted above, trains pass through Ranier, Minnesota, around the clock, at an average of one per hour according to railroad representatives. Therefore, about half of the trains run through at night, when vehicle traffic is lighter and congestion is not an issue. In addition, according to railroad representatives and MPO officials in El Paso, trains cross the border during nighttime and early morning hours due to a Juarez, Mexico, city ordinance that restricts train movements to those times. In some situations, railroads have worked to avoid blocking certain highway-rail grade crossings. For example, in Laredo, a railroad representative noted that crews make their best efforts to avoid blocking a trucking route and a street near a school during school hours. In addition, in Blaine, a CBP official reported that the railroad tries to limit the number of trains going through the community during the morning rush hour to avoid delaying school buses. We have previously found that a lack of publicly available data on freight rail movements and estimates of their impacts on vehicular traffic in communities across the United States creates difficulties in defining the extent of the problem and prioritizing potential solutions. Specifically, we found that limitations in both national and state and local data on freight rail movements reduce the ability of state or local officials to quantify freight rail community impacts nationwide and that these limitations create challenges to appropriately prioritizing efforts to address freight rail impacts against other types of funding priorities. 
At the national level, data on freight-related traffic congestion for local communities have limitations in terms of timeliness and completeness. At the local level, communities have limited data, such as the number and length of trains by date, speed, and time. As we have previously found, communities often find it difficult to communicate with the railroad industry to obtain information on the number, timing, and speed of trains. We requested data directly from the railroads in order to quantify the extent to which freight rail movements blocked highway-rail grade crossings in a selection of rail POE communities. Specifically, we requested data on the number, length, and speed of trains from railroads that operate in these POEs. This information would allow us to estimate train blockage time at highway-rail grade crossings in these communities. However, although we requested data directly from the five railroads that operate in eight selected rail POE communities, we received complete information from two of the railroads. Based on these data, we calculated the time selected highway-rail grade crossings are blocked and found highway-rail grade crossings in two communities—Ranier and one of the two rail lines in Laredo—to be blocked on average 16–19 minutes per train. Recent DOT efforts could help improve the availability of freight rail data needed to assess community impacts such as blocked highway-rail grade crossings for communities across the country, including POE communities. FRA maintains the National Highway-Rail Crossing Inventory, which includes information such as the estimated number of daily trains in communities and the typical range of speed of trains that pass through a highway-rail grade crossing. However, until recently, this information was voluntarily submitted by railroads and states and, according to FRA officials, was not always current. 
On January 6, 2015, FRA issued a final rule requiring railroads to update the inventory once every 3 years. FRA officials said that the rule should improve the quality of the data, but that these improvements will not be fully evident for several years. Improved information on the average number of daily trains could better equip state and local governments to identify community congestion impacts from freight rail—including blocked highway-rail grade crossings located in POE communities along the border. Furthermore, in a November 2015 letter to congressional committees regarding a surface transportation bill, DOT Secretary Anthony Foxx noted that given the concerns regarding blocked crossings in many communities, FRA would benefit from authorization and funding to study blocked crossings to collect information as to the severity, frequency, and other characteristics of railroad operations that block highway-rail grade crossings. Secretary Foxx also noted that neither the House nor the Senate version of the bill proposed such authorization and funding. On December 4, 2015, President Obama signed into law the Fixing America’s Surface Transportation Act, which did not contain such provisions regarding blocked crossings. In addition, in September 2014, we issued a report on freight-related community impacts and recommended, among other things, that DOT incorporate additional information to help states define and prioritize local community impacts of national freight movements, including traffic-congestion impacts, and establish what data could be consistently collected and analyzed in order to prioritize impacts of freight on local traffic congestion in its final guidance to states in the development of their state freight plans. 
We also recommended that DOT include a strategy for improving the availability of national data needed to quantify, assess, and establish measures of freight trends and impacts on local traffic congestion for inclusion in its National Freight Strategic Plan. DOT agreed with our recommendations. On October 18, 2015, DOT issued a draft National Freight Strategic Plan for public comment. The draft noted that DOT should work closely with state and local governments and international partners, as well as private stakeholders, to coordinate strategies and investments, and noted that new freight traffic data sources and improved public-private cooperation on state freight plans will assist in this effort. The draft also noted that DOT should continue to engage in strong border infrastructure planning with border states through working groups with Canada and Mexico. We will continue to monitor the status of DOT’s response to our recommendations and DOT’s efforts related to the National Freight Strategic Plan. A DOT strategy on data to prioritize the impacts of freight-related traffic congestion in the National Freight Strategic Plan, along with improvements to the National Highway-Rail Crossing Inventory, could help address data limitations at both the national and local levels and help communities—including POE communities—better define impacts from blocked highway-rail grade crossings and prioritize projects to mitigate such impacts. We provided a draft of this report to DOT and CBP for review and comment. In a response (reproduced in app. II), DOT highlighted efforts to minimize community impacts of international freight rail movement. DOT and CBP provided technical comments, which we incorporated. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Transportation, the Secretary of the Department of Homeland Security, and other interested parties. 
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Susan Fleming at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix III. This report (1) describes factors that affect the movement of freight rail through selected ports of entry and the actions taken by federal agencies and others to expedite freight rail in these locations, and (2) examines what is known about the impacts of freight rail operations on highway-rail grade crossings in U.S. port of entry communities. To determine the factors that affect the movement of freight rail through selected ports of entry and the actions taken to expedite freight rail in these locations, we interviewed officials and reviewed documents from Customs and Border Protection (CBP), the U.S. Department of Transportation (DOT), the Federal Railroad Administration (FRA), and the Department of State. We also interviewed representatives from the American Association of State Highway and Transportation Officials, the Border Trade Alliance, the Association of American Railroads, and the Brotherhood of Locomotive Engineers and Trainmen (BLET)—a union that represents train operators—which we identified from prior GAO work. We interviewed FRA officials and reviewed FRA documentation regarding crew changes and brake inspections, including applicable regulations and FRA waiver decisions regarding brake inspections. 
We also interviewed DOT, FRA, and CBP officials and reviewed documentation on international working groups involving transportation issues on both the northern border (i.e., the U.S.-Canada Regulatory Cooperation Council and the Transportation Border Working Group) and the southern border (i.e., the U.S.-Mexico High Level Economic Dialogue and the U.S.-Mexico Joint Working Committee on Transportation Planning). To determine what is known about the impacts of freight rail operations on highway-rail grade crossings in U.S. POE communities, we also reviewed previous GAO reports and recommendations, interviewed DOT officials on available data sources, and reviewed relevant documentation, such as the reporting requirements for the National Highway-Rail Crossing Inventory. To determine the factors that affect the movement of freight rail and the impacts of freight rail operations on highway-rail grade crossings, we selected nine rail POE communities—Nogales, Arizona; El Paso, Eagle Pass, Brownsville, and Laredo, Texas; Blaine, Washington; Ranier, Minnesota; Port Huron, Michigan; and Rouses Point, New York. These communities were selected because they had at least one inbound train on average per day from 2010 through 2014, according to DOT’s Bureau of Transportation Statistics’ (BTS) Border Crossing data. As part of this selection, we excluded 11 communities where the rail POEs were in transit (where trains pass through but are not subject to full CBP procedures), were outside the continental United States, did not cross incorporated communities, or had largely grade-separated infrastructure. We conducted visits to four of these selected communities—Brownsville and Laredo, Texas; Ranier, Minnesota; and Blaine, Washington—that were selected based on factors such as heavy inbound train volume from 2010 through 2014 according to BTS data, complaints received by CBP about blocked crossings, and a mix of northern and southern border locations. 
We also selected locations where actions had been taken to mitigate congestion or expedite rail, such as Brownsville, Texas, for its construction of a new international rail bridge. At each of the four site visits, we interviewed representatives from the city or county, the Metropolitan Planning Organization (if applicable), the state department of transportation, the FRA regional office, and BLET. We also interviewed representatives from the five railroads that operate trains through each selected POE. During each site visit, we also interviewed officials from CBP and observed their inspection process as well as the geography and relevant highway-rail crossings of the community. We calculated the average time that freight trains would block key highway-rail grade crossings in selected communities based on the average speed, length, and frequency of trains reported by railroad representatives. To do so, we developed a data collection instrument and attempted to collect information from five railroads on the number, length, and speed of trains passing over the three highway-rail grade crossings closest to the international border on a typical weekday in July 2015 in eight of the selected communities. As we note in the report, although we requested information from five railroads, we received incomplete information in response and were able to analyze information from two of these railroads. In order to better understand the impacts of international rail in these communities, we spoke to local officials from the city or MPO by phone in each of the five selected communities that we did not visit (Nogales, Arizona; El Paso and Eagle Pass, Texas; Port Huron, Michigan; and Rouses Point, New York). We also interviewed officials from the MPOs in Detroit, Michigan, and Buffalo, New York, to understand the impacts of international freight rail in these communities. 
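The blockage-time calculation described above can be sketched as follows: assuming a train moves past a crossing at roughly constant speed, blockage time is approximately train length divided by speed. The function name, the optional gate-buffer term, and the illustrative train length and speed are our assumptions for illustration, not values reported by the railroads:

```python
def crossing_blockage_minutes(train_length_ft, speed_mph, gate_buffer_min=0.0):
    """Estimate minutes a train blocks a grade crossing, assuming constant
    speed past the crossing; gate_buffer_min optionally adds time for gate
    activation before and after the train passes."""
    feet_per_minute = speed_mph * 5280 / 60  # convert mph to feet per minute
    return train_length_ft / feet_per_minute + gate_buffer_min

# Illustrative (hypothetical) inputs: a 9,000-foot train crawling at 6 mph
print(round(crossing_blockage_minutes(9000, 6), 1))  # prints 17.0
```

With inputs in this range, the estimate falls within the 16 to 19 minutes per train we report for the two communities; the actual analysis used the averages reported by the two responding railroads.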
We developed maps to provide context regarding the level of international freight rail traffic and impacts on communities. Specifically, we used BTS data to calculate the average number of inbound trains per day from 2010 through 2014 by POE and displayed this information on a map. To determine the reliability of BTS data, we reviewed related documentation and interviewed knowledgeable agency officials. We determined these data were sufficiently reliable for our purpose of providing contextual information. We also developed maps including the location of at-grade and grade-separated highway-rail crossings for three of the four communities we visited—Ranier, Minnesota; Laredo, Texas; and Blaine, Washington. We did not include a map of Brownsville, Texas, since its rail traffic patterns are currently changing due to the construction of a new international rail bridge. To develop these maps, we used data from the National Highway-Rail Crossing Inventory, as well as maps and observations obtained from our in-person visits to these communities. By reviewing related documentation, interviewing knowledgeable DOT officials, and comparing the data to our site visits, we determined the data were sufficiently reliable for the purpose of developing maps. In addition to the individual named above, Sharon Silas (Assistant Director), Mark Braza, Delwen Jones, Rick Jorgenson, Emily Larson, John Mingus, Ian P. Moloney, Cheryl Peterson, Nada Raoof, and Malika Rice made key contributions to this report.
About 93 trains a day on average crossed into the continental United States from Canada and Mexico in 2014, according to DOT's Bureau of Transportation Statistics (BTS). Trains enter and leave the United States through 30 POEs—23 on the northern border and 7 on the southern border. Although international freight rail plays an important role in U.S. economic and trade interests, the movement of rail through U.S. communities at the border can result in blocked highway-rail grade crossings and vehicle traffic congestion. House Report 113-464 accompanying the Departments of Transportation, and Housing and Urban Development Appropriations Act included a provision for GAO to review the impact of international rail crossings on U.S. border communities. This report (1) describes the factors that affect the movement of freight rail and the actions taken by federal agencies and others to expedite freight rail in selected POEs and (2) examines what is known about the impacts of freight rail operations on highway-rail grade crossings in POE communities. GAO visited four POE communities that were selected in part based on BTS's 2010–2014 data on average incoming train volume. In each POE, GAO interviewed officials from local and state governments, the railroad, CBP, and FRA. GAO also interviewed officials from DOT, CBP, the Border Trade Alliance, and the Association of American Railroads. Factors such as inspections and crew changes affect freight rail movements in the four U.S. border port of entry (POE) communities GAO visited, which can result in blocked highway-rail grade crossings. Federal agencies and others have taken actions to expedite rail in these communities. As part of its mission to safeguard the border, U.S. Customs and Border Protection (CBP) scans inbound rail cars on both borders using the Rail Vehicle and Cargo Inspection System (R-VACIS), a machine used to detect anomalies and threats to national security. 
CBP generally requires trains to slow in order to pass through R-VACIS. To expedite freight rail and reduce blocked highway-rail grade crossings, CBP, for example, adjusted its procedures to allow certain trains to go through R-VACIS faster at two POEs on the northern border. Similarly, crew changes can result in stopped trains and blocked U.S. highway-rail grade crossings, particularly on the southern border. U.S. Department of Transportation (DOT) officials stated that crew changes are required due to differences in safety regulations between the U.S. Federal Railroad Administration (FRA) and Mexico. Railroads have expressed interest in eliminating such crew changes but face challenges such as FRA and labor union safety concerns. The impacts of international freight rail on highway-rail grade crossings in communities GAO visited vary based on border-specific factors and community characteristics, and DOT improvement efforts, including the issuance of a final rule, could provide better data to help determine these impacts in the future. Inspections and crew changes, as well as rail traffic levels, can vary across POEs. For example, some factors play a role at southern, but not northern, POEs. In addition, freight rail impacts vary based on community characteristics such as the availability of overpasses. State and local officials face data limitations, which reduce their ability to quantify rail-related community impacts. For example, local officials often do not have data on the number and length of trains passing through the community. In September 2014, GAO recommended that DOT improve the availability of national data to assess freight impacts on traffic congestion. DOT agreed and has actions under way. In January 2015, FRA issued a final rule requiring railroads to update FRA's highway-rail crossing inventory once every 3 years. Prior to this rule, railroads voluntarily submitted data that were not always updated. 
DOT data efforts could better equip state and local governments to define the extent of blocked highway-rail grade crossings in communities nationwide, including at rail border communities. GAO is not making recommendations in this report. DOT and CBP provided technical comments, which were incorporated.
FPS was created in 1971 and located within GSA until, under the Homeland Security Act of 2002, it was transferred to DHS and placed within U.S. Immigration and Customs Enforcement (ICE), effective March 1, 2003. Under the act, FPS is authorized to protect the buildings, grounds, and property that are under the control and custody of GSA and the persons on the property. FPS is authorized to enforce federal laws and regulations aimed at protecting GSA buildings and persons on the property and to investigate offenses against these buildings and persons. DHS and GSA developed an MOA to set forth roles, responsibilities, and operational relationships between FPS and GSA concerning the security of GSA buildings. In accordance with the MOA, FPS inspectors are responsible for performing a range of law enforcement and security duties at GSA buildings, including patrolling the building perimeter, responding to incidents and demonstrations, completing risk assessments for buildings and space that GSA is considering leasing, participating in meetings with GSA property managers and tenant agencies, and overseeing contract guard operations. The level of physical protection services FPS provides at each of the 9,000 GSA buildings varies depending on the building’s security level. To determine a building’s security level, FPS uses the Department of Justice (DOJ) vulnerability assessment guidelines, which categorize federal buildings into security levels I through V based on factors such as building size and number of employees. The DOJ standards recommend minimum security measures for each of the five levels, and FPS uses these standards and other information to recommend countermeasures. The DOJ standards also require FPS to complete building security assessments (BSAs) every 2 to 4 years, depending on the security level of the building. For example, a BSA is to be completed every 2 years for a level IV building and every 4 years for a level I building. 
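As an illustration only, the assessment schedule just described can be expressed as a small lookup from security level to reassessment interval. The report states intervals only for level IV buildings (every 2 years) and level I buildings (every 4 years); the function and example date below are a hypothetical sketch under that assumption and do not cover the other levels.

```python
from datetime import date

# Intervals stated in the report; levels II, III, and V are not specified here.
BSA_INTERVAL_YEARS = {"I": 4, "IV": 2}

def next_bsa_due(last_bsa: date, security_level: str) -> date:
    """Return the date the next BSA is due for a building of the given level."""
    years = BSA_INTERVAL_YEARS[security_level]
    return last_bsa.replace(year=last_bsa.year + years)

# A level IV building last assessed March 15, 2008, would be due March 15, 2010
due = next_bsa_due(date(2008, 3, 15), "IV")
```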
As part of each assessment, the inspector is required to conduct an on-site physical security analysis using FPS’s risk assessment tool, known as Federal Security Risk Manager, and interview tenant agency security representatives, GSA realty specialists, site security supervisors, and building managers. After completing their assessments, inspectors make recommendations to GSA and tenant agencies for building security countermeasures, including security equipment and security fixtures. Tenant agencies decide whether to fund countermeasures recommended for security equipment and FPS is responsible for acquiring, installing, and maintaining approved equipment. GSA and tenant agencies determine whether to fund recommended security fixtures and GSA is responsible for acquiring, installing, and maintaining approved fixtures. In some cases, and in accordance with its policies, FPS has delegated the protection of buildings to tenant agencies, which may have their own law enforcement authority or may contract separately for guard services. FPS is a fully reimbursable agency—that is, its services are fully funded by security fees collected from tenant agencies. FPS charges each tenant agency a basic security fee per square foot of space occupied in a GSA building. In fiscal year 2009, the basic security fee is 66 cents per square foot and covers services such as patrolling the building perimeter, monitoring building perimeter alarms, dispatching law enforcement officers through its control centers, conducting criminal investigations, and performing BSAs. FPS also collects an administrative fee that it charges tenant agencies for building-specific security services, such as controlling access to building entrances and exits and checking employees and visitors. In fiscal year 2009, the fee rate for building-specific expenses is 8 percent. 
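Based on the fiscal year 2009 rates above, a tenant agency's security charges can be sketched as follows. The square footage and service cost in the example are hypothetical values chosen for illustration; only the rates (66 cents per square foot and 8 percent) come from the text.

```python
def basic_security_fee(square_feet: float, cents_per_sqft: float = 66) -> float:
    """Basic security fee in dollars (FY2009 rate: 66 cents per square foot)."""
    return square_feet * cents_per_sqft / 100

def building_specific_charge(service_cost: float, admin_rate: float = 0.08) -> float:
    """Building-specific service cost plus the 8 percent administrative fee."""
    return service_cost * (1 + admin_rate)

# Hypothetical tenant: 50,000 sq ft occupied, $100,000 in building-specific services
basic = basic_security_fee(50_000)            # $33,000
specific = building_specific_charge(100_000)  # $108,000 (cost plus 8 percent)
```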
In addition to these security services, FPS provides tenant agencies with additional services upon request, which are funded through reimbursable security work authorizations (SWA) for which FPS charges an administrative fee. For example, tenant agencies fund FPS’s security equipment countermeasure recommendations that they approve through SWAs. In fiscal year 2009, the SWA fee rate is 8 percent. Since transferring to DHS, FPS’s mission has expanded beyond solely protecting GSA buildings to include homeland security activities, such as implementing homeland security directives and providing law enforcement, security, and emergency response services during natural disasters and special events. For example, FPS serves as the sector-specific agency for the Government Facilities critical infrastructure sector under Homeland Security Presidential Directive 7 (HSPD-7). Additionally, DHS has authority under the Homeland Security Act to engage FPS in activities DHS deems necessary to enhance homeland security. For example, FPS can be called upon to assist the Federal Emergency Management Agency in responding to natural disasters and provide backup to other DHS law enforcement units during special events, such as political demonstrations. According to FPS, it is reimbursed for these supportive services. We have previously identified challenges that raised concerns about FPS’s protection of GSA buildings and tenants. In 2004, we reported on the challenges FPS faced in transitioning from GSA to DHS, including issues related to expanding responsibilities and funding. In June 2008, we reported on a range of operational and funding challenges facing FPS. We found that the operational challenges we identified hampered FPS’s ability to accomplish its mission of protecting GSA buildings and that the actions it took may not have fully resolved the challenges. For example, the number of FPS staff decreased by about 20 percent between fiscal year 2004 and fiscal year 2007. 
We found that FPS managed these decreases in staffing resources in a way that diminished security and increased the risk of crime and terrorist attacks at many GSA buildings. We further reported that the actions FPS took to address its funding challenges had some adverse implications. For example, during fiscal years 2005 and 2006, FPS’s projected expenses exceeded its collections, and DHS had to transfer funds to make up the difference. We also found that although FPS had developed output measures, it lacked outcome measures to assess the effectiveness of its efforts to protect GSA buildings. Moreover, FPS lacked a reliable data management system for accurately tracking performance measures. With respect to GSA’s retained authorities, the law provides:

“Nothing in this chapter may be construed to affect the functions or authorities of the Administrator of General Services with respect to the operation, maintenance, and protection of buildings and grounds owned or occupied by the Federal Government and under the jurisdiction, custody, or control of the Administrator. Except for the law enforcement and related security functions transferred under section 203(3) of this title, the Administrator shall retain all powers, functions, and authorities vested in the Administrator under chapter 1, except section 121(e)(2)(A), and chapters 5 to 11 of Title 40, and other provisions of law that are necessary for the operation, maintenance, and protection of such buildings and grounds.”

In response to a 2005 GAO recommendation and to enhance coordination with FPS, GSA established the Building Security and Policy Division within the Public Buildings Service (PBS)—where FPS once resided—in 2006. This division has three primary branches:

Building Security Policy—develops GSA security policies.

Building Security Operations—interfaces with FPS and monitors the services FPS provides to GSA and tenant agencies.

Physical Security—provides physical security expertise, training, and guidance to GSA leadership, regional staff, and tenant agencies. 
During 2006, the division developed the Regional Security Network, which consists of several staff per GSA region to further enhance coordination with FPS at the regional and building levels, and to carry out GSA security policy in collaboration with FPS and tenant agencies. In 1995, Executive Order 12977 established the ISC to enhance the quality and effectiveness of security in, and protection of, nonmilitary buildings occupied by federal employees in the United States. ISC has representation from all federal cabinet-level departments and other agencies and key offices, including GSA and FPS. Furthermore, ISC was established as a permanent body to address continuing government security issues for federal buildings. Under the order, ISC became responsible for developing policies and standards, ensuring compliance and overseeing implementation, and sharing and maintaining information. Executive Order 13286 transferred the ISC Chair from GSA to DHS. In 2004, we assessed ISC’s progress in fulfilling its responsibilities. We have identified a set of key facility protection practices from the collective practices of federal agencies and the private sector to provide a framework for guiding agencies’ protection efforts and addressing challenges. We focused on the following three key practices for this report: (1) allocating resources using risk management; (2) leveraging technology; and (3) information sharing and coordination. We have used the key practices to evaluate the efforts of the Smithsonian Institution to protect its assets, of DHS to protect its facilities, and of federal entities to protect icons and facilities on the National Mall. Moreover, ISC is using our key facility protection practices as key management practices to guide its priorities and work activities. For example, ISC established subcommittees for technology best practices and training, and working groups in the areas of performance measures and strategic human capital management. 
ISC also issued performance measurement guidance in 2009. FPS is limited in its ability to influence the allocation of resources using risk management because security funding decisions are the responsibility of GSA and tenant agencies. Moreover, FPS uses an outdated risk assessment tool, a subjective approach, and a time-consuming process to conduct BSAs. GSA and tenant agencies have concerns about the quality and timeliness of FPS’s risk assessment services and in some cases, are assuming these responsibilities. Although FPS is taking steps to implement a new risk management program, it is unclear when all program components—such as risk assessment tools—will be fully implemented as FPS has extended initial implementation from fiscal year 2009 into fiscal year 2010. FPS’s new risk management program could help GSA and tenant agencies refine their resource allocation decisions if risk assessments are enhanced and FPS can help GSA and tenant agencies prioritize risks among all buildings. Until the risk management program is implemented, FPS will continue to use its current approach, which may leave some buildings and tenants vulnerable to terrorist attacks and crime. FPS’s ability to influence the allocation of resources based on the results of its risk assessments is constrained because GSA and tenant agencies must agree to fund recommended countermeasures, and we found that tenant agencies were sometimes unwilling to fund recommended security equipment. We have reported that a risk management approach to building protection generally involves identifying potential threats, assessing vulnerabilities, and evaluating mitigation alternatives for their likely effect on risk and their cost. Incorporating information on these elements, a strategy for allocating security-related resources is developed, implemented, and reevaluated over time as conditions change. 
Through the risk assessment process, FPS inspectors make recommendations for security fixtures and equipment, which they include in BSA executive summaries that FPS is required to share with GSA and tenant agencies. GSA and tenant agencies determine whether to fund recommended security fixtures, and GSA is responsible for acquiring, installing, and maintaining approved fixtures. Tenant agencies determine whether to fund recommended security equipment, and FPS is responsible for acquiring, installing, and maintaining security equipment. However, tenant agencies may be unwilling to approve FPS’s security equipment countermeasure recommendations, in which case FPS views them as choosing to accept the risk. According to officials we spoke with from FPS, GSA, and tenant agencies, tenant agencies may not approve FPS’s security equipment countermeasure recommendations for several reasons:

Tenant agencies may not have the security expertise needed to make risk-based decisions.

Tenant agencies may find the associated costs prohibitive.

The timing of the BSA process may be inconsistent with tenant agencies’ budget cycles.

Consensus may be difficult to build among multiple tenant agencies.

Tenant agencies may lack a complete understanding of why recommended countermeasures are necessary because they do not receive BSAs in their entirety.

For example, in August 2007, FPS recommended a security equipment countermeasure—the upgrade of a surveillance system shared by two locations—that, according to FPS officials, would cost around $650,000. While members of one building security committee (BSC) told us they approved spending between $350,000 and $375,000 to fund their agencies’ share of the countermeasure, they said that the BSC of the other location would not approve funding; therefore, FPS could not upgrade the system it had recommended. 
In November 2008, FPS officials told us that they were moving ahead with the project by drawing on unexpended revenues from the two locations’ building-specific fees and the funding that was approved by one of the BSCs. In May 2009, FPS officials told us that all cameras had been repaired, all monitoring and recording devices had been replaced, and the two BSCs had approved additional upgrades that FPS was implementing. As we reported in June 2008, we have found other instances in which recommended security countermeasures were not implemented at some of the buildings we visited because BSC members could not agree on which countermeasures to implement or were unable to obtain funding from their agencies. Complicating the issue of FPS’s limitations in influencing risk-based resource allocation decisions, FPS inspectors use an outdated risk assessment tool, known as Federal Security Risk Manager, to produce BSAs, which are also vulnerable to inspector error and subjectivity and can take a considerable amount of time to complete. GSA originally developed the risk assessment tool in the late 1990s when FPS was a part of GSA and updated it in 2002, and the tool moved with FPS when it was transferred to DHS in 2003. FPS has identified the following problems with the risk assessment tool and overall approach to developing BSAs:

The risk assessment tool contributes to BSA subjectivity because it lacks a rigorous risk assessment methodology. For example, the tool does not incorporate ISC standards or the National Infrastructure Protection Plan (NIPP) framework; therefore, inspectors must apply ISC standards during their reviews of BSAs produced from the risk assessment tool and modify these reports in accordance with the standards.

Inspectors’ compliance with BSA policies and procedures is inconsistent, and inspectors must search for risk information from different sources and perform duplicative data entry tasks, making it difficult for inspectors to focus fully on the needs of GSA and tenant agencies.

Inspectors record risk assessment findings on paper-based forms and then transfer data to the risk assessment tool and other systems manually, potentially introducing errors during the transfer.

We concur with FPS’s findings and also believe the discretion given to inspectors in FPS’s risk assessment approach provides less assurance that vulnerabilities are being consistently identified and mitigated. Without consistent application of risk assessment procedures, FPS cannot assure GSA and tenant agencies that expenditures to implement its recommendations are necessary. Furthermore, FPS’s reliance on an outdated risk assessment tool provides less assurance that risks and mitigation strategies are adequately identified. We have previously reported on other concerns about FPS’s risk assessment tool. For example, the tool does not allow FPS to compare risks from building to building so that FPS, GSA, and tenant agencies can prioritize security improvements among the nearly 9,000 buildings within GSA’s inventory. The ability to compare risks among all buildings is important because it could allow FPS, GSA, and tenant agencies to comprehensively identify and prioritize risks and countermeasure recommendations at a national level and direct resources toward alleviating them. We also reported that the risk assessment tool does not allow FPS to further refine security improvement priorities based on more precise risk categories—rather than the high, medium, or low categories FPS inspectors use under the current system. Furthermore, we reported that the risk assessment tool does not allow FPS to track the implementation status of security recommendations based on assessments. 
Without this ability, FPS has difficulty determining the extent to which identified vulnerabilities at GSA buildings have been mitigated. Considering the steps involved, it can also take several months for FPS to complete a BSA. These steps include conducting an on-site physical security survey; interviewing representatives from GSA and tenant agencies (an inspector may need to visit a site multiple times to meet with all pertinent officials); entering survey and interview results into the risk assessment tool and other systems, such as the Security Tracking System; producing a BSA document that undergoes several layers of review; and briefing representatives of tenant agencies and GSA on the BSA results and distributing the executive summary to them. An FPS supervisory officer told us that it took an average of 3 months to complete a BSA. The officer explained that it may take 2 to 5 weeks for an inspector to complete a security survey, interviews, and a BSA document. The officer gave an example that for one of the buildings within the region, an inspector must interview representatives from 30 tenant agencies. In another example, we found that an FPS inspector had completed a BSA report for one location in April 2008, but at the time of our visit in August 2008 the document was still undergoing supervisory review, and tenant agency representatives and GSA had not yet been briefed on the results or received a copy of the executive summary. Furthermore, inspectors are responsible for conducting BSAs for multiple buildings. The inspectors we interviewed were each responsible for conducting BSAs and overseeing security operations at between 1 and 20 buildings. GSA security officials at the national and regional levels that we met with were concerned about the quality and timeliness of the risk assessment services that FPS provides. Officials explained that GSA created the current risk assessment tool hastily following the 1995 bombing of the Alfred P. 
Murrah Federal Building and that FPS inherited a flawed tool when it moved to DHS. GSA security officials expressed concerns over the quality of FPS’s BSAs. For example, GSA regional security officials told us that an FPS inspector recommended that GSA remove a structure from a building because the inspector thought it blocked the view of the security guards. However, according to these officials, FPS had not cited this blocked view as a vulnerability or recommended the structure’s removal in previous BSAs, and to their knowledge, there had been no significant changes in identified threats, the space, or the building’s tenant composition. GSA security officials also told us that they have had difficulties receiving timely risk assessments from FPS for space that GSA is considering leasing. These risk assessments must be completed before GSA can take possession of the property and lease it to tenant agencies. An inefficient risk assessment process for new lease projects can add costs for GSA and create problems for both GSA and tenant agencies that have been planning for a move. ICE officials told us that there are many occasions where FPS is not notified by GSA of the need for a new lease assessment and in some cases, tenants have moved into leased space without FPS’s knowledge. GSA is updating a tool—the Risk Assessment Methodology Property Analysis and Ranking Tool (RAMPART)—that it began developing in 1998, but has not recently used, to better ensure the timeliness and comprehensiveness of these risk assessments. GSA expects to test and implement the system during fiscal year 2009. GSA security officials told us that they may use RAMPART for other physical security activities, such as conducting other types of risk assessments and determining security countermeasures for new facilities. 
The tenant agency officials we spoke with at the five sites did not raise concerns about FPS’s risk assessment process, but all of them told us that at the national level, their agencies were taking steps to pursue their own risk assessments for the exterior of their buildings, even though they pay FPS for this service. GSA security officials said they have seen an increase in the number of tenant agencies conducting their own risk assessments. They told us that they are aware of at least nine tenant agencies that are taking steps to acquire risk assessments for the exterior of their buildings. Additionally, we have previously reported that some tenant agencies had told us that they were using or planned to find contractors to complete additional risk assessments because of concerns about the quality and timeliness of FPS’s BSAs. We also reported that several DHS components and other tenant agencies were taking steps to acquire their own risk assessments because FPS’s assessments were not always timely or adequate. Similarly, we also found that many facilities had received waivers from FPS to enable the agencies to complete their own risk assessments. While tenant agencies have typically taken responsibility for assessing risk and securing the interior of their buildings, assessing exterior risks will require additional expertise and resources. This is an inefficient approach considering that tenant agencies are paying FPS to assess building security. However, ICE officials stated that in many cases, the agencies that are pursuing risk assessments are doing so to include both GSA and non-GSA buildings that they occupy, and that in other instances, agencies must adhere to other physical security standards and thus conduct their own assessments. FPS recognizes the inadequacies of its risk assessment tool, methodology, and process and is taking steps to develop a new risk management program. 
Specifically, FPS is developing the Risk Assessment and Management Program (RAMP) to improve the effectiveness of FPS’s risk management approach and the quality of BSAs. According to FPS, RAMP will provide inspectors with the information needed to make more informed and defensible recommendations for security countermeasures. FPS also anticipates that RAMP will allow inspectors to obtain information from one electronic source, generate reports automatically, enable FPS to track selected countermeasures throughout their life cycle, address some concerns about the subjectivity inherent in BSAs, and reduce the amount of time inspectors and managers spend on administrative work. Additionally, FPS is designing RAMP so that it will produce risk assessments that are compliant with ISC standards, compatible with the risk management framework set forth by the NIPP, and consistent with the business processes outlined in the MOA with GSA. FPS expects that the first phase of RAMP will include BSA and countermeasure management tools, among other functions. According to FPS, RAMP will support all components of the BSA process, including gathering and reviewing building information; conducting and recording interviews; assessing threats, vulnerabilities, and consequences to develop a detailed risk profile; recommending appropriate countermeasures; and producing BSA reports. According to FPS, RAMP’s countermeasure lifecycle management activities will include countermeasure design, review, recommendation, approval, implementation, acceptance, operation, testing, and replacement. FPS began designing RAMP in early 2007 and expects to implement the program in three phases, completing its implementation by the end of fiscal year 2011. However, it is unclear whether FPS will meet the implementation goals established in the program’s proposed timeline. 
In June 2008, we reported that FPS was going to implement a pilot version of RAMP in fiscal year 2009, but in May 2009, FPS officials told us they intend to implement the first phase in the beginning of fiscal year 2010. FPS officials also told us that RAMP training for inspectors will begin in October 2009 and conclude in December 2009. Until RAMP components are fully implemented, FPS will continue to rely on its current risk assessment tool, methodology, and process, potentially leaving GSA and tenant agencies dissatisfied. GSA security officials are aware that RAMP’s development and implementation have run behind schedule and are concerned about when improvements to FPS’s risk assessment processes will be made. Under the 2006 MOA, FPS and GSA recognized that revisions and enhancements would need to be made to the risk assessment process, and FPS agreed it would work in consultation with GSA on any modifications to risk assessment tools. FPS shared RAMP plans with GSA in 2007 and solicited feedback, yet GSA security officials told us they think collaboration could have been stronger and have concerns about RAMP’s ability to meet their physical security needs. For example, as stated earlier, GSA relies on FPS to provide it with risk assessments for buildings that it wants to lease, but because FPS does not provide these assessments in a timely manner GSA is taking steps to implement its own risk assessment tool by the end of fiscal year 2009. According to FPS, RAMP will include a risk assessment tool for new lease projects, but it did not include this component in the first development phase and instead, this tool is scheduled for rollout at the end of fiscal year 2010. FPS officials told us that as they move forward with RAMP, they intend to ask GSA and tenant agencies what risk assessment information they need from BSA reports. 
Also, FPS officials told us they are reassessing building security levels using ISC’s updated facility security level standards and a specialized calculator tool. The updated ISC standards take factors other than a building’s size and population into account, including mission criticality, symbolism, threats to tenant agencies, and other factors such as proximity to a major transportation hub. FPS is trying to meet ISC’s target date of September 30, 2009, for finalizing updated building security levels for nearly 9,000 GSA buildings. According to FPS, inspectors began reassessing building security levels during June 2008 and as of May 2009, FPS officials told us that inspectors had determined preliminary security levels for all buildings, and finalized security levels for 3,100 buildings. FPS officials told us inspectors have been following ISC guidance in reassessing the facility security levels which require that the tenant agencies make the final security level determination. However, GSA security officials at the national office told us they were receiving feedback from GSA security officials in the regions that some FPS inspectors were presenting the updated security levels as mandatory and final, not as preliminary results to be discussed. Risk management practices provide the foundation of a comprehensive protection program. Hence, efforts in the other key practice areas— leveraging technology and information sharing and coordination—are diminished if they are not part of a risk management approach which can be the vehicle for using these tools. It is critical that FPS—which is responsible for assessing risk for nearly 9,000 GSA buildings and properties that GSA may lease—replace its outdated, subjective, and time- consuming risk assessment tool and approach with the new program it has been developing since fiscal year 2007, especially as the results of its risk analyses lay the foundation for FPS, GSA, and tenant agencies’ security efforts. 
DHS is the nation’s designated leader of critical infrastructure protection efforts; therefore, it is critical that RAMP be developed in an expeditious manner so that DHS can fulfill this mission with regard to federal facilities that FPS protects. Furthermore, department-level attention, in the form of regular updates to the Secretary, is warranted to help ensure that FPS achieves success. This added oversight would enhance the department’s ability to monitor RAMP development and make FPS accountable for results, given the delays that RAMP has already experienced. FPS’s approach to leveraging technology does not ensure that the most cost-effective technologies are being selected to protect GSA buildings. Individual inspectors make technology decisions with limited training and guidance, giving GSA and tenant agencies little assurance that vulnerabilities have been systematically mitigated within and among all buildings as cost-effectively as possible. Although FPS is developing a program for technology acquisition, its implementation has been delayed, and it does not include an evaluative component to ensure cost-effectiveness. As previously discussed, FPS inspectors recommend security fixtures to GSA and security equipment to tenant agencies through the BSA process. However, the training, guidance, and standards that FPS provides to inspectors for selecting technologies are limited. As a result, GSA and tenant agencies have little assurance that the countermeasures inspectors recommend are cost-effective and the best available alternative. 
We have previously reported that by efficiently using cost-effective technology to supplement and reinforce other security measures, agencies can more effectively apply the appropriate countermeasures to vulnerabilities identified through the risk management process, and that linking the chosen technology to countermeasures identified as part of the risk management process provides assurance that factors such as purpose, cost, and expected performance have been addressed. Furthermore, we have recognized that having a method that allows for cost-effectively leveraging technology to supplement and reinforce other measures represents an advanced application of the key practice. Through the BSA process, FPS recommends security fixtures to GSA, and GSA has policies and procedures in place to guide its decisions about the recommended investments and to identify and acquire cost-effective fixtures through established contracts with vendors. FPS inspectors also recommend technology-related security equipment through the BSA process and acquire, install, and maintain the security equipment that tenant agencies approve for purchase. FPS does not have a comprehensive approach for identifying, acquiring, and assessing the cost-effectiveness of the security equipment that its inspectors recommend. Instead, individual FPS inspectors identify equipment and arrange for its purchase, installation, and maintenance. FPS officials told us that inspectors make technology decisions based on the initial training they receive, personal knowledge and experience, and contacts with vendors. FPS inspectors receive some training in identifying and recommending security technologies as part of their initial FPS physical security training. Since FPS was transferred to DHS in 2003, its refresher training program for inspectors has primarily focused on law enforcement. Consequently, inspectors lack recurring technology training. 
Supervisory officers and inspectors from two of the five sites we visited told us that they learn about security technologies on their own by reviewing industry publications and by attending trade shows and security conferences, but inspectors must have the time and funding to attend. A supervisory officer from one FPS region told us the region has sent some inspectors to security conferences sponsored by ASIS International. Additionally, FPS does not provide inspectors with specialized guidance and standards for cost-effectively selecting technology. In the absence of specific guidance, inspectors follow the DOJ minimum countermeasure standards and other relevant ISC standards, but these standards do not assist users in selecting cost-effective technologies. FPS’s devolution of responsibility for selecting technology to individual inspectors, whose knowledge of existing and emerging technologies varies because it is built on limited training and personal experience, results in subjective equipment selection decisions. Additionally, the acquisition process can be time-consuming for inspectors—many of whom have other law enforcement and security duties for multiple buildings—because they must search for equipment and vendors and facilitate the establishment of installation and maintenance contracts. FPS’s process for acquiring, installing, and maintaining technologies provides GSA and tenant agencies with little assurance that they are getting the highest-quality, most cost-effective technology security solutions and that common vulnerabilities are being systematically mitigated across all buildings. For example, an explosives detection dog was used at one location to screen mail that is distributed elsewhere. In 2006, FPS had recommended, based on the results of its risk analysis, the use of this dog and an X-ray machine, although at the time of our visit only the dog was being used. 
Moreover, the dog and handler work 12-hour shifts Monday through Friday when most mail is delivered and shipped, and the dog needs a break every 7 minutes. The GSA regional security officials we spoke with questioned whether this approach was more effective and efficient than using an on-site enhanced X-ray machine that could detect biological and chemical agents as well as explosives and could be used anytime. In accordance with its policies, FPS conducted a BSA of the site in 2008 and determined that using an enhanced X-ray machine and an explosives detection dog would bring the projected threat rating of the site down from moderate to low. FPS included estimated one-time installation and recurring costs in the BSA and executive summary, but did not include the estimated cost and risk of the following mail screening options: (1) usage of the dog and the additional countermeasure; (2) usage of the additional countermeasure only; and (3) usage of the dog only. Consequently, tenant agency representatives would have to investigate the cost and risk implications of these options on their own to make an informed resource allocation decision. FPS is taking steps to implement a more systematic approach to technology acquisition by developing a National Countermeasures Program, which could help FPS leverage technology more cost-effectively. According to FPS, the program will establish standards and national procurement contracts for security equipment, including X-ray machines, magnetometers, surveillance systems, and intrusion detection systems. FPS officials told us that instead of having inspectors search for vendors to establish equipment acquisition, installation, and maintenance contracts, inspectors will call an FPS mission support center with their countermeasure recommendations, and the center will procure the services through standardized contracts. According to FPS, the program will also include life-cycle management plans for countermeasures. 
FPS officials explained that the National Countermeasures Program establishes contractual relationships through GSA Schedule 84 to eliminate the need for individual contracting actions when requirements for new equipment or services are identified. FPS officials told us they worked closely with GSA’s Federal Acquisition Service (FAS) to develop the program and FAS officials concurred stating, for example, that the two agencies have collaborated to ensure that GSA Schedule 84 has a sufficient number of vendors to support FPS requirements for physical security services. FPS officials said they established an X-ray machine contract through the schedule and that future program contracts will also explore the use of the schedule as a source for national purchase and service contracts. According to FPS, the National Countermeasures Program should provide the agency with a framework to better manage its security equipment inventory; meet its operational requirement to identify, implement, and maintain security equipment; and respond to stakeholders’ needs by establishing nationwide resources, streamlining procurement procedures, and strengthening communications with its customers. FPS officials told us they believe this program will result in increased efficiencies because inspectors will not have to spend their time facilitating the establishment of contracts for security equipment because these contracts will be standardized nationwide. Additionally, FPS officials told us that they participate in the research and development of new technologies with DHS’s Science and Technology Directorate. Although the National Countermeasures Program includes improvements that may enhance FPS’s ability to leverage technology, it does not establish tools for assessing the cost-effectiveness of competing technologies and countermeasures and implementation has been delayed. 
Security professionals are faced with a multitude of technology options offered by private vendors, including advanced intrusion detection systems, biotechnology options for screening people, and sophisticated video monitoring. Having tools and guidance to determine which technologies most cost-effectively address identified vulnerabilities is a central component of the leveraging technology key practice. FPS officials told us that the National Countermeasures Program will enable inspectors to develop countermeasure cost estimates that can be shared with GSA and tenant agencies. However, incorporating a tool for evaluating the cost-effectiveness of alternative technologies into FPS’s planned improvements in the security acquisition area would represent an enhanced application of this key practice. Another concern is that FPS had planned to implement the program throughout fiscal year 2009 but extended implementation into fiscal year 2010; thus, it is not clear whether FPS will meet the program’s milestones in accordance with updated timelines. For example, FPS had anticipated that the X-ray machine and magnetometer contracts would be awarded by December 2008, and that contracts for surveillance and intrusion detection systems would be awarded during fiscal year 2009. In May 2009, FPS officials told us that the X-ray machine contract was awarded on April 30, 2009, and that they anticipated awarding the magnetometer contract in the fourth quarter of fiscal year 2009 and an electronic security services contract for surveillance and intrusion detection systems during the second quarter of fiscal year 2010. FPS had planned to test the program in one region before implementing it nationwide, but after further consideration, FPS management decided to forgo piloting the program in favor of rolling it out nationally. Until the National Countermeasures Program is fully implemented, FPS will continue to rely on individual inspectors to make technology decisions. 
It would be beneficial for FPS to establish a process for determining the cost-effectiveness of technologies, one that considers the cost and risk implications for the tenant agencies that decide whether to implement FPS’s countermeasure recommendations. FPS, GSA, and tenant agencies share information and coordinate in a variety of ways at the national, regional, and building levels; however, FPS inspectors do not meet regularly with GSA property managers and tenant agencies, FPS and GSA disagree over what threat and risk information should be shared, and FPS faces technical obstacles to communicating directly with other law enforcement agencies when responding to incidents. At the national level, FPS and GSA share information and coordinate in a variety of ways. We have reported that information sharing and coordination among organizations is crucial to producing comprehensive and practical approaches and solutions to address terrorist threats directed at federal buildings. FPS and the Building Security and Policy Division within GSA’s PBS hold two biweekly teleconferences—one to discuss building security issues and priorities and the other to discuss the status of GSA contractor security background checks. FPS officials stated that this regular contact with GSA has made their relationship more productive and promotes coordination. GSA security officials also recognize the importance of these teleconferences, although they would like more involvement from FPS, such as having better follow-through on meeting action items. Additionally, FPS and GSA are both members of ISC and serve together on various subcommittees and working groups. The FPS Director and the Director of the PBS Building Security and Policy Division participate in an ISC executive steering committee, which sets the committee’s priorities and agendas for ISC’s quarterly meetings. 
These activities could enhance FPS’s and GSA’s collaboration in implementing ISC’s security standards and potentially lead to greater efficiencies. According to FPS and ISC, FPS has consistently participated in ISC working groups, but the staff assigned to some of the groups changed from meeting to meeting. GSA security officials also cited limitations with FPS’s staffing of ISC working groups. FPS and GSA have also established an Executive Advisory Council to enhance the coordination and communication of security strategies, policies, guidance, and activities with tenant agencies in GSA buildings. As the council’s primary coordinator, FPS convened the group for the first time in August 2008, and 17 agencies attended. According to FPS, it intends to hold semiannual council meetings, but as of May 2009, FPS had not held a second formal meeting. This council could enhance communication and coordination between FPS and GSA, and provide a vehicle for FPS, GSA, and tenant agencies to work together to identify common problems and devise solutions. Furthermore, FPS and GSA are renegotiating the 2006 MOA between DHS and GSA to, among other things, improve communication. However, officials told us that this process has been time-consuming and the two parties have different views on the outcomes. FPS and GSA began renegotiating the MOA during fiscal year 2008 and expected to finalize it during fiscal year 2009. However, in May 2009, FPS officials told us they do not have an estimated date for finalizing the MOA, and GSA officials told us they do not anticipate reaching an agreement until fiscal year 2010. FPS and GSA recognize that the renegotiation can serve as an opportunity to discuss service concerns and develop mutual solutions. While FPS and GSA concur that the MOA should be used as an accountability tool, FPS thinks the document should offer general guidelines on the services it provides, but GSA wants a more prescriptive agreement. 
Overall, FPS and GSA regional officials told us that FPS shares some information with GSA and that collaboration between the two agencies has improved. However, the agencies’ satisfaction with this situation differs. The FPS regional officials we spoke with said the agencies’ information sharing and coordination procedures work well, while GSA regional security officials told us that communication should be more frequent and the quality of the information shared needs to be improved. Moreover, according to the GSA officials, FPS’s sporadic and restricted sharing of threat information limits GSA’s ability to protect its properties. We have reported that by having a process in place to obtain and share information on potential threats to federal buildings, agencies can better understand the risks they face and more effectively determine what preventive measures should be implemented. Additionally, we have reported that sharing terrorism-related information that is critical to homeland security protection is important, and agencies need to develop mechanisms that support this type of information sharing. The 2006 MOA between DHS and GSA requires FPS to provide GSA with quarterly briefings at the regional level. However, GSA regional security officials told us that they were not receiving related threat information as part of these updates until October 2008, when the FPS Director—in response to feedback from GSA—instructed regional personnel to share threat information. The FPS Director advised Regional Directors to meet quarterly with their respective GSA regional administrators, regional commissioners, or security representatives to discuss and share information on regional security issues. The Director further stated that briefings should include unclassified intelligence information concerning threats against GSA buildings and updates to the regional threat assessment, as well as information and analysis on protecting the regions’ most vulnerable facilities. 
Moreover, in its strategic plan, FPS recognizes the importance of ensuring that policies and procedures are being established and followed consistently across the country, and asserts that effective communication between headquarters and regional personnel at all levels will aid in this effort. GSA officials also told us that they are taking steps to replicate headquarters structures in their regions to ensure consistent applications of policies and to standardize communication practices. While FPS’s action to share threat information is a positive step, GSA security officials at the national office told us they received feedback from security staff in the regions that threat briefings were not uniform across regions and varied in their usefulness. The majority of the briefings, the officials said, communicated information about crime incidents and did not, in their view, provide threat information. In May 2009, FPS officials told us that regions gave briefings during the second quarter of fiscal year 2009, but GSA security officials told us that some regions reported that they had not received these second quarter briefings. To improve information sharing and coordination at the regional level, FPS standardized its quarterly threat briefing format. FPS officials told us that they partnered with GSA to create a sensitive but unclassified (SBU) facility-specific companion document to the BSA called the “Facility Security Assessment Threat Summary.” According to FPS, this quarterly threat briefing will contain facility-specific information on security performance measures, criminal activity, unclassified intelligence regarding threats, significant events, special FPS law enforcement operations, and potential threats, demonstrations, and other events. FPS officials told us that this quarterly threat briefing format is a positive step in providing a briefing document that GSA can use in evaluating threat information that is germane to its property portfolio. 
FPS officials told us that they finalized the briefing format and that the Director signed the General Services Administration Threat Briefing Policy directive in June 2009. In contrast, in June 2009, GSA security officials told us that they believe they had little involvement in developing FPS’s threat briefing format, explaining that although FPS asked GSA to comment on its proposed format—which, according to GSA, it did in March 2009—FPS had not discussed GSA’s comments with them or updated GSA on the content or status of the format. GSA security officials told us they have representation on an ISC working group that is developing a standardized design basis threat template to support risk assessment threat ratings. According to the 2006 MOA, FPS is to meet with GSA property managers and tenant agency representatives when it discusses the results of its BSAs. Depending on the building’s security level, the BSA may occur every 2 to 4 years. Apart from these briefings, FPS, GSA, and tenant agencies choose how frequently they will all meet. An information sharing best practice that we have reported on is holding regularly scheduled meetings during which participants can, for example, share security management practices, discuss emerging technologies, and create committees to perform specific tasks, such as policy setting. It is critical that FPS, as the provider of law enforcement and related security services for GSA buildings, and GSA, as the manager of these properties, have well-established lines of communication with each other and with tenant agencies to ensure that all parties are aware of the ever-changing risks in a dynamic threat environment and that FPS and GSA are taking appropriate actions to reduce vulnerabilities. Nevertheless, we identified information sharing gaps at all the sites we visited, and found that in some cases these deficiencies led to decreased security awareness and increased risk. 
At one location, we observed during our interview with the building security committee (BSC) that the committee members were confused about procedures for screening visitors who are passengers in employees’ cars that enter the building via the parking garage. One of the tenants recounted an incident in which a security guard directed the visitor to walk through the garage to an appropriate screening station. According to the GSA property manager, this action created a safety hazard. The GSA property manager knew the appropriate screening procedure, but told us there was no written policy on the procedure that members could access. Additionally, BSC members told us that the committee met as needed. At one location, FPS had received inaccurate square footage data from GSA and had therefore overcharged the primary tenant agency for a guard post that protected space shared by all the tenants. According to the GSA property manager, once GSA was made aware of the problem, the agency obtained updated information and worked with the tenant agencies to develop a cost-sharing plan for the guard post, which made the primary tenant agency’s security expenses somewhat more equitable. BSC members told us that the committee met regularly. At one location, members of a BSC told us that they met as needed, although even when they hold meetings, one of the main tenant agencies typically does not participate. GSA officials commented that this tenant adheres to its agency’s building security protocols and does not necessarily follow GSA’s tenant policies and procedures, which GSA thinks creates security risks for the entire building. At one location, tenant agency representatives and officials from FPS told us they met regularly, but GSA officials told us they were not invited to these meetings. GSA officials at this location told us that they invite FPS to their property management meetings for that location, but FPS does not attend. 
GSA officials also said they do not receive timely incident information for the site from FPS and suggested that increased communication among the agencies would help them be more effective managers of their properties and provide tenants with better customer service. At one location, GSA undertook a major renovation project beginning in April 2007. FPS, GSA, and tenant agency representatives did not all meet together regularly to make security preparations or manage security operations during construction. FPS officials told us they had not been invited to project meetings, although GSA officials told us that they had invited FPS and that FPS attended some meetings. In May 2008, FPS discovered that specific surveillance equipment had been removed. As of May 2009, FPS officials told us they did not know who had removed the equipment and were working with tenant agency representatives to recover it. However, in June 2009, tenant agency representatives told us that they believed FPS was fully aware that the equipment had been removed in December 2007. To improve information sharing and coordination at the building level, FPS and GSA plan to implement ISC’s facility security committee standards at all multitenant and single-tenant buildings and campuses after ISC issues them. FPS and GSA could leverage these standards to establish consistent communications and designate the roles and responsibilities of FPS, GSA, and tenant agencies. FPS and GSA have had representation on the ISC working group that is developing the standards. ISC intends to issue the standards in the first quarter of fiscal year 2010, but it is unclear when FPS and GSA will implement them. 
GSA security officials also told us that FPS does not consistently or comprehensively inform GSA of changes to services or provide GSA with contingency plans when FPS deploys inspectors and other personnel to provide law enforcement, security, and emergency response services for special events in support of broader homeland security goals. For example, GSA security officials cited some instances in which FPS reduced its services during the 2009 Presidential Inauguration. They noted, for example, that FPS inspectors did not attend BSC meetings and said that FPS did not inform GSA of all service changes. FPS’s response to special events and critical incidents is governed by the FPS Interim Critical Incident Response Plan issued by the Director in September 2007. This plan does not include procedures for notifying GSA and tenant agencies of expected service changes, restrictions, and modifications at the national, regional, and building levels. FPS officials told us that FPS notified tenant agencies in the National Capital Region of expected service changes, restrictions, and modifications during the 2009 inauguration. Officials also said that, when possible, inspectors personally contacted GSA building managers and tenant agency representatives in the region. However, FPS personnel were deployed from all regions in accordance with the critical incident response plan, and FPS officials did not tell us that regions other than the National Capital Region were notified. Because GSA and tenant agencies rely on FPS to provide critical law enforcement and security services and tenant agencies pay for these services, we believe it is important for FPS to notify these entities in advance of service changes and provide for interim coverage. 
While FPS and GSA acknowledge that the two organizations are partners in protecting and securing GSA buildings, FPS and GSA fundamentally disagree over how much of the information in the BSA should be shared. Per the MOA, FPS is required to share the BSA executive summary with GSA, and FPS believes that this document contains sufficient information for GSA to make decisions about purchasing and implementing FPS’s recommended countermeasures. However, GSA officials at all levels cited limitations with the BSA executive summary, saying, for example, that it does not contain enough contextual information on threats and vulnerabilities to support FPS’s countermeasure recommendations or to justify the expenses that GSA and tenant agencies would incur by installing additional countermeasures. Moreover, GSA security officials told us that FPS does not consistently share BSA executive summaries across all regions. Instead, GSA wants to receive BSAs in their entirety so that it can better protect its buildings and the tenants who occupy them. The BSA executive summary includes a brief description of the building; an overview of the risk assessment methodology; the types of threats that the building is exposed to and their risk ratings; countermeasure recommendations, with estimated installation and recurring costs; and projected threat ratings after countermeasure implementation. In contrast, a complete BSA includes a detailed description of the physical features of the building and its surroundings; a list of interviewees and their contact information; a profile of occupants and agency missions; recent losses, crimes, and security violations; previous security surveys, inspections, and related investigations and studies; descriptions of threats and risk ratings; countermeasure recommendations, with estimated installation and recurring costs; and projected threat ratings after countermeasure implementation. 
When FPS was housed within GSA and PBS, GSA security officials told us, FPS shared BSAs in their entirety with GSA; now, at the national level, GSA can request full BSAs from FPS, and FPS makes determinations on a case-by-case basis by following and interpreting DHS information sharing policies. However, GSA security officials told us that the process for requesting BSAs is informal and that FPS has not been responsive to these requests overall. Furthermore, considering there are nearly 9,000 buildings in GSA’s inventory, this may be an inefficient approach to obtaining key facility protection information. We have found that information sharing and coordination are important at the individual building level and that protecting federal buildings requires building security managers to involve multiple organizations to effectively coordinate and share information to prevent, detect, and respond to terrorist attacks. According to GSA, building protection functions are an integral part of its property preservation, operation, and management responsibilities. In 2000, when FPS was still a part of GSA, Congress considered removing FPS from PBS. At that time, GSA opposed such action, asserting that it would divorce security from other federal building functions when security considerations needed to be integrated into decisions about the location, design, and operation of federal buildings. GSA was concerned that separating FPS from PBS would create an organizational barrier between protection experts and PBS asset managers, planners, project managers, and building managers who set PBS budgets and policies for the GSA inventory as a whole and oversaw day-to-day operations in GSA buildings. However, Congress did not remove FPS from PBS, and FPS remained within GSA and PBS until it was transferred to DHS and ICE under the Homeland Security Act of 2002. 
Prior to the creation of DHS, we expressed concern about separating security from other real property portfolio functions, such as site location, design, and construction for new federal buildings, because decisions on these factors have implications for what types of security will be necessary and effective. We concluded that if DHS was given the responsibility for securing GSA facilities, the role of integrating security with other real property functions would be an important consideration, especially since GSA would still be the caretaker of these buildings. Under the Homeland Security Act of 2002, FPS was transferred to DHS and retained responsibilities for law enforcement and related security functions for GSA buildings and grounds. However, except for law enforcement and related security functions transferred to DHS, under the act, GSA retained all powers, functions, and authorities in law, related to the operation, maintenance, and protection of its buildings and grounds. As a result of the act, GSA and DHS both have protection responsibilities for GSA-controlled buildings and grounds. DHS and GSA developed an MOA to address roles, responsibilities, and operational relationships between FPS and GSA concerning the security of GSA-controlled space. Through this agreement, DHS and GSA determined that FPS would continue to conduct BSAs for GSA. GSA security officials told us that GSA staff at the national, regional, and building levels need the information contained in the BSA to cost-effectively manage their buildings to ensure that they are secure and that their customers, or tenant agencies, are adequately protected. Because GSA personnel do not receive the entire BSA, they must decide on the basis of incomplete information how to use funds to implement countermeasures and mitigate vulnerabilities. 
Furthermore, GSA property managers are responsible for coordinating and maintaining emergency management plans, such as evacuation and continuity of operations plans, and when a safety or security incident arises at a GSA building, GSA assumes a lead role in the incident command. Without complete risk information, GSA is challenged to maintain appropriate situational awareness and preparedness to protect buildings, especially during emergencies. Although the Director of FPS recognizes that FPS and GSA have common interests in protecting GSA buildings and the federal employees who work in them, the Director has determined that GSA does not meet the standards under which FPS shares BSAs and maintains that BSA executive summaries provide GSA with sufficient information. FPS designates the SBU information contained in BSAs as “law enforcement sensitive” (LES) in accordance with DHS and ICE policies. FPS considers the BSA to be an LES document because it incorporates all aspects of a location’s physical security into one document whose release outside of the law enforcement arena could adversely impact the conduct of law enforcement programs. According to FPS, the BSA can include LES information such as: information, details, or criminal intelligence data indicating why a threat is deemed credible; information and details relating to any ongoing criminal investigations, law enforcement operations, or both; and detailed analysis of why the lack or inadequacy of a countermeasure creates an exploitable vulnerability. According to FPS, LES information is safeguarded and determinations to disseminate LES information are made in accordance with a DHS information safeguarding management directive and an ICE directive for safeguarding LES information. FPS maintains that GSA does not need to know the LES information that is contained in the BSA and that if the BSA is released to GSA, the risk of unscrupulous or criminal use of the information would increase significantly. 
According to FPS, the information contained in the BSA is not critical to GSA’s performance of its authorized, assigned mission. FPS further maintains that GSA retains no legal responsibility for the physical protection and law enforcement operations within GSA buildings because the Homeland Security Act of 2002 transferred FPS’s law enforcement and related security functions from GSA to DHS and that, under the act, FPS is responsible for protecting the buildings, grounds, and property under GSA’s control or custody. We have reported on the importance of sharing terrorism-related information that is critical to homeland security protection and have identified a need for agencies to develop mechanisms that support this information sharing. Other federal agencies have found ways to share sensitive information with other entities. For example, in response to a GAO recommendation, the Transportation Security Administration established regulations that allow for sharing sensitive security information with persons covered by the regulations who have a need to know, including airport and aircraft operators, foreign vessel owners, and Transportation Security Administration employees. The ICE directive for safeguarding LES information states that an information sharing and access agreement in the form of a memorandum of understanding or agreement may formalize LES information exchanges between DHS and an external entity. Moreover, according to standard language in FPS’s BSAs, a security clearance is not required for access to LES information; a criminal history check and a national fingerprint check—performed in accordance with Homeland Security Presidential Directive 12 (HSPD-12) investigative requirements—are required. According to GSA, it follows these requirements. 
Moreover, GSA has an information safeguarding policy in place to protect SBU building information, which can include: the location and details of secure functions or space in a building, such as secure routes for prisoners and judges inside courthouses; the location and details of secure space such as security and fire alarm systems; the location and type of structural framing for the building, including any information regarding structural analysis, such as counterterrorism methods used to protect the building and occupants; and risk assessments and information regarding security systems or strategies of any kind. In the 2006 MOA, FPS and GSA agreed that shared SBU information would be handled in accordance with each agency’s information safeguarding policies. Furthermore, one of FPS’s strategic goals is to foster relationships to increase the proactive sharing of information and intelligence. In its strategic plan, FPS states that it will use efficient information sharing and information protection processes based on mutually beneficial, trusted relationships to ensure the implementation of effective, coordinated, and integrated infrastructure protection programs and activities. When we spoke with FPS and GSA officials in 2008, they thought the MOA renegotiation could serve as a platform for determining what BSA information should be shared. However, when we spoke with FPS and GSA officials in 2009, they did not know when the MOA would be renegotiated, and FPS determined it would not change BSA sharing procedures during the renegotiation. Therefore, GSA will continue to receive BSA executive summaries and the individual BSAs that FPS approves for sharing, but it will not have access to other BSA information that it could use to make risk-based decisions to protect its buildings, the federal employees who work in them, and visitors to these buildings. 
In a post-September 11 era, it is crucial that federal agencies work together to share information to advance homeland security and critical infrastructure protection efforts. Information is a vital tool in fighting terrorism, and the timely dissemination of that information to the appropriate government agency is absolutely critical to maintaining the security of our nation. The ability to share security-related information can unify the efforts of federal agencies in preventing or minimizing terrorist attacks. However, in the absence of comprehensive information-sharing plans, many aspects of homeland security information sharing can be ineffective and fragmented. In 2005, we designated information sharing for homeland security as a governmentwide high-risk area because of the significant challenges faced in this area—challenges that are still evident today. It is critical that FPS and GSA—which both have protection functions for GSA buildings, their occupants, and those who visit them—reach consensus on sharing information in a timely manner to support homeland security and critical infrastructure protection efforts. GSA raises strong arguments for having this information, and FPS could do more to resolve this situation. FPS provides the law enforcement response for incidents at GSA buildings, during which it may need to communicate with other first responders. Additionally, DHS can call upon FPS to provide law enforcement and security services at natural disasters or special events such as political demonstrations, and FPS must then communicate with other federal, state, and local first responders. For these situations, having an interoperable communication system is desirable. However, first responders continue to use various, and at times incompatible, communications technologies, making it difficult to communicate with neighboring jurisdictions or other first responders to carry out the response. 
We noted during our review that FPS radios lack interoperability, meaning they are unable to communicate with the equipment used by other law enforcement agencies—federal, state, and local. Delayed communications with area first responders during emergencies could curtail the timeliness and effectiveness of FPS’s law enforcement services. FPS officials at one location told us that only new FPS vehicles have had radio upgrades, some FPS personnel have new handheld radios, and other handheld radios have not been changed in 6 years. Changes in radio technology can inhibit interoperability among first responders, who upgrade their equipment when possible. FPS officials at one location told us that FPS can use the same radio frequency as the local police department, but the two organizations’ radio systems are not fully interoperable because the police use a digital system and FPS does not. Therefore, communication between these entities can be limited. FPS officials at one location told us that federal and local law enforcement agencies communicate with FPS via telephone or through the area FPS MegaCenter, instead of directly through radios, because the organizations’ radio systems are not interoperable. Therefore, communication among these entities can be limited. FPS officials at one location told us that FPS’s handheld radios are not interoperable with those of area federal and local law enforcement personnel, because FPS does not use the same radio band spectrum other federal law enforcement agencies use and instead uses its own ultra-high-frequency band. As a result, communication among these entities is limited. FPS officials at one location told us that their radios are not interoperable with those of the local police department. Therefore, communication between the two entities can be limited. FPS is exploring whether it can connect to the police department through a local interagency communications system. 
FPS is developing a National Radio Program that includes a component intended to make FPS’s radios interoperable with those of other federal, state, and local law enforcement organizations. FPS began planning this initiative in 2008, was working to fill the program manager position by the end of June 2009, and expects to achieve full implementation by 2013. According to FPS officials, they have established a branch under the FPS MegaCenter program specifically dedicated to enhancing and supporting the National Radio Program. Consistent with establishing this new branch, FPS officials said they are working to fill contract positions in each region for radio technicians to support the technical requirements associated with mobile radios, portable radios, programming, and the radio network infrastructure. According to FPS, they are working to contract for a survey and design team to coordinate with FPS’s regional offices, the National Radio Program, and the MegaCenter program to standardize and enhance the national radio infrastructure. According to FPS officials, the enhancements to FPS’s communications will provide solutions for newer technologies and will meet national communications standards and DHS standards for advanced encryption. FPS officials said they are beginning an internal evaluation of FPS’s existing communications capabilities, which should allow future enhancement efforts to be prioritized as part of an overall effort to enhance national radio coverage. FPS officials said they are working to procure, program, and issue more than 2,900 new radios that conform to new equipment standards and will eventually phase out older equipment used by FPS officers and guards. FPS officials said that all future radios issued will conform to updated standards to promote uniformity and enhanced support capabilities. 
While FPS officials think interoperability will be improved under this initiative, they cautioned that their law enforcement counterparts’ communication equipment must meet DHS’s advanced encryption standard, which can be a challenge for state and local partners. FPS has a number of improvements planned or in development that, if they fully incorporate the key practices, will provide greater assurance that FPS is effectively protecting GSA buildings and maximizing security investment dollars. The key practices we examined vis-à-vis FPS—allocating resources using risk management, leveraging technology, and information sharing and coordination—are critical components of an effective and efficient physical security program. However, FPS’s application of these practices had limitations, and as a result, there is a lack of assurance that federal buildings under the control and custody of GSA, the employees who work in them, and visitors to them are being adequately protected. Related to allocating resources using risk management, FPS’s assessment of risks at buildings is a critical responsibility considering the results lay the foundation by which GSA and tenants make resource allocation decisions. However, FPS’s current risk assessment process is inadequate and its efforts to improve it through the development of RAMP have been delayed. Related to leveraging technology, planned improvements to the way inspectors acquire security equipment through the National Countermeasures Program have also experienced delays. Moreover, knowing the cost implications of different alternatives is the foundation of this key practice, yet FPS is not directly addressing this critical element. Continued delays in the implementation of improvements in these critical areas—risk management and leveraging technology—are of concern and deserving of greater attention by DHS management. 
Furthermore, related to information sharing and coordination, FPS’s communications with GSA and tenants could benefit from more clearly defined parameters for consistency, frequency, and content, and issues related to interoperability with other law enforcement agencies surfaced as a concern that FPS is trying to address. Without a greater focus on the key practices, FPS will be ill-equipped to sufficiently manage security at GSA buildings, and assist with broader homeland security efforts as the security landscape changes and new threats emerge. We are making three recommendations to the Secretary of Homeland Security aimed at moving FPS toward greater use of the key practices we assessed. Specifically, we recommend that the Secretary instruct the Director of FPS, in consultation, where appropriate, with other parts of DHS, GSA, and tenant agencies to take the following three actions: 1. Provide the Secretary with regular updates, on a mutually agreed-to schedule, on the status of RAMP and the National Countermeasures Program, including the implementation status of deliverables, clear timelines for completion of tasks and milestones, and plans for addressing any implementation obstacles. 2. In conjunction with the National Countermeasures Program, develop a methodology and guidance for assessing and comparing the cost-effectiveness of technology alternatives. 3. 
Reach consensus with GSA on what information contained in the BSA is needed for GSA to fulfill its responsibilities related to the protection of federal buildings and occupants, and accordingly, establish internal controls to ensure that shared information is adequately safeguarded; guidance for employees to use in deciding what information to protect with SBU designations; provisions for training on making designations, controlling, and sharing such information with GSA and other entities; and a review process to evaluate how well this information sharing process is working, with results reported to the Secretary regularly on a mutually agreed-to schedule. We provided a draft of the sensitive but unclassified report to DHS and GSA for review and comment. DHS agreed with our assessment that greater attention to key practices would improve FPS’s approach to facility protection and agreed with the report’s recommendations. Furthermore, DHS stated that FPS will continue to work with key stakeholders to address other security issues that were cited in our report, for which specific recommendations were not made. With respect to the first recommendation—to provide the Secretary of Homeland Security with regular updates on the status of RAMP and the National Countermeasures Program—DHS stated that FPS will submit a consolidated monthly report to the Secretary. Although DHS agreed with our second and third recommendations, we are concerned that the steps it described are not comprehensive enough to address the intent of the recommendations. For the second recommendation—to develop a methodology and guidance for assessing and comparing the cost-effectiveness of technology alternatives—DHS commented that such efforts will be a part of FPS’s development of RAMP and that future phases of RAMP will include the ability to evaluate countermeasure alternatives based on cost and the ability to mitigate identified risks. 
However, RAMP has experienced delays, and it is unclear when this future component of RAMP will be developed and implemented. Moreover, as we reported, FPS inspectors have considerable latitude in determining which technologies and other countermeasures to recommend, but receive little guidance to help them assess the cost-effectiveness of these technologies. Until the cost-analysis component of RAMP is implemented, it will be important for inspectors to have guidance they can use to make cost-effective countermeasure recommendations so that GSA and tenant agencies can be assured that their investments in FPS-recommended technologies and other countermeasures are cost-effective, consistent across buildings, and the best available alternatives. Regarding the third recommendation—to reach consensus with GSA on what information contained in the BSA is needed for GSA to fulfill its protection responsibilities and to establish information sharing and safeguarding procedures—DHS responded that FPS is developing a facility security assessment template as a part of RAMP to produce reports that can be shared with GSA and other agencies. However, DHS did not explicitly commit to reaching consensus with GSA in identifying building security information that can be shared, or to the steps we outlined in our recommendation—steps that in our view comprise a comprehensive plan for sharing and safeguarding sensitive information. As we reported, FPS and GSA fundamentally disagree over what BSA information should be shared, and FPS has decided not to discuss this matter with GSA as part of the MOA renegotiation. Furthermore, RAMP continues to experience delays, and it is unclear when it will produce facility security assessments that can be shared with GSA. 
Therefore, it is important that FPS engage GSA in identifying what building security information can be shared and follow the information sharing and safeguarding steps we included in our recommendation to ensure that GSA acquires the information it needs to protect the 9,000 buildings under its control and custody, the federal employees who work in them, and those who visit them. GSA agreed with our findings concerning the challenges that FPS faces in delivering security services for GSA buildings. GSA indicated that it will continue to work closely with FPS to ensure the protection of GSA buildings, their tenants, and visitors to these buildings. GSA stated that it will work with FPS to address our recommendation that the two agencies reach a consensus on the sharing and safeguarding of information contained in BSAs. DHS also provided technical comments, which we incorporated where appropriate. DHS’s comments can be found in appendix II and GSA’s comments can be found in appendix III. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Homeland Security, the Acting Administrator of General Services, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. 
The objective of this report was to determine whether the Federal Protective Service’s (FPS) approach to security for buildings under the control and custody of the General Services Administration (GSA) reflects key facility protection practices. Through previous work, we identified a set of key practices from the collective practices of federal agencies and private sector entities that can provide a framework for guiding agencies’ protection efforts and addressing challenges. These key practices form the foundation of a comprehensive approach to building protection. We used our key facility protection practices as criteria to evaluate the steps that FPS has taken. We used the following key practices as criteria: allocating resources using risk management; leveraging technology; and information sharing and coordination. For the purposes of this review, we did not consider three other key practices for varying reasons: performance measurement and testing, because we reported on the limitations FPS faces in assessing its performance in 2008; aligning assets to mission, because GSA, not FPS, controls the asset inventory; and strategic management of human capital, because we are currently reviewing FPS’s management of human capital. To examine FPS’s application of key practices at the building level, we selected five sites, basing our selection on factors that included geographical diversity, high occupancy, the building’s designated security level, other potential security considerations such as new or planned building construction, and recent and ongoing work. Selected sites included three multitenant level IV buildings, one single-tenant level IV campus, and one single-tenant level III campus. Collectively, the sites we selected illustrate the range of building protection practices applied by FPS. At each site, we interviewed FPS, GSA, and tenant agency officials with primary responsibility for security implementation, operation, and management. 
We toured each site and observed the physical environment, the buildings, and the principal security elements to gain firsthand knowledge of the building protection practices. We collected documents, when available, that contained site-specific information on security risks, threats, budgets, and staffing for analysis. Because we observed FPS’s efforts to protect GSA buildings at a limited number of sites, our observations cannot be generalized to all the buildings that FPS is responsible for securing. To supplement these site visits, we interviewed FPS and GSA security officials from the four regions where we had visited buildings—regions 2, 4, 7, and 11. We also interviewed FPS and GSA security officials at the national level and collected supporting documentation on security plans, policies, procedures, budgets, and staffing for analysis. For example, we reviewed the 2006 Memorandum of Agreement between the Department of Homeland Security (DHS) and GSA that sets forth the security responsibilities of FPS and GSA at federal buildings. We also interviewed the executive director of the Interagency Security Committee (ISC), and we analyzed ISC’s Facility Security Level Determinations for Federal Facilities, Security Design Criteria for new Federal Office Buildings and Major Modernization Projects, and Security Standards for Leased Space. We also analyzed the facility security level standards and minimum security requirements set forth in the Department of Justice’s (DOJ) Vulnerability Assessment of Federal Facilities. We analyzed FPS planning documents, including FPS’s 2008-2011 Strategic Plan and the Risk Assessment and Management Program Concept of Operations. We analyzed laws that described FPS and GSA’s protection authorities, including the Homeland Security Act of 2002 and Title 40 of the United States Code. 
We also analyzed laws and internal documents that govern FPS’s information safeguarding practices including DHS Management Directive 11042.1, Safeguarding Sensitive But Unclassified Information and ICE Directive 73003.1, Safeguarding Law Enforcement Sensitive Information. We conducted this performance audit from January 2008 to September 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective. In addition to the contact named above, David Sausville, Assistant Director; Denise McCabe, Analyst-in-Charge; Anne Dilger; Elizabeth Eisenstadt; Brandon Haller; Robin Nye; Susan Michal-Smith; and Adam Yu made key contributions to this report.
There is ongoing concern about the security of federal buildings and their occupants. The Federal Protective Service (FPS) within the Department of Homeland Security (DHS) is responsible for providing law enforcement and related security services for nearly 9,000 federal buildings under the control and custody of the General Services Administration (GSA). In 2004, GAO identified a set of key protection practices from the collective practices of federal agencies and the private sector that included: allocating resources using risk management, leveraging technology, and information sharing and coordination. As requested, GAO determined whether FPS's security efforts for GSA buildings reflected key practices. To meet this objective, GAO used its key practices as criteria, visited five sites to gain firsthand knowledge, analyzed pertinent DHS and GSA documents, and interviewed DHS, GSA, and tenant agency officials. FPS's approach to securing GSA buildings reflects some aspects of key protection practices, and FPS has several improvements underway such as a new risk assessment program and a countermeasure acquisition program. While FPS's protection activities exhibit some aspects of the key practices, GAO found limitations in each of the areas. FPS assesses risk and recommends countermeasures to GSA and tenant agencies; however, FPS's ability to influence the allocation of resources using risk management is limited because resource allocation decisions are the responsibility of GSA and tenant agencies, which may be unwilling to fund FPS's countermeasure recommendations. Moreover, FPS uses an outdated risk assessment tool and a subjective, time-consuming process. As a result, GSA and tenant agencies are uncertain whether risks are being mitigated. Concerned with the quality and timeliness of FPS's risk assessment services, GSA and tenant agencies are pursuing some of these activities on their own. 
Although FPS is developing a new risk management program, full implementation is not planned until the end of fiscal year 2011 and has already experienced delays. With regard to leveraging technology, FPS inspectors have considerable latitude for selecting technologies and countermeasures that tenant agencies fund, but FPS provides inspectors with little training and guidance for making cost-effective choices. Additionally, FPS does not provide tenant agencies with an analysis of alternative technologies, their cost, and associated reduction in risk. As a result, there is limited assurance that the recommendations inspectors make are the best available alternatives and tenant agencies must make resource allocation decisions without key information. Although FPS is developing a program to standardize security equipment and contracting, the program has run behind schedule and lacks an evaluative component for assessing the cost-effectiveness of competing technologies and countermeasures. FPS has developed information sharing and coordination mechanisms with GSA and tenant agencies, but there is inconsistency in the type of information shared and the frequency of coordination. Lack of coordination through regular contact can lead to communication breakdowns. For example, during a construction project at one location, the surveillance equipment that FPS was responsible for maintaining was removed from the site during 2007. FPS and tenant agency representatives disagree over whether FPS was notified of this action. Furthermore, FPS and GSA disagree over what building risk assessment information can be shared. FPS maintains that the sensitive information contained in the assessments is not needed for GSA to carry out its mission. However, GSA maintains that restricted access to the risk assessments constrains its ability to protect buildings and occupants.
Congress created the CTR program in 1991 to help the states of the former Soviet Union secure and eliminate their weapons of mass destruction and prevent their proliferation. Through the CTR program, the United States has supported activities to eliminate nuclear missiles, build a storage facility for nuclear materials, eliminate chemical weapons, secure biological pathogens, and employ former weapons scientists. As of January 2005, the CTR program has assisted in the elimination of about 570 intercontinental ballistic missiles and nearly 30 nuclear-powered ballistic missile submarines. In 2004, Congress authorized DOD to expand the scope of the CTR program to countries outside the former Soviet Union. For example, beginning in 2005, CTR assistance will help Albania destroy its chemical weapons stockpile. Figure 1 shows the DOD management structure for the CTR program. Within the Office of the Under Secretary of Defense for Policy, the CTR Policy Office is responsible for developing and coordinating policy guidance, defining program objectives for the CTR program, and negotiating agreements with CTR recipients. The CTR Policy Office works with the office of the Under Secretary of Defense for Acquisition, Technology, and Logistics through the Deputy Assistant to the Secretary of Defense for Chemical Demilitarization and Threat Reduction. DTRA reports to the Assistant to the Secretary of Defense for Nuclear and Chemical and Biological Defense Programs. The Deputy Assistant to the Secretary of Defense for Chemical Demilitarization and Threat Reduction provides strategic implementation guidance on and oversight of CTR projects, and interacts daily with DTRA on CTR matters. Within DTRA, the Cooperative Threat Reduction (CT) directorate manages the program’s daily operations. 
The directorate is organized into five program areas: Biological Weapons Proliferation Prevention, Chemical Weapons Elimination, Nuclear Weapons Safety and Security, Strategic Offensive Arms Elimination, and Weapons of Mass Destruction (WMD) Proliferation Prevention. (For a more detailed description of these program areas, see app. IV.) The directorate is located at Ft. Belvoir, Virginia, and several DTRA offices throughout the former Soviet Union provide in-country support for CTR program implementation. At the beginning of the program in 1992, DOD primarily purchased and provided equipment such as cranes, cutting tools, and vehicles to recipient countries. As the program matured, CTR assistance provided more services, such as hiring U.S. contractors who helped recipient countries dismantle nuclear delivery systems and missiles. Currently, CTR provides most assistance to recipient countries through contracts with American firms. DOD executes, manages, and reviews the contracts according to DOD and federal acquisition requirements. Specifically, in 2001, the CTR program began using special contracts with prime contractors who, with their teams of supporting subcontractors, implement the majority of CTR projects in the recipient countries. These five contractors are known as CTR Integrating Contractors. DTRA has also contracted with the Science Applications International Corporation’s Threat Reduction Support Center (TRSC). TRSC staff provide support to CTR program and project managers in the areas of operations, logistics, engineering, financial, and program management. Since 2003, DOD has improved its management and internal controls over the CTR program. Prior to 2003, DOD’s internal controls over the program were limited and did not ensure that CTR program objectives were being met. Following two project failures in Russia, DOD implemented a series of new measures in 2003 that provided a more structured approach to managing the CTR program. 
Most importantly, in July 2003, DOD filled vacancies within AT&L, the office responsible for ensuring that DTRA’s implementation of CTR projects was meeting cost, schedule, and performance goals. After DOD filled these positions, the new leadership worked closely with DTRA officials to introduce important enhancements to the program’s internal controls. For example, DOD adopted several new methods to assess and mitigate the risks involved in cooperating with CTR-recipient governments. Although these methods attempt to reduce risk to an acceptable level, DOD cannot fully mitigate the risks involved in working jointly with CTR-recipient governments. While DOD’s enhancements are an improvement over the previous management and internal controls for the program, CTR procedures do not include final reviews of CTR projects upon their completion. As such, DOD has no mechanism for assessing the success of completed projects and applying lessons learned to future projects. Beginning in 2003, DOD implemented several new and enhanced management processes to allow program managers to better assess the progress of CTR projects and address program implementation weaknesses to reduce the risk of program failures. For example, DOD filled vacant AT&L positions; developed specific guidance for project managers on reporting objectives, schedules, and cost estimates; and improved communication within the program and with recipient countries. (For a comparison of DOD’s CTR internal controls with selected control standards for the federal government, see app. V.) DOD developed a training course that all CTR project and program managers are required to complete, which provides detailed instruction on incorporating the new requirements of the internal control framework into all CTR projects. According to 24 of the 30 CTR program, policy, and acquisition officials responding to our structured interview, the new framework has helped improve CTR project implementation. 
For example, CTR officials stated that the program management review system is now more rigorous and project managers know what is expected of them in reporting on the cost, schedule, and performance of their projects. In July 2003, DOD filled AT&L vacancies, closing a critical gap in the department's ability to ensure that the CTR program was meeting cost, schedule, and performance goals. Previously, DOD had not been carrying out its own management plans for ensuring that CTR projects were meeting stated goals. Specifically, in May 1994, the Deputy Secretary of Defense approved a plan to strengthen the implementation of CTR projects. Under this plan, the CTR policy office was responsible for negotiating agreements with recipient countries, establishing policy guidance, working on the CTR budget, and notifying Congress of developments in the program. After CTR policy approved a project and signed an agreement to begin work, AT&L was responsible for developing detailed implementation plans, monitoring ongoing work, and ensuring that work was meeting cost, schedule, and performance goals. However, DOD left several AT&L positions vacant until 2003, creating a critical gap in oversight of the CTR program. The CTR policy office began managing daily CTR project activities to fill this leadership gap. However, according to the director of the policy office, staff in that office were not qualified to manage the activities of the program because they were not familiar with DOD acquisition guidelines and did not have the technical expertise necessary to manage CTR programs. According to a 2004 DOD Inspector General (IG) report on the management of the CTR program, if the AT&L positions had been filled, those officials might have identified some of the risks involved in the two failed CTR projects that cost DOD nearly $200 million. 
Since the AT&L positions were filled in July 2003, the office now participates in CTR program planning and review, oversees program review meetings, and provides guidance on issues such as performance measurement and reporting requirements. The Deputy Assistant Secretary of Defense for Chemical Demilitarization and Threat Reduction attends informal monthly meetings with CTR program managers to be updated on the status of projects and other management issues. He also serves as the program reviewer for several CTR projects, making him responsible for overseeing the cost, schedule, and performance of each of those projects and approving them at the end of each project phase. For example, in July 2004, he approved a biological weapons proliferation prevention project's acquisition program baseline and authorized the program manager to move the project into the demonstration phase. CTR officials stated that it is now clear to whom they need to report and when. DOD uses several new methods to assess and mitigate risks associated with CTR projects. DOD identifies a senior official responsible for ensuring that the potential risks to meeting objectives are evaluated for each project, requires stakeholders on each project to meet regularly to conduct specific risk management activities, and implements each project in three phases. According to DOD's risk management guide, risk is defined as a measure of the potential inability of a program to achieve its overall program objectives within defined cost, schedule, and technical constraints. DOD's approach to assessing program risks was limited prior to 2003. In September 1996, we reported that the CTR multiyear plan did not indicate whether program officials had omitted risk and contingencies from project cost estimates. In addition, a 2003 DOD IG report found that DOD did not identify risks or have adequate controls in place to mitigate risk when managing projects. 
According to a CTR official, CTR program and project managers periodically included risk assessments in planning their projects but did not include actions to control the identified risks if problems occurred. The DOD IG reported that the CTR program management's failure to fully assess project risks contributed to DOD spending nearly $200 million on projects in Russia to construct a liquid rocket fuel disposition facility that was never utilized and to design a solid rocket motor elimination facility that was never constructed. In an effort to improve assessments of the risks associated with CTR projects, DOD began designating an official, known as the Milestone Decision Authority (MDA), to be responsible for ensuring that project managers, with assistance from project stakeholders, assess the risks to meeting project objectives and formulate plans to mitigate these risks. MDAs are assigned to projects based on several factors, including the project's risk and expected cost. According to an AT&L official, the Deputy Assistant Secretary of Defense for Chemical Demilitarization and Threat Reduction is usually assigned as the MDA for high-cost or high-risk projects. For projects with less risk or expense, the MDA is usually the director of the DTRA/CT directorate. MDAs review the risks identified by the project managers and evaluate the plans they have developed to mitigate these risks. In addition, DOD instituted periodic stakeholder meetings to assess and minimize risks associated with CTR projects and to discuss major project issues and milestones. In these meetings, project managers present assessments of potential risks that could affect their ability to meet project objectives. For example, a risk identified for the Russian SS-24 missile elimination project was that political or economic developments in Russia might unexpectedly affect the project's costs. 
After the project managers present their assessments, the stakeholders provide input to address these risks and consider additional problems that may arise during project implementation. According to CTR management officials, this team approach to risk assessment ensures consensus early in each phase of the project. It has resulted in more informed decision making because stakeholders meet regularly to receive updates on project status and make decisions on the next phase of project implementation based on the facts presented during those meetings. Of the 30 DOD and CTR officials we interviewed using our structured interview guide, 9 said that this new process of stakeholder involvement was one of the most important new internal controls for the CTR program. Furthermore, DOD now uses a new phased-contract approach that divides each CTR project into three phases. These phases can vary by project but usually cover project development, project execution, and project maintenance, according to a CTR official. This approach helps minimize risk by allowing managers to adjust, delay, or stop a project if a problem occurs. For example, in 2003, during the development phase of a Ukrainian SS-24 missile elimination project, DOD decided not to proceed with the project because the risks associated with the missile destruction method that the Ukrainians wanted to use were too high. Project managers are required to develop exit criteria for each project phase that clearly state the conditions under which the project will be permitted to move into the next phase and the conditions under which DOD will stop the project. For example, for a CTR project tasked with eliminating Russia's SS-25 missiles, one of the exit criteria for moving into the project's maintenance phase is that DOD complete negotiations on the contract to maintain the missile elimination facility that is being constructed. 
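The phased approach amounts to a gate check at each phase boundary: every exit criterion must be satisfied before a project may advance, and an unmet criterion flags the project for management attention. The sketch below is a minimal illustration of that logic only; the class name, phase name, and criterion strings are assumptions for illustration and are not drawn from actual CTR documentation.

```python
from dataclasses import dataclass, field


@dataclass
class PhaseGate:
    """A project phase whose exit criteria must all be met before the
    project is permitted to move into the next phase."""
    name: str
    exit_criteria: dict[str, bool] = field(default_factory=dict)

    def may_advance(self) -> bool:
        # A single unmet criterion holds the project at this phase.
        return all(self.exit_criteria.values())

    def unmet(self) -> list[str]:
        # List the criteria that still block advancement.
        return [c for c, met in self.exit_criteria.items() if not met]


# Hypothetical example loosely patterned on the SS-25 case: the facility
# maintenance contract must be negotiated before the maintenance phase.
execution = PhaseGate("execution", {
    "elimination facility constructed": True,
    "facility maintenance contract negotiated": False,
})

if not execution.may_advance():
    print("Hold at", execution.name, "phase; unmet criteria:", execution.unmet())
```

In the actual program, of course, the decision to adjust, delay, or stop a project rests with the Milestone Decision Authority; a check like this would merely surface which agreed conditions remain unmet.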
In 2003, DOD devised and implemented new guidelines that provide CTR project managers with written instructions on developing and reporting project objectives, schedules, and cost estimates. According to the internal control guidelines for the federal government, it is important for an organization to establish measures to gauge its performance on critical activities and determine if the organization is meeting its objectives. CTR program area and project managers we interviewed stated that prior to 2003 there were no established procedures for developing performance measures, evaluating project performance, or reporting (either orally or in writing) on project implementation to management. In addition, project plans were not comprehensive and lacked established baselines against which to measure performance. According to CTR project managers, the current guidance on performance measurement is clearer and more consistent than in the past. For example, in a training course required for all CTR program and project managers, project managers are instructed on developing measures for how, when, in what sequence, and at what cost specific project tasks will be completed. Our fieldwork included a site visit to a CTR project in Russia that had developed such measures. One measure used to gauge performance on that project is whether the elimination of Russian SS-24 missiles complies with arms control treaty requirements. For each measure, project managers develop objectives (the measure's desired outcome) and thresholds (the minimum acceptable performance for that measure). For example, one objective for the SS-24 missile elimination project is to eliminate Russia's SS-24 missiles by March 2008. However, if the missiles cannot be eliminated by then, they must be eliminated by the threshold date of August 2008. (Figure 2 shows the elimination of an SS-24 engine.) 
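For schedule-based measures like the SS-24 example, the objective-versus-threshold scheme reduces to a simple date comparison. The sketch below is a hypothetical illustration only: the function name is invented, and the day-level dates are assumptions, since the report gives only month-level dates.

```python
from datetime import date


def assess_measure(projected: date, objective: date, threshold: date) -> str:
    """Classify a projected completion date against a measure's objective
    (the desired outcome) and threshold (minimum acceptable performance)."""
    if projected <= objective:
        return "on track"
    if projected <= threshold:
        # Objective missed but threshold still achievable: flag for management.
        return "warning"
    return "threshold breach"  # management may consider halting the project


# SS-24 example: objective March 2008, threshold August 2008 (exact days assumed).
objective = date(2008, 3, 31)
threshold = date(2008, 8, 31)
print(assess_measure(date(2008, 2, 1), objective, threshold))   # on track
print(assess_measure(date(2008, 6, 1), objective, threshold))   # warning
print(assess_measure(date(2008, 10, 1), objective, threshold))  # threshold breach
```

The middle outcome corresponds to the condition under which a project manager would alert management that the objective is in danger while the threshold remains achievable.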
If the threshold is not met at the end of a particular project phase, the project manager and DOD management officials may consider stopping the project. When an indicator is in danger of not being met, the project manager is required to submit a warning report to the project's MDA to ensure that management is aware of potential delays and that the project manager is addressing the problem. If the indicator is not met, DOD management officials may stop the project until a plan is in place to bring the indicator up to the threshold level. In 2003, DOD introduced a new process to review projects and programs to provide a more systematic and consistent structure to management's review of CTR projects. According to the internal control guidelines for the federal government, program reviews are important for program management because they provide comparisons of actual performance to planned or expected results and help management assess its programs. Before 2003, however, program reviews lacked the detail that would have allowed senior management to evaluate projects and risks consistently, and, according to CTR program managers we interviewed, there was no standardized guidance to assist program managers in developing program reviews or implementing their programs. For example, CTR program area and project managers did not receive any guidance on how to report on the daily management of program operations or on the type of information that status reports should include. Under the new program review system, the designated MDA conducts reviews of a project's cost, schedule, and performance objectives. During program reviews, which take place periodically throughout the course of a project, project managers report to their MDAs on the status of their projects and whether the objectives are being met. In addition, these review meetings are more detailed than they were before the new system was in place. 
For example, a project review in 2004 for a CTR project tasked with installing nuclear detection devices in Uzbekistan included details on the project's schedule over the next 3 years, with specific dates for completion of certain milestones. It also included a detailed breakdown of funding for the project over the next 3 years and a thorough discussion of project risks. This information was not included in the project's 2003 review. According to several CTR project managers, the new program review system has resulted in more consistently conducted project evaluations. Of the 30 DOD officials we interviewed, 19 said the program review process, conducted by the MDA, was one of the most important new internal controls for the CTR program. They reported that, with the introduction of the MDA, program reviews are occurring at the same intervals for each project and that project managers report cost, schedule, and performance data in the same format to their MDAs during the reviews. In the course of our work, we reviewed copies of various MDA project review documents. According to DOD officials, communication within the CT directorate and among the DOD offices involved in the CTR program has improved with the introduction of new internal controls. DOD also has improved its external communications with CTR-recipient countries. Internal control guidelines for the federal government state that communication mechanisms should exist within an organization to allow the easy flow of information down, across, and up the organization. However, before 2003, internal communications within the CTR program office were not clear, according to DOD officials. For example, not all CTR stakeholders were present during project development meetings, nor were they involved in early decision making about project risks. Communications between DOD and CTR-recipient governments also were not clear. 
DOD assumed, without obtaining written documentation, that CTR-recipient countries would carry out the responsibilities and commitments to which they agreed. Since 2003, communication among the DOD offices working on the CTR program has improved. Stakeholders on specific projects meet more frequently now than in the past to discuss project issues and problems. Project managers involve stakeholders from the earliest stages of project development through the final phase of project completion to ensure that stakeholders and managers have regular opportunities to learn about project developments and provide input on project implementation. This system has now been institutionalized, and all CTR project managers are instructed in a new training course to convene meetings with stakeholders throughout the life of their projects. In addition, new reporting requirements help ensure that all stakeholders are informed of project developments. All of the 30 DOD officials we interviewed said that they are required to report on the cost, schedule, and performance of their programs and projects periodically, including daily, weekly, monthly, and quarterly. For example, DOD now requires program managers to submit monthly project status reports to ensure that potential problems are documented and stakeholders are informed of them. In addition, 28 of the 30 DOD officials in our structured interview reported that the amount of communication within the CT directorate allows them to effectively implement their projects. Project managers are in frequent contact with contractors implementing projects in recipient countries. We observed a meeting in Russia between a CTR project manager and the Russian contractors implementing the project he manages. During the meeting, they negotiated revisions to a new contract and discussed the project's status. The project manager makes similar trips at least once a month to the project site to oversee progress and meet with the contractors. 
Other project managers we interviewed in Russia and the United States stated that they hold weekly phone conferences with contractors, exchange emails, and make regular visits to project sites. Contracting officials in Russia stated that they hold weekly telephone conferences with their CTR project managers and contact them regularly when project implementation issues arise. We observed such a weekly telephone communication during our visit to the International Science and Technology Center in Moscow. Contractors also submit monthly written reports. Project managers also are in daily contact with their program managers and CT directorate management. According to a CTR official, at quarterly program review meetings, program and project managers present detailed information, both orally and in writing, on the status of their projects to all involved stakeholders. DOD has also improved its external communications with CTR-recipient countries. DOD and recipient government officials now consistently share more detailed information on project developments and issues of concern. CTR management officials and program and project managers are in frequent contact with their recipient government counterparts throughout project implementation. In 2004, CTR teams made 165 trips to meet recipient government officials and improve their monitoring of CTR projects, compared with 70 trips in fiscal year 2001. Russian government officials working on CTR projects stated that they communicate with CTR officials continually and meet regularly with the director of the CT directorate. They also hold weekly teleconferences with project managers, and project managers visit project sites regularly. While traveling with CTR project managers in Russia and Kazakhstan, we observed extensive discussions of important issues during site visits and meetings with contractors and recipient government officials. 
Furthermore, DOD has introduced and updated its controls to ensure that commitments made by the CTR program and recipient governments are regularly documented and discussed. These controls also are a means to ensure that each party is held accountable for its responsibilities. In 2003, DOD began using Joint Requirements and Implementation Plans (JRIP) to document the commitments and responsibilities agreed to by each party involved in project implementation. For example, a requirements plan for a CTR project tasked with eliminating a specific type of Russian nuclear missile states that one of DOD’s responsibilities in implementing the project is to design and construct storage facilities for the missiles to be eliminated. One of the Russian government’s responsibilities on the same project is to provide DOD with a schedule for the delivery of the missiles to the proper facility for elimination. If either party fails to meet its obligations as articulated in the document, the other party can stop progress on the project. For example, DOD officials halted new construction from March to June 2004 at the CTR-funded chemical weapons destruction facility at Shchuch’ye until the Russian government stopped insisting on unnecessary design changes for the construction of a boiler house on the site. To further enhance communication between CTR program officials and CTR recipient countries, DOD also holds biannual meetings where officials from both sides meet to review and discuss project implementation and revise plans when necessary. According to CTR management officials and JRIP documents we reviewed, these meetings provide a regular forum for discussion that was not previously available and have improved communication between DOD officials and CTR-recipient governments. DOD faces significant challenges in collaborating with CTR-recipient governments to jointly implement projects and ensure that assistance is used to meet program objectives. 
Successful implementation of CTR projects requires the cooperation of recipient governments, but DOD cannot fully mitigate the risks involved in working jointly with these governments. First, working with CTR-recipient governments often involves lengthy negotiations to reach agreements on various issues throughout a project's implementation. This can delay U.S.-funded efforts to help secure or dismantle weapons of mass destruction by months or years. Second, risks to a project can increase when implementation begins before the necessary agreements are in place. Third, after agreements are reached and implementation is under way, additional risk is introduced by the control environment within the recipient governments. For instance, if a recipient government has a poor control environment, the risk increases that agreed-to objectives and conditions will not be met. In cooperating with CTR-recipient governments, DOD must conclude a variety of agreements, which can require lengthy negotiations. The highest-level agreements, called umbrella agreements, provide an overall legal framework for U.S. and CTR-recipient countries' cooperation in implementing projects. Implementing agreements outline the types and amounts of assistance to be provided for specific CTR projects. For instance, projects to eliminate strategic nuclear arms, including strategic bombers, missiles, and related equipment, are conducted under the Strategic Nuclear Arms Elimination Implementing Agreement signed by DOD and the Ukrainian Ministry of Defense in December 1993. Agreement amendments update the annual amount of funding that CTR will provide for a specific project within a recipient country. For example, the December 2004 agreement amendment for biological weapons proliferation projects with the government of Kazakhstan provides for $30 million in CTR funding during fiscal years 2004 and 2005. 
The recipient governments must sign agreements or agreement amendments before projects can begin and funding can be provided or increased, but this may take time and delay projects, according to CTR officials. According to a CTR program area manager, the Russian government took more than 18 months to sign an implementing agreement for nuclear weapons transportation and security projects because it did not want to reveal the location of nuclear weapons storage sites that the government planned to close. In 2004, the government of Kazakhstan took more than 6 months to sign the annual agreement amendment for biological weapons proliferation projects. According to CTR contractors and officials at Kazakhstani biological research facilities, the government’s delay slowed efforts to improve the security and safety of biological pathogens at their institutes. For CTR biological weapons proliferation prevention projects in Russia, however, DOD has no implementing agreement. These projects are implemented through the International Science and Technology Center in Moscow. Until it can conclude a biological threat reduction implementing agreement with the Russian government, DOD has limited the types of projects it initiates in Russia. Risks to CTR projects can increase when DOD begins implementation before the necessary agreements are in place with CTR recipient governments. After more than 10 years, Russia and DOD have yet to negotiate a transparency agreement that would allow U.S. personnel access to the CTR-funded fissile material storage facility at Mayak to ensure that it is being used as intended. DOD designed and built the facility to provide centralized, safe, secure, and ecologically sound storage for weapons-grade fissile material from dismantled Russian nuclear warheads. In December 2003, DOD completed the CTR-funded Mayak facility at a cost of about $335 million, and the Russian government assumed full responsibility for its operation and maintenance. 
Although the Russian government has pledged its commitment to transparency, it has not signed an agreement with DOD. Therefore, the United States has no reasonable assurance that Russia will only use the facility to store materials from dismantled nuclear weapons and not reuse the materials. According to CTR program officials, the Russian government may soon begin storing nuclear materials at the Mayak facility without an agreement in place. We first raised concerns about the lack of a transparency agreement for the Mayak facility in 1994. Later, in April 1999, we voiced concerns that the United States still lacked clear assurances that Russia would use the Mayak facility in a manner consistent with all U.S. national security objectives for the project. Furthermore, two CTR project failures in Russia illustrate the consequences of DOD not having the necessary agreements in place (see app. I for additional information). In the early 1990s, DOD agreed to assist Russia in constructing a facility to dispose of liquid missile propellant, known as heptyl, which had been drained from intercontinental and submarine-launched ballistic missiles. DOD spent nearly $95 million over 10 years to build a facility to destroy the heptyl, only to learn in January 2002 that Russia had diverted the heptyl to its commercial space program, rather than storing it for eventual destruction. As a result, the facility was never used. The DOD IG reported in 2002 that CTR program officials negotiated a weak implementing agreement with the Russian government. Specifically, the agreement did not require the Russian government to provide the heptyl or provide access for CTR program officials to inspect the heptyl storage facilities and verify the quantities present. Similarly, DOD had agreed in the early 1990s to build a facility in Russia to dispose of solid rocket motors from dismantled missiles. 
DOD spent almost $100 million over nearly 10 years to design the facility, despite the concerns of local residents about the possible environmental impact. In January 2003, Russian officials notified DOD that the regional government had denied the land allocation permit necessary to begin construction because of the opposition from local residents. As a result, DOD never began construction on the facility. The DOD IG found that the implementing agreement for the design of the solid rocket motor elimination facility at Votkinsk failed to specify Russian responsibilities for the project. Chief among these, the Russian government was to obtain the necessary land allocation permits. CTR officials accepted in good faith that Russia would help implement program objectives and therefore assumed that they did not need to document the Russian government's responsibilities. In addition, despite local protests against construction of the facility from the beginning of the project, DOD project managers did not identify land allocation as a potential risk until April 2002. Even after DOD concludes appropriate agreements, however, risks still may exist because of the control environment of the recipient governments. For instance, if a recipient government has a poor control environment, the risk increases that agreed-to objectives and conditions will not be met. A good control environment requires that an organization's structure clearly define key areas of authority and responsibility. When the Russian government reorganized in early 2004, it was uncertain which agencies and officials were in charge of working with DOD. While the names of some agencies had merely changed, other agencies were subsumed into larger organizations or completely dissolved. According to CTR program officials, the reorganization had a significant impact on program implementation. For example, the CTR Policy Office is renegotiating its implementing agreements to reflect the new Russian government entities. 
CTR projects also experienced delays when the Russian government reorganized the committee that granted tax exemptions and resolved customs issues for all CTR assistance entering Russia. Work on the CTR-funded chemical weapons destruction facility in Russia was delayed until needed equipment was cleared through customs. Furthermore, CTR-recipient governments may not provide adequate access to project sites or may pursue priorities that compete with CTR program objectives. DOD's inability to gain access to all sites where CTR assistance is provided has been an issue since the CTR program began in 1992. The U.S. government has been concerned with its ability to examine the use of its CTR-provided assistance, while CTR-recipient countries have security concerns regarding U.S. access to sensitive sites. For example, as we reported in March 2003, DOD had made only limited progress installing security upgrades at Russian nuclear weapons storage sites and former biological weapons facilities because Russia would not provide DOD access to several sites. Since March 2003, Russia has granted DOD access to some nuclear weapons storage sites but continues to restrict access to some former biological weapons facilities. While CTR program officials monitor the progress of ongoing projects, DOD has no mechanism to monitor and evaluate the results of completed projects to determine whether they met program objectives. According to internal control standards, monitoring should assess the quality of project performance over time. Conducting program evaluations, such as reviewing completed CTR projects, may be warranted after major changes in management plans. DOD does not conduct final evaluations of completed CTR projects and currently has no mechanism to document lessons learned and apply them to future project planning and implementation. 
At its inception, the CTR program primarily provided equipment to recipient countries, but now the vast majority of assistance is provided through contracted services. Although the program has shifted to funding costly, complex, and sometimes high-risk projects that can last for many years, DOD has not expanded the scope of its project monitoring process to include evaluations of the efficiency and effectiveness of CTR projects upon their completion. In June 2001, we recommended that DOD conduct such evaluations to improve DOD's overall program oversight. In response, DOD agreed to periodically assess the efficiency and effectiveness of CTR assistance, including contracted services. However, DOD lacks a final review process to assess the efficiency and effectiveness of completed CTR projects. As of June 2005, DOD had completed 77 projects, but program officials did not evaluate and record what went well during a project's implementation and what could have been improved to better meet program objectives. While CTR officials discuss the performance of individual ongoing projects through the MDA process, senior CTR management officials acknowledged that projects are not evaluated upon their completion and that such information is not shared programwide in a systematic manner. As a result, it is difficult to apply lessons learned to future CTR projects as they are being planned and implemented and to avoid past mistakes. Officials stated that conducting final evaluations could further improve their management of the CTR program, especially as the program expands into countries outside the former Soviet Union. Since DOD does not assess the efficiency and effectiveness of projects as they are completed, it cannot apply the lessons learned from such evaluations to new and ongoing projects in a systematic way. Since 1992, CTR assistance has helped the states of the former Soviet Union eliminate and protect their weapons of mass destruction. 
Although the CTR program has helped reduce the threat that these weapons could be stolen or misused, failures such as the heptyl disposition and solid rocket motor elimination projects demonstrated significant problems with DOD's program management. In the aftermath of these incidents, DOD has worked to revamp its CTR program management to achieve greater assurance that projects are implemented according to program objectives. By standardizing its management approach and applying it consistently across all CTR program areas, DOD is improving its management of the CTR program. DOD has greater assurance that all stakeholders, including recipient governments, are involved in project implementation. CTR program and project managers have clearer guidance on how to conduct their work and report on it. Furthermore, DOD has made progress in more clearly articulating and documenting its cooperative arrangements with CTR-recipient countries, as well as holding recipient governments more accountable for implementing the CTR projects in their respective countries. These improved controls cannot eliminate the risks inherent in the program, but the goal is to mitigate risk to an appropriate level given the circumstances. Most significantly, the success of the CTR program requires the cooperation of recipient governments. Good internal controls help mitigate the risks of having to rely on recipient governments to sign agreements, provide access, and support project implementation. Still, governments can change their project goals, deny access to U.S. contractors and officials, or withhold permits needed to allow work to proceed. DOD's more robust internal controls have helped minimize the impact of these actions, but they cannot guarantee a project's success. The U.S. government remains concerned about its ability to determine how CTR-provided assistance is being used, while CTR-recipient countries continue to have security concerns regarding U.S. 
access to their sensitive facilities and sites. In addition, while DOD has made progress over the past 2 years in improving its management of the CTR program, it still does not review the overall performance of projects upon their completion. As projects are completed, assessing and documenting lessons learned will allow DOD to further improve CTR project implementation. As the CTR program completes more projects and the program begins to expand beyond the former Soviet Union, such a mechanism will become more important to overall program management. We recommend that the Secretary of Defense conduct performance reviews upon the completion of CTR projects. Such reviews would provide a mechanism for documenting lessons learned and applying them to future project planning and implementation. DOD provided comments on a draft of this report, which are reproduced in appendix VI. DOD concurred with our recommendation that reviews of completed CTR projects should be conducted to document and apply lessons learned. DOD also provided technical comments, which we have incorporated where appropriate. We are providing copies of this report to the Secretary of Defense and other interested congressional committees. We will also make copies available to others upon request. In addition, this report will be available on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8979 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. By 2003, two CTR program project failures caused DOD to reassess its management of the program. In the early 1990s, DOD agreed to assist Russia in constructing a facility to dispose of liquid missile propellant, known as heptyl, and build a solid rocket motor disposition facility. 
However, DOD terminated these projects after spending nearly $200 million over almost a decade. In the case of the heptyl disposition facility, DOD spent more than $95 million over 10 years on the facility at Krasnoyarsk, Russia, that was never used. In 1993, the Russian government asked for CTR assistance to dispose of heptyl from intercontinental and submarine-launched ballistic missiles that were being destroyed in compliance with arms control agreements. At the time, Russian government officials claimed that existing heptyl storage facilities were full and that they needed a way to dispose of the propellant, according to DOD officials. DOD officials also stated that Russian officials had told them that the heptyl could not be used for the Russian commercial space program. However, when CTR officials were ready to test the almost completed facility in January 2002, officials from the Russian Aviation and Space Agency revealed that nearly all of the heptyl had been diverted to the commercial space program. In February 2003, the Deputy Secretary of Defense approved the dismantlement and salvage of the no-longer-needed heptyl disposition facility. According to the DOD IG, a variety of inadequate management controls contributed to the heptyl project failure. The IG reported that AT&L was not assuming its role in managing the CTR program by providing input and direction for projects. Rather, the CTR Policy Office, which had little experience in following DOD acquisition guidelines, establishing milestones, and identifying risks, was managing daily CTR project activities. Because AT&L was not performing adequate oversight of the program, CTR program officials negotiated an implementing agreement without specific programmatic commitments from the Russian government and did not thoroughly identify the risks associated with eliminating the heptyl. 
Specifically, the agreement did not require the Russian government to provide the heptyl or provide access for CTR program officials to inspect the heptyl storage facilities and verify the quantities present. CTR officials accepted in good faith that Russia would provide the heptyl and therefore assumed that they did not need to document or oversee the Russian government’s responsibilities. In assessing the risks of the heptyl project, CTR project officials failed to identify the possibility that the Russian government would use the heptyl for other purposes and therefore developed no mitigation plan. A second project failed in January 2003. After spending almost 10 years to design the facility at Votkinsk to destroy solid rocket motors, CTR program officials ended the project at a cost of almost $100 million. In the early 1990s, Russia had requested CTR assistance to destroy motors from dismantled missiles in compliance with an arms control agreement. Originally, the facility was to be located at Perm, but the planned construction of the facility generated environmental opposition from local residents. The facility was thus moved to Votkinsk in February 1998, where local residents, also concerned about the facility’s environmental impact, began protests. Still, CTR program officials continued with the design of the facility, remaining optimistic that the regional government would issue the required permits regardless of opposition. Officials from the Russian Aviation and Space Agency told CTR program officials in July 2002 that land for the facility would be allocated no later than September 2002. In a January 2003 letter, however, Russian officials notified DOD that the regional government had denied the land allocation permit due to the opposition from local residents. Inadequate management practices also contributed to the failure of the solid rocket motor disposition project at Votkinsk. 
As with the failed heptyl project, the DOD IG reported that AT&L did not assume its management role in overseeing the CTR program. The CTR Policy Office was managing daily CTR project activities. The implementing agreement for the Votkinsk project failed to specify Russian responsibilities, such as obtaining the necessary land allocation permits. In addition, despite the local environmental protests against construction of the facility from the beginning, project managers did not identify land allocation as a risk until April 2002. Furthermore, the contracting processes that were in place contained no mechanism to terminate the project when costs increased and the schedule was delayed. DTRA awarded the project contract for the complete design and construction of the facility rather than contracting in phases so that possible CTR program losses could be minimized. As required by section 3611 of the National Defense Authorization Act for Fiscal Year 2004, we reviewed the status of DOD’s implementation of legislative mandates covering the CTR program. Since 1992, Congress has passed 25 pieces of legislation that guide CTR project activities. Specifically, Congress has established a series of (1) requirements that must be met before DOD can fund CTR projects, (2) conditions on CTR expenditures, and (3) reporting requirements on the CTR program and project implementation. Figure 3 illustrates the types of congressional legislation covering the CTR program from fiscal year 1992 to 2004 and includes those legislative requirements that have lapsed. Over the years, DOD has mostly complied with these requirements, except for several occasions when it was late in providing required reports to Congress. Legislation has recently been proposed that would repeal some DOD requirements. Congress has established a variety of requirements that must be met before DOD can fund CTR projects. 
For example, in establishing the CTR program in 1991, Congress required that CTR assistance provided to the countries of the former Soviet Union could not be expended until the President certified to Congress that the recipient governments were committed to reducing their weapons arsenals. According to CTR officials, verifying CTR program compliance with legislation can be a time-consuming process and may delay the implementation of projects, but they cannot spend CTR funds unless all legislative conditions are met. DOD officials involved with managing the CTR program recognize that Congress is exercising its oversight responsibilities over the CTR program. Congress has also placed limits or conditions on how DOD can spend CTR money. For instance, in 2000, Congress placed conditions on CTR money to prohibit spending in certain areas, such as conventional weapons destruction and housing for retired or current members of CTR-recipient countries’ military forces. Also in 2000, Congress halted CTR funding for construction of the Russian chemical weapons destruction facility at Shchuch’ye until fiscal year 2004, when it granted a waiver. Congress also requires DOD to submit reports on overall program implementation, as well as specific projects. Since the beginning of the CTR program, DOD has mostly complied with its congressional reporting requirements. However, as we previously reported, from 1994 through 1999 DOD was late in providing its annual report, which accounts for CTR assistance. Specifically, DOD was 16 months late in submitting its report for 1997 and more than 10 months late in submitting its report for 1998. Beginning in fiscal year 2001, the reporting requirement to account for CTR assistance became part of the annual CTR report. For fiscal years 2002 through 2004, DOD provided its annual CTR report to Congress late. However, DOD provided its annual report to Congress for fiscal years 2005 and 2006 mostly on time. 
Recently, some members of Congress have introduced bills that may lessen the legislative burden on the CTR program. In February 2005, Senator Lugar introduced the Nunn-Lugar Cooperative Threat Reduction Act of 2005. The bill, among other actions, would repeal some of the restrictions that Congress had previously placed on the CTR program. If enacted, the bill would remove (1) a Presidential certification requirement for all CTR recipient countries to receive CTR assistance and (2) the funding constraints placed on the construction of the CTR-funded chemical weapons destruction facility in Russia. In February 2005, members of the House of Representatives introduced the Omnibus Nonproliferation and Anti-nuclear Terrorism Act of 2005. This bill also includes a provision for the repeal of the same restrictions outlined in the bill introduced by Senator Lugar. To assess DOD’s management and internal controls over the CTR program, we collected and analyzed DOD documents, including CTR project plans, briefings, annual reports, and milestone decision authority memorandums. We also obtained and analyzed all legislation passed since 1992 that covers the CTR program. We applied the internal control standards described in GAO’s Standards for Internal Control in the Federal Government. We focused on those controls most relevant to the CTR program, including organizational structure, risk assessments, performance measures, program reviews, communications, and monitoring of projects. We also reviewed DOD acquisition management guidance as contained in the Defense Threat Reduction Agency’s Instruction 5000.01 for our assessment of CTR management controls. Using the federal government standards and DOD’s guidance, we developed and tested a semi-structured interview guide that included questions regarding DOD’s internal controls for the CTR program. 
We included steps in the development and administration of the semi-structured interview guide to minimize errors resulting from the respondents’ interpretation of the questions or from differences in information available to respondents answering the questions. We pretested the instrument with three DOD officials. In addition, an internal survey specialist reviewed our semi-structured interview guide. We modified the interview guide to reflect the questions and comments from the pretests and internal review. We used the semi-structured interview guide to interview 30 DOD officials responsible for managing and implementing the CTR program. We also held meetings with 17 other officials. Specifically, we met with officials from the CTR Policy Office, AT&L, and DTRA’s Business and Cooperative Threat Reduction (CT) directorates. Within CT, we obtained information from the director, deputy director, program and project managers from all five program areas, and officials from the Program Integration office. In addition, we met with officials from DTRA offices in Moscow and Almaty and the Threat Reduction Support Center in Springfield, Virginia. We traveled to the Russian Federation to observe CTR projects involving strategic offensive arms elimination and biological weapons proliferation prevention. We met with Russian officials at the Federal Space Agency, the Federal Agency for Industry, and the Federal Atomic Energy Agency. We also visited the Republic of Kazakhstan to observe CTR-funded projects involving biological weapons proliferation prevention. While in Russia and Kazakhstan, we met with representatives from all five CTR Integrating Contractors to obtain information on their roles in implementing CTR projects. We also reviewed our prior work on the CTR program. 
Although information about funding for the CTR program and the program’s accomplishments is used for background purposes only, we assessed the reliability of these data by reviewing relevant agency documents and obtaining information from agency officials. We determined that the data used were sufficiently reliable for the purposes of this report. We performed our work from April 2004 through May 2005 in accordance with generally accepted government auditing standards. Since 1992, Congress has authorized DOD to provide more than $5 billion for the CTR program to help the former states of the Soviet Union, including Russia, Ukraine, Belarus, Kazakhstan, Uzbekistan, Azerbaijan, Moldova, and Georgia, secure and eliminate their weapons of mass destruction and prevent their proliferation. As of April 2005, DOD has obligated about $4.5 billion in support of the CTR program. Of this obligated amount, about $2.7 billion funds projects being implemented under CTR’s five program areas of biological weapons proliferation prevention, chemical weapons elimination, nuclear weapons safety and security, strategic offensive arms elimination, and the weapons of mass destruction proliferation prevention initiative, as shown in figure 4. The remaining obligations cover completed CTR projects or other program support areas. In managing the CTR program, standards for internal controls in the federal government provide an overall framework for DOD to establish and maintain management controls and identify and address major performance challenges and areas at risk for mismanagement. The five overall standards for internal control are control environment, risk assessment, control activities, information and communications, and monitoring. Each standard contains numerous factors that an organization’s management can use to evaluate its internal controls. 
For example, under the control environment standard, about 30 factors are listed, such as whether an agency’s organizational structure has appropriate and clear internal reporting requirements. For this report, we focused on those factors most relevant to CTR program implementation. The scope of our work thus covered factors such as organizational structure, risk assessments, performance measures, program reviews, communications, and monitoring of projects. Table 1 describes the factors selected in reviewing DOD’s current internal controls for the CTR program. In addition to the contact named above, Dave Maurer, Beth Hoffman León, Josie Sigl, Stephanie Robinson, Nima Patel Edwards, Stacy Edwards, Lynn Cothern, Judy Pagano, and Mark Dowling contributed to this report. Etana Finkler also provided assistance.
Section 3611 of the National Defense Authorization Act for Fiscal Year 2004 mandates that GAO assess the Department of Defense's (DOD) internal controls for the Cooperative Threat Reduction (CTR) program and their effect on the program's execution. In addressing the mandate, we assessed DOD's management and internal controls over implementing CTR projects since 2003 by using the internal control standards for the federal government as criteria. In response to the mandate, we focused on those management and internal control areas considered most relevant to CTR project implementation: (1) management structure, (2) risk assessments, (3) performance measures, (4) program reviews, (5) communications, and (6) project monitoring. The Congress also mandated that GAO describe the status of DOD's implementation of legislative mandates covering the CTR program. Through the CTR program, DOD provides assistance to help the former states of the Soviet Union secure and eliminate their weapons of mass destruction. Since 2003, DOD has improved its management and internal controls over the CTR program. Prior to 2003, DOD had problems managing the program and ensuring that the program was meeting its objectives. These inadequacies became apparent in 2003 following two project failures in Russia that cost the CTR program almost $200 million, including the never-used liquid rocket fuel disposition facility. Following these incidents, DOD implemented a more structured approach to managing the CTR program. In July 2003, DOD filled vacancies in the office responsible for managing the program, providing a level of leadership and oversight that did not previously exist. Once in place, the new leadership made important improvements to the program's internal controls in the areas of organizational structure, risk assessments, performance measures, program reviews, and communication. 
For example, DOD now assesses and balances risks with project requirements and measures project performance at each phase. DOD also conducts semi-annual meetings to review commitments and responsibilities of CTR-recipient governments and to minimize risk. Although enhancing its internal controls helps mitigate the risks that stem from having to rely on the cooperation of CTR-recipient governments, DOD can never fully eliminate the project risks associated with recipient governments' cooperation. Furthermore, while DOD's enhancements are an improvement over previous internal controls, current mechanisms do not include a separate review of CTR projects upon their completion. As such, DOD lacks a system for evaluating projects upon their completion and applying lessons learned to future projects.
The CNMI consists of 14 islands in the western Pacific Ocean, just north of Guam and 5,500 miles from the U.S. mainland. Most of the CNMI population—53,883 in 2010, according to the U.S. Census—resides on the island of Saipan, with additional residents on the islands of Tinian (3,136) and Rota (2,527). The 1976 covenant between the CNMI and the United States established the islands’ status as a self-governing commonwealth in political union with the United States. The covenant granted the CNMI the right of self-governance over internal affairs and granted the United States complete responsibility and authority for matters relating to foreign affairs and defense affecting the CNMI. The covenant also provided U.S. citizenship to certain CNMI residents. Further, the covenant exempted the CNMI from federal immigration laws and certain federal minimum wage provisions. However, under the terms of the covenant, the U.S. government has the right to apply federal law in these exempted areas without the consent of the CNMI government. Acting under this authority, Congress enacted CNRA in 2008 to apply federal immigration laws to the CNMI; in November 2009, the U.S. government began its application of immigration laws to the CNMI. Between 1980 and 2009, the CNMI used its authority over its own immigration policy to bring in foreign workers under temporary renewable work permits and to allow the entry of foreign business owners and their families. Owing primarily to the influx of these workers between 1980 and 2000, the CNMI population increased from about 16,800 in 1980 to 69,200 in 2000, and the CNMI economy became dependent on foreign labor. Most of these foreign workers were employed in the garment and tourism industries, which together accounted for about 80 percent of all employment in the CNMI in 1995. 
However, beginning in the late 1990s, the tourism industry experienced a sharp decline, as total visitor arrivals to the CNMI dropped by more than half, from a peak of 736,117 in 1996 to 340,957 in 2011. In addition, in 1999, the garment industry was central to the CNMI economy and employed close to a third of all workers; however, by early 2009, the last garment factory had closed. Because of the decline in the tourism industry and the departure of the garment industry, employment in the CNMI has fallen. Between 2002 and 2010, the number of foreign workers in the CNMI dropped by more than 60 percent, according to CNMI Department of Finance tax data. CNMI tax data also show that in 2010 there were 14,958 foreign workers and 11,336 U.S. workers in the CNMI, with foreign workers outnumbering U.S. workers in all industries but government and banking and finance. Figure 1 shows the numbers of foreign and U.S. workers in the CNMI labor force from 2002 through 2010. As the number of foreign workers has declined, the CNMI’s real gross domestic product (GDP) has also fallen sharply, declining by 49 percent between 2002 and 2009. Figure 2 shows the CNMI’s real GDP from 2002 through 2010. The CNMI government’s revenues have also fallen by 45 percent, from $240 million in fiscal year 2005 to an estimated $132 million for fiscal year 2011. In addition, the cost of labor in the CNMI increased after application of the federal minimum wage began there in 2007. Labor costs may increase further because of the potential application of Federal Insurance Contributions Act (FICA) payroll taxes to some previously exempt workers. CNRA’s stated intent in establishing federal immigration law in the CNMI is in part to minimize, to the extent practicable, any potential adverse economic and fiscal effects of phasing out the CNMI’s own foreign worker permit program and to maximize the CNMI’s potential for economic and business growth. 
To that end, CNRA provides, among other things, opportunities for individuals authorized to work in the United States, including citizens of the Freely Associated States, and provides a mechanism for the continued use of foreign workers as needed to supplement the CNMI’s resident workforce. (See app. II for the complete statement of congressional intent.) CNRA required that DHS, DOL, and DOI take the following actions, among others: DHS. CNRA required that the Secretary of Homeland Security establish a transitional work permit program for foreign workers in the CNMI during the initial 5-year transition period. In administering this program, the Secretary must determine the number, terms and conditions, and fees for the permits. The Secretary must also provide for an annual reduction in the allocation of permits for foreign workers that results in zero permits by the end of the transition period; however, the length of the transition period can be repeatedly extended by the Secretary of Labor (see the following subsection). The system for allocating permits may be based on any reasonable method and criteria determined by the Secretary of Homeland Security. CNRA also requires DHS to collect from CNMI employers $150 per worker per year, which the Secretary must transfer to the Treasury of the CNMI for the purpose of funding ongoing vocational educational curricula and program development in the CNMI by CNMI educational entities. In adopting and enforcing the program, DHS must consider in good faith any comments and advice submitted by the Governor of the CNMI within 30 days of receipt. DOL. 
CNRA required that the Secretary of Labor, in consultation with the Secretaries of Homeland Security, Defense, the Interior, and the Governor of the CNMI, ascertain the current and anticipated labor needs of the CNMI and determine whether an extension of the transition period by up to 5 years at a time is necessary to ensure that an adequate number of workers will be available for legitimate businesses in the CNMI. DOL must determine the extension no later than 180 days before the end of the transition period. DOL is to base its decision on the labor needs of legitimate businesses in the CNMI and may consider a number of factors, such as CNMI unemployment rates and efforts to train U.S. citizens, lawful permanent residents, and unemployed foreign workers in the CNMI. (See app. III for a list of the factors that CNRA states DOL may consider.) DOI. CNRA required that the Secretary of the Interior, in consultation with the Governor of the Commonwealth, the Secretary of Labor, the Secretary of Commerce, and CNMI private sector representatives, provide technical assistance to the CNMI, including assistance for recruiting, training, and hiring of workers to assist employers in the CNMI to hire U.S. citizens and nationals and legal permanent residents. Technical assistance must also be provided for identifying economic opportunities and for job skills identification and curricula development. In addition, CNRA required that the Secretaries of Homeland Security, Labor, the Interior, and State negotiate and implement interagency agreements to assign their respective duties in order to ensure the timely and proper implementation of CNRA requirements. In our August 2008 report, we found that the interaction of DHS and DOL decisions—on how many CNMI transitional work permits to allocate annually and whether to extend the transition period, and therefore the CNMI transitional work permit program, past 2014—would significantly affect employers’ access to foreign workers. 
We also found that because of the prominence of foreign workers in the CNMI labor market, any substantial and rapid decline in permits for foreign workers would have a negative effect on the size of the CNMI economy. However, a more modest reduction in the annual permit allocations would result in minimal effects on the CNMI economy. To illustrate a range of possible effects on the CNMI economy given varying rates of reduction in the annual allocation of CNMI transitional work permits for foreign workers, for our 2008 report we generated simulations that estimated the impact on the CNMI’s economy, as measured by an index representing total GDP. We generated these simulations for several scenarios, using a range of assumptions regarding the effect of a reduction of labor on production and the ability of the CNMI resident workforce to substitute for the foreign workers in production. In the first scenario, we found that a steep reduction in the transitional work permits for foreign workers—from 20,000 in 2007 to 1,000 by 2021—would lower the CNMI’s GDP to a range of about 21 percent to 73 percent of its 2007 value by 2021. In the second scenario, we found that a less precipitous decline in the transitional work permits—from about 20,000 in 2007 to about 8,000 by 2021—would lower the CNMI’s GDP to a range of about 64 percent to 85 percent of its 2007 value by 2021. In the third scenario, we found that a much smaller decline in the transitional worker permits—from 20,000 in 2007 to 17,000 by 2021—would lower the CNMI’s GDP to a range of about 92 percent to about 98 percent of its 2007 value by 2021. On September 7, 2011, DHS issued a final rule to implement a CNMI transitional work permit program, as required by CNRA, for foreign workers who would not otherwise be admissible under federal law. The final rule creates a classification of CW-1 status for transitional foreign workers and a CW-2 status for dependents of CW-1 workers. 
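The report does not publish the underlying simulation model, but the rough logic of the three permit-reduction scenarios described above can be sketched with a toy calculation. The production-function form, labor share, and substitution rate below are invented assumptions for illustration only, chosen so that the outputs fall inside the reported ranges; they are not GAO's actual model or parameters.

```python
# Illustrative sketch only -- NOT GAO's actual simulation model.
# Assumes a Cobb-Douglas-style production index in which foreign labor
# declines and resident workers substitute for a fraction of the loss.
def gdp_index(permits_2021, permits_2007=20_000, labor_share=0.7,
              substitution=0.3):
    """Return GDP as a fraction of its 2007 value (hypothetical parameters)."""
    # Share of the original foreign workforce lost by 2021
    lost = (permits_2007 - permits_2021) / permits_2007
    # Effective labor after resident workers partially substitute
    effective_labor = 1.0 - lost * (1.0 - substitution)
    return effective_labor ** labor_share

# Three permit-reduction scenarios loosely mirroring the report's ranges
for final_permits in (1_000, 8_000, 17_000):
    print(final_permits, round(gdp_index(final_permits), 2))
```

Under these assumed parameters, the three scenarios yield index values of roughly 0.47, 0.68, and 0.93 of the 2007 level, each inside the corresponding range reported above. Varying the substitution and labor-share assumptions moves each result toward the low or high end of its range, which is one reason the report presents ranges rather than point estimates.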
DHS delayed issuing a final rule for almost 2 years, following a court injunction on its October 2009 interim rule that prohibited it from issuing a final rule until it followed notice-and-comment rulemaking procedures to consider public comments on its interim rule. DHS received 146 public comments on the interim rule and made some changes in the final rule in response to those comments. (For more information on DHS’s actions to issue the final rule and respond to public comments on its interim rule, see app. V.) As required by CNRA, DHS’s final rule on the CNMI transitional work permit program addresses (1) the number of permits to be issued; (2) the terms and conditions for the permits; and (3) the fees for the permits, including a $150 vocational education fee. United States Citizenship and Immigration Services (USCIS), a component of DHS, is responsible for processing petitions for these permits. Table 1 describes how DHS’s final rule addresses these implementation decisions. DHS’s methodology for the allocation of CW-1 permits for fiscal years 2011 and 2012 included considering, among other factors, requests by the CNMI governor (as required by CNRA) not to reduce the number of permits in the program’s first 2 years; CNRA’s mandate that it reduce the number of permits annually; and the potential demand for the permits, according to DHS’s final and interim rules for the program. DHS’s final rule states that DHS set the fiscal year 2011 allocation of 22,417 CW-1 permits based on the CNMI government’s estimate of the number of foreign workers in the CNMI when CNRA was enacted in May 2008. 
The final rule also states that DHS’s reduction of the number of permits by one permit for fiscal year 2012 was intended to (1) effectively maintain a steady level of permits available to CNMI employers for the first 2 years of the CW-1 permit program and (2) accommodate potential demand for the permits expected because CNMI government-issued work permits for foreign workers were set to expire in the first quarter of fiscal year 2012. Further, DHS’s interim rule stated that when making its permit allocation, DHS considered the Governor’s request not to reduce the number of permits in the program’s first 2 years as well as CNRA’s requirement that DHS reduce the number of permits on an annual basis. DHS’s USCIS had processed 49 percent of the petitions it had received for fiscal year 2012 as of July 1, 2012, approving 45 percent and denying or rejecting about 5 percent. USCIS received 5,777 petitions for 11,830 CW-1 permits—about half of its annual allocation of 22,416 permits for fiscal year 2012. According to USCIS officials, USCIS plans to complete an initial review of all of the CW-1 petitions by September 1, 2012, and to process all of the petitions by December 31, 2012. Employers who have filed petitions that are still pending are permitted to continue to employ workers who were living and working lawfully in the CNMI at the time the petition was filed. Figure 3 shows the status of petitions for CW-1 permits that USCIS received between October 1, 2011, and June 30, 2012. USCIS received almost 93 percent of the CW-1 petitions in November and December 2011, shortly before and after the expiration of temporary CNMI-government-issued authorizations for foreign workers on November 28, 2011. Since November 2011, USCIS has substantially increased the number of CW-1 petitions it has processed each month, processing over 850 petitions in May 2012 compared with only 40 petitions in November 2011 (see fig. 4). 
According to USCIS officials, USCIS has encountered two main challenges in completing the processing of CW-1 petitions: Many CW-1 petitions require further evidence from the petitioner. For example, some petitions do not contain evidence that the petitioner is engaged in a legitimate business and has considered all U.S. workers for the position, among other things. As of July 2012, USCIS had requested additional evidence for 3,200 petitions, or more than half of the CW-1 petitions submitted in fiscal year 2012. The office responsible for obtaining biometric information, such as fingerprints, from workers seeking a CW-1 work permit has had difficulty meeting added workload demand. USCIS only has one office in Saipan that is able to obtain biometric information from workers seeking CW-1 status, and it is staffed to capacity. To help meet the demands of obtaining biometric information from workers seeking CW-1 status in remote areas, USCIS deployed teams of two staff members to the outlying islands of Rota for 9 business days and Tinian for 16 business days. USCIS has also trained additional officers at the USCIS California Service Center to help process, as needed, petitions and applications related to the CW-1 work permit program. DHS officials stated that they are working to determine the permit allocation for fiscal year 2013 and will announce the allocation in the Federal Register by October 1, 2012. CNRA requires that DHS provide for an annual reduction in the number of permits. DHS’s final rule states that DHS will determine the permit allocation on an annual basis and announce it in the Federal Register. DOL officials told us that, as of July 2012, the department had not decided whether or when it will extend the transition period. Officials said the department does not expect to announce a decision before the end of 2012 and pointed out that DOL is not required by CNRA to make its determination until July 5, 2014. 
DOL officials said they plan to comply with CNRA requirements to consult DHS, DOI, the Department of Defense (DOD), and the Governor of the CNMI before making the determination. DHS has identified a single source of CNMI workforce data that it intends to use for its annual allocation of transitional work permits and has not made public the types of information, such as its methodology, that it will publish with future permit allocations. In commenting on a draft of this report, DHS stated that it will provide its methodology for its fiscal year 2013 permit allocation upon publishing the allocation. In contrast, DOL has identified multiple sources of data on the CNMI labor market and plans to conduct a number of analyses of these data in making its determination of whether and when to extend the transition period. DHS has identified one data source that it will use for implementing CNRA requirements, including its annual permit allocation decision, according to a department official. In 2008, we recommended that DHS develop a strategy for collecting data to help it meet CNRA goals. According to the DHS official, the department subsequently identified one data source that it will use to help it implement CNRA requirements: the U.S. Census Bureau's County Business Patterns, which provides data on the number of paid workers, businesses, and payroll by industry in the CNMI annually. However, the source does not provide numbers of U.S. and foreign workers by industry, which could help DHS predict future demand for foreign workers. DHS has also not made public the methodology it will use in allocating CW-1 permits for fiscal year 2013, and the September 2011 final rule does not state whether DHS will publish its methodology for future allocations. 
The final rule states that DHS will assess the CNMI's workforce needs annually when determining the annual permit allocation. According to a senior DHS official, the department has not made public the types of information—including its methodology for the allocation—that it will publish with future permit allocations. In commenting on a draft of this report, DHS stated that it will provide the methodology for the fiscal year 2013 permit allocation upon publishing the allocation. The final rule states that DHS believes publishing the allocation annually in the Federal Register will provide sufficient notice to the public of the permit allocation. However, DHS also notes that economic analysis is hampered by significant uncertainty regarding future demand for foreign workers and by a general lack of CNMI economic and production data that would allow DHS to estimate the impact of the final rule on the broader CNMI economy. In June 2012, DOL outlined a strategy for obtaining data on the CNMI labor market that it will use to determine whether to extend the transition period. DOL outlined this strategy in responding to a recommendation in our 2008 report that it develop a strategy for collecting data to help it meet CNRA goals. This strategy includes the data source DHS identified to help it implement CNRA requirements as well as other data sources, one of which identifies the number of workers by citizenship and industry in the CNMI. According to DOL's strategy, the department will review multiple data sources, including the following: County Business Patterns (U.S. Census Bureau), for which DOL will request data for 2008 and 2011; the U.S. Census (U.S. Census Bureau), for which DOL will request data for 2010 that includes, among other types of data, social and demographic data by age, sex, race, household relationship, and household type; and CNMI tax records (CNMI Department of Finance), for which DOL will request data for 2008 through 2011 that includes, among other types of data, the number of workers in the CNMI by citizenship, industry, and occupation. 
Regarding DOL’s methodology, DOL officials indicated that the department plans to analyze the data identified in its strategy, as well as other types of information, to address the factors that CNRA suggested DOL consider when making its determination (see app. III for a listing of the factors suggested by CNRA). For example: DOL plans to request from the U.S. Census Bureau special cross tabulations from the 2010 U.S. Census data by citizenship status, employment status, and place of birth to help it estimate the current unemployment rate of different populations within the CNMI. DOL plans to use the U.S. Census Bureau’s County Business Patterns data, CNMI Department of Finance tax data, U.S. Bureau of Economic Analysis estimates of the CNMI’s GDP, and government studies to help it determine CNMI businesses’ need for foreign workers. DOL plans to conduct structured interviews with representatives and officials from the CNMI Office of the Governor, officials from the CNMI Department of Labor, officials from and members of the Saipan Chamber of Commerce, and CNMI worker advocates. DOL expects that these interviews will help it identify efforts by CNMI businesses and the CNMI Department of Labor to prepare U.S. residents to assume jobs typically held by foreign workers in the CNMI; provide evidence that may indicate whether U.S. citizens or lawful permanent residents are willing to accept jobs offered; and determine whether there is a need for foreign workers to fill specific industry jobs. DOL plans to estimate changes in employment and in the GDP in each CNMI industry sector that might result from a reduction in the number of foreign workers. DOL officials noted that the department’s plans to obtain these data and conduct various analyses are subject to budget and staffing constraints and to the complete and timely fulfillment of its various data requests. 
DOL officials also said that they will solicit feedback in the near future from DHS, DOI, DOD, and the Governor of the CNMI on DOL's data collection strategy and the additional analyses DOL plans to conduct using these data. The estimated number of foreign workers with a valid or pending CW-1 permit petition includes those whose petitions were initially rejected because they lacked a signature or the correct fee; in such cases, petitions can be resubmitted with the correct permit fee and/or signature. eligible for employment authorization to work in the CNMI; 134 foreign workers (about 1 percent of all foreign workers) with valid nonimmigrant worker or investor status. These statuses include H1B, H2B, L1A, R1, E2, and E2C; E2 and E2C investors were included because they are authorized to work in the CNMI. Officials and members of the Saipan Chamber of Commerce (Chamber) stated that businesses face difficulties in finding U.S. workers to replace foreign workers. According to Chamber officials, the number of U.S. workers available to replace foreign workers in the CNMI is limited. Officials said that U.S. citizens graduating from high school are more likely to leave the CNMI to obtain a university degree and not return because of the limited availability of jobs, the high cost of living in the CNMI, and the higher earning potential elsewhere. See text box for examples of comments from CNMI businesses. Comments from CNMI Businesses on Challenges of Finding Replacements for Foreign Workers: “There simply not enough employable U.S. eligible worker in the CNMI. . . . We have lost two workers who have resigned and decided to return to the Philippines. It been very difficult to replace them.” “If a local person wants a degree in computer science, would have to travel to the United States and spend at least 4 years in college. After receiving a degree, these local people do not want to return to the CNMI. . . . Their salaries would be less than half of . . . . 
If we are forced to send all of our highly trained personnel back to the Philippines, then I will be forced to close our doors and go out of business.” “There simply will not be enough able-bodied individuals entering the workforce to replace the departing workers.” “We have already tried working with employees who are U.S. citizens. This has not been particularly successful in our profession, because typically the individuals available are of a lower professional standard, or entry level. . . . U.S.-based professionals in our field are not willing to move to the CNMI.” CNMI businesses’ uncertainty about future access to foreign workers, due to the limited availability of information regarding future work permit allocations and any extension of the transition period, may be creating disincentives for investment. Economics literature on the effects of uncertainty suggests that government policies and regulations can become a source of uncertainty. When facing uncertainty, businesses tend to delay investments that cannot be easily recovered, such as investments in expanding a hotel or building a new golf course; by delaying investments, businesses keep their options open until more information becomes available. For example, if a hotel in the CNMI were trying to decide whether to expand its operation by adding new rooms, it currently would face significant uncertainty regarding its future cost structure and its access to foreign workers. In 2009, labor costs accounted for approximately one-third of a hotel’s operating cost, according to a survey of CNMI businesses that we conducted. Without access to foreign workers, a hotel might have to increase its wage rate significantly to attract enough U.S. workers with the specific skills needed, such as the ability to speak a foreign language. 
Facing such uncertainty regarding future payroll costs, the hotel might decide not to invest until it has more information about its continued access to foreign labor. Consistent with the economics literature, Chamber officials and some Chamber members identified uncertainty regarding pending federal actions that may affect employers’ access to foreign labor as a factor that has limited investment in the CNMI. Specifically, members said that uncertainty has caused them to limit their plans for future investments. The text box below shows examples of comments from CNMI businesses that are members of the Chamber. Comments from CNMI Businesses on Impact of Uncertain Access to Foreign Labor on Investment Plans “The uncertainty over the access to foreign skilled labor is preventing investment in the CNMI. . . . While we aim to maintain a presence in the CNMI and continue to look for investment opportunities, it gets harder and harder. In today’s global economy . . . we are looking elsewhere for further and future investment.” “By not indicating that a permanent fix or an extension is in the plan, [federal agencies have] simply made CNMI more unattractive for investors.” “It is probable we will increase as a precaution to the difficulty in obtaining enough qualified people after 2014.” “The uncertainty with labor force puts my company middle and long-term plans on hold. . . . Our company will not purchase any expensive machinery, construct any new facility, open new work positions, which will mean less business for company’s partners and other local businesses.” Chamber officials also cited limited access to adequate health care in the CNMI, the high cost of living, and the high cost of shipping goods to and from the CNMI as factors that may have affected businesses’ investment decisions in the CNMI. DOL, DOI, and DHS have made available a combined total of about $6.5 million for worker training in the CNMI in fiscal years 2010 through 2012 (see table 2). 
DOL provided funding through annual grants throughout this period; DOI, through a one-time grant in fiscal year 2011; and DHS, through a cash transfer in fiscal year 2012; these funding streams carry different reporting requirements. WIA created a comprehensive workforce investment system that brought together multiple federally funded employment and training programs into a single system, called the one-stop system; one-stop centers serve two customers—job seekers and employers. Pub. L. No. 105-220, 112 Stat. 936 (codified at 29 U.S.C. § 2801 et seq.) According to a DOL official, funds are made available for WIA Adult, Dislocated Workers, and Youth programs annually for 3 years; however, DOL officials said DOL requires 70 percent of the funds to be obligated within the year that the funds are made available. For more information on WIA, see GAO, Workforce Investment Act: Innovative Collaborations between Workforce Boards and Employers Helped Meet Local Needs, GAO-12-97 (Washington, D.C.: Jan. 19, 2012). DOL's WIA grants fund programs in the CNMI for adults, dislocated workers, and youths that provide a broad range of services, including job search assistance, assessment, and training for eligible individuals. Under the terms and conditions of the WIA grants, DOL requires quarterly performance and financial reports from the CNMI agency. DOI awarded the remaining $300,000 to hire an economic advisor in the CNMI. In 2010, we found that DOI had opportunities to better oversee grants and reduce the potential for mismanagement and recommended that DOI evaluate its existing authorities that could be used to ensure more efficient use of funds by insular areas, establish its staffing needs for grant project monitoring, and clarify its grant management policy on the movement of funds between projects. See GAO, U.S. Insular Areas: Opportunities Exist to Improve Interior’s Grant Oversight and Reduce the Potential for Mismanagement, GAO-10-347 (Washington, D.C.: Mar. 16, 2010). 
DHS collects these funds annually from the $150 vocational educational funding fee assessed for each foreign worker on a CW-1 petition. According to a senior DHS official, DHS will continue to collect and transfer these funds to the CNMI Treasury until the end of the transition period. The senior official said that because DHS transfers these funds directly to the CNMI Treasury, the funds are not subject to DHS grant terms or conditions, such as performance or financial reporting requirements. The official also noted that CNRA does not direct DHS to impose any such requirements on the funds. DOL and DOI collect information from the CNMI government regarding job training activities and results funded by, respectively, the DOL WIA grants and the DOI technical assistance grant. We also obtained information from CNMI legislative staff regarding the CNMI governor’s plans to use the funds collected from the DHS work permit program. In two instances, the same organizations are designated to receive vocational educational funding from two or more of the agencies. DOL. The CNMI State Workforce Investment Agency, the recipient of the WIA grants, submits investment performance reports to DOL on a quarterly basis, as required by the grants’ terms and conditions. The agency submitted its most recent performance report, with information on training program participants, characteristics, demographics, and services, in May 2012. According to the DOL grant manager, the services most often provided in the CNMI through WIA’s Adult Worker Program, Dislocated Worker Program, and Youth program are on-the-job training and work experience. Providers of training include the Northern Marianas College, the Northern Marianas Trades Institute, CNMI government agencies, and private businesses. 
Between June 2011 and June 2012, the CNMI State Workforce Investment Agency reported to the DOL grant manager that it had provided training services to 247 individuals in the Adult Program, 17 individuals in the Dislocated Worker Program, and 719 individuals in the Youth Program. According to the DOL grant manager, most of the youths received training services through summer youth programs. The CNMI State Workforce Investment Agency further reported that 50 participants of the Adult Program, 6 participants of the Dislocated Worker Program, and 38 participants of the Youth Program in fiscal year 2011 have obtained employment. DOL said that because of budget restrictions, the DOL grant manager has not performed an on-site monitoring review of the WIA grants to the CNMI since 2006. According to DOL officials, information on the performance of this grant is available to Congress on request. DOI. The CNMI Department of Commerce, the recipient of the $700,000 DOI technical assistance grant for on-the-job training programs, submits a narrative on the status of the grant project to DOI on a semiannual basis, as required by the grant’s terms and conditions. The CNMI Department of Commerce submitted its last narrative report to the DOI grant manager in July 2012. The CNMI Department of Commerce reported to the DOI grant manager that as of June 2012 it had trained 227 U.S. workers with grant funds and that 51 U.S. workers were hired or employed. The June report also documents that the CNMI Department of Commerce had awarded $586,000 of the $700,000 grant to a variety of CNMI business and industry groups and institutions to support their on-the-job training programs. 
According to the CNMI Department of Commerce, recipients of funds from the DOI grant include the CNMI Aquaculture Producers Association, Marianas Resource Conservation and Development Council, Northern Mariana Trades Institute, Society for Human Resources Management, Saipan Sabalu Farmers Market Incorporated, and The Bridge Project. DOI representatives completed at least two site visits to each of the CNMI grant recipients during the last 6-month reporting period. According to a DOI official, information on the performance of this grant is available to Congress on request. DHS. Because DHS’s transfers to the CNMI government of vocational educational funds collected through the CW-1 work permit fee are not subject to reporting requirements, the CNMI government has not submitted any performance or financial reporting on its use of these funds. However, according to CNMI legislative staff, the CNMI governor has communicated plans to the CNMI legislature for the use of $1.5 million of the $1.8 million in DHS-transferred vocational educational funding in fiscal year 2013. According to the plans, the funds will be used to defray qualified expenses at the Public School System ($500,000), Northern Marianas College ($500,000), and Northern Marianas Trades Institute ($500,000). Since the United States applied federal immigration law to the CNMI in 2009, the CNMI economy has remained dependent on foreign workers despite their declining numbers. Part of CNRA’s stated intent is to minimize, to the greatest extent practicable, harm to the CNMI’s economy by providing a mechanism for the continued use of foreign workers as needed and providing opportunities to individuals authorized to work in the United States. In keeping with CNRA’s stated intent, CNRA gives DHS and DOL flexibility to preserve CNMI businesses’ access to foreign workers through DHS’s annual permit allocation and DOL’s ability to extend the transition period. 
DHS published its methodology for its allocations of transitional worker permits for fiscal years 2011 and 2012, stating that the allocations were responsive both to CNRA’s mandate and to potential demand for the permits. However, to date, DHS has not made public the methodology it will be using for the fiscal year 2013 allocation or stated whether it will provide its methodology when publishing future permit allocations beyond fiscal year 2013. Making this information available could help allay CNMI businesses’ uncertainty regarding the unknown impact of DHS’s pending decision on their future access to foreign workers and help mitigate the effect of such uncertainty on CNMI businesses’ investment plans. Despite the limited availability of economic data for the CNMI, DHS has identified one data source to help it implement CNRA requirements, including its annual allocation of the transitional worker permits. Meanwhile, DOL has outlined multiple sources of data that it plans to collect and analyses that it plans to complete to ascertain CNMI labor needs and determine whether to extend the transition period. Using the data DOL collects and the analyses it performs could enable DHS to more accurately assess the CNMI’s workforce needs, and the potential economic impact of its decision, when determining its next permit allocation. Recognizing the need to locate, educate, and train workers to replace departing foreign workers, CNRA requires DOI to provide technical assistance to assist employers in the CNMI in securing employees from among U.S. citizens and national residents. CNRA also requires DHS to collect fees from its permit program for worker training and to transfer those funds to the CNMI Treasury. Currently, information on the use and performance of the DOI grant and DOL grants that fund, among other things, worker training in the CNMI, is available to Congress on request. 
DHS does not collect information on the use and performance of its cash transfers to the CNMI, because these funds are not subject to reporting requirements. We recommend that the Secretary of Homeland Security take the following two actions: 1. to help ensure that CNMI’s businesses are better able to plan for future permit allocations, provide, when publishing DHS’s annual permit allocation, the methodology it used and the factors it considered to determine the number of permits allocated; and 2. to help ensure that DHS has the information it needs to assess the needs of the CNMI workforce as it decides on future permit allocations, use Department of Labor analyses of economic data on the labor needs of the CNMI as a factor in deciding on future permit allocations. We provided a draft of this report to the Departments of State, Homeland Security, the Interior, Labor, and the government of the CNMI for their review and comment. We received written comments from DHS and the government of the CNMI, which are reprinted in app. VI and VII, respectively. We also received technical comments from the Departments of the Interior and Labor, which we incorporated, as appropriate. The Department of State had no comments. Following are summaries of the written comments from DHS and the CNMI government. DHS concurred with our recommendation that it provide, when publishing its annual permit allocation, the methodology it used and the factors it considered to determine the number of permits allocated. DHS agreed that it is appropriate when publishing its annual transitional worker permit allocation to provide at least a brief explanation of how the number was derived. For its fiscal year 2013 allocation, DHS said that it plans to publish a notice that will describe the methodology it used to determine the allocation. 
DHS did not concur with our recommendation that it use DOL analyses of economic data on the labor needs of the CNMI as a factor in deciding on future permit allocations. DHS agreed in general that relevant sources of information should be used and that the items described by DOL may be useful in future transitional worker permit allocation decisions. However, DHS said that it will await further information as to what DOL has to provide. Once the DOL analysis is available, DHS will review it to determine its appropriateness and whether it should be factored into decision making for future permit allocations. We maintain that the data and analyses DOL plans to collect and complete will help DHS to more accurately assess the workforce needs of the CNMI when determining future permit allocations. The CNMI government agreed with both of our recommendations to DHS. The CNMI government also expressed concern with uncertainties related to CNMI labor resources and capacity. For example, the CNMI government said that the CNMI does not have the capacity to effectively track unemployment rates, available and vacant positions, as well as skill sets, which the government said is necessary to forecast the CNMI’s labor pool. The CNMI government also said that it is in full support of the continued use of foreign labor in the CNMI economy. The government said that it believes in the overall intention of the CNRA to reduce foreign workers through the transitional worker permit program and to prioritize U.S. qualified workers in order to stabilize the CNMI economy. The CNMI government said that it supports any reduction of transitional worker permits for fiscal year 2013 to at least 50 percent of the original estimate. We are sending copies of this report to interested congressional committees. We also will provide copies of this report to the U.S. Secretaries of Homeland Security, Labor, the Interior, and State and to the Governor of the CNMI. 
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-3149 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII. The objectives of this review were to (1) assess the status of federal implementation of the Commonwealth of the Northern Mariana Islands (CNMI) transitional work permit program for foreign workers, (2) examine the economic implications for the CNMI of pending federal actions related to this permit program, and (3) provide the status of federal efforts to support worker training in the CNMI. To assess the status of federal implementation of the CW-1 transitional work permit program, we reviewed the Department of Homeland Security’s (DHS) final and interim rules containing regulations implementing the CW-1 work permit program and determined what changes to the interim rule were made in the final rule. We also independently reviewed the 146 public comments DHS received on its interim rule and determined whether DHS captured the comments’ major areas of concern in its discussion of the public comments in the final rule. In addition, we obtained data from DHS’s U.S. Citizenship and Immigration Services (USCIS) on the number of petitions and applications submitted for foreign worker beneficiaries of the CW-1 work permit and their dependents. We interviewed officials from DHS components, including USCIS, U.S. Immigration and Customs Enforcement, U.S. Customs and Border Protection, and USCIS Office of Performance and Quality. We also interviewed officials from the Departments of Labor, State, and the Interior and officials from the Saipan Chamber of Commerce. In addition, we obtained information from the CNMI government. 
To examine the economic implications of pending federal actions on the CNMI economy, we obtained and analyzed tax data from the CNMI Department of Commerce that was originally generated by the CNMI Department of Finance. These data came from W-2 forms prepared annually by employers in the CNMI and submitted to the CNMI Department of Finance’s Division of Revenue and Taxation to support wages paid and taxes withheld. Because employers completed the W-2 form, we were unable to determine how foreign workers who had been granted permanent residency in the CNMI were categorized in the data (i.e., whether as U.S. or non-U.S. workers). Further, to determine the number of foreign workers in the CNMI in 2012, we obtained USCIS data on the number of foreign workers with an approved and/or pending worker status including CW-1, H1B, H2B, L1A, P1B, and R1 and investor status (which also authorizes the investor to work) including E2 and E2C as of June 2012. We also obtained USCIS data on the number of foreign workers with an approved and/or pending Employment Authorization Document (EAD) as of April 2012. To determine whether a worker or investor status was still valid, we took the maximum validity period for each status and examined the number of statuses granted during that time. For example, if a status was valid for up to 3 years, we calculated the number of DHS-approved statuses within a 3-year interval. Similarly, if a status was valid for 1 year, we calculated the number of DHS-approved statuses within a 1-year interval. Further, to determine whether an approved EAD was still valid, we examined the number of EADs approved in a 1-year interval. According to USCIS, in most cases EADs are valid for 1 year. We also obtained USCIS data on the number of aliens residing in the CNMI who had been granted U.S. permanent residency or U.S. citizenship. 
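The validity-interval approach described above can be sketched in code. The following is a minimal, illustrative sketch only: the status types, validity periods, and approval dates shown are placeholder assumptions for demonstration, not the actual regulatory validity periods or USCIS records used in this review.

```python
from datetime import date

# Hypothetical maximum validity periods (in years) by status type.
# These values are illustrative assumptions, not regulatory figures.
VALIDITY_YEARS = {"CW-1": 1, "H1B": 3, "L1A": 3, "E2": 2}

def count_valid_statuses(grants, as_of):
    """Count status grants still within their maximum validity window.

    `grants` is a list of (status_type, approval_date) tuples. A grant is
    counted if it was approved within the last N years, where N is the
    maximum validity period for its status type -- mirroring the report's
    approach of examining statuses granted within a 1- or 3-year interval.
    """
    valid = 0
    for status_type, approved in grants:
        years = VALIDITY_YEARS.get(status_type)
        if years is None:
            continue  # unknown status type; skip it
        # Start of the validity window: N years before the as-of date
        # (approximate; ignores Feb. 29 edge cases for simplicity).
        window_start = date(as_of.year - years, as_of.month, as_of.day)
        if window_start <= approved <= as_of:
            valid += 1
    return valid

# Illustrative records: two CW-1 grants (1-year validity) and one H1B grant
# (3-year validity), counted as of June 30, 2012.
grants = [
    ("CW-1", date(2012, 1, 15)),  # within the 1-year window -> counted
    ("CW-1", date(2010, 5, 1)),   # outside the 1-year window -> not counted
    ("H1B", date(2010, 3, 1)),    # within the 3-year window -> counted
]
print(count_valid_statuses(grants, date(2012, 6, 30)))  # prints 2
```

The same windowing logic extends to EADs by treating them as a status type with a 1-year validity period, consistent with USCIS's statement that most EADs are valid for 1 year.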
In addition, we conducted interviews with officials from the Saipan Chamber of Commerce (Chamber) and provided the Chamber with a list of questions on how pending federal actions have impacted CNMI businesses. Officials responded to the questions and also distributed them to their members, seven of whom submitted responses. We also obtained data on the CNMI’s gross domestic product (GDP) from the U.S. Bureau of Economic Analysis. We reviewed economics literature on the effects of uncertainty about future economic conditions on business investments, and data from the 2011-2012 HVS Hotel Development Cost Survey and a 2010 GAO survey of CNMI businesses. To provide the status of federal efforts to support worker training in the CNMI, we obtained information on the Department of Labor’s (DOL) Workforce Investment Act (WIA) grant, such as the grant’s terms and conditions, performance and financial reports, and notice of obligations; the Department of the Interior (DOI) technical assistance grant that provides funding for worker training programs, such as grant terms and conditions, the most recent performance report submitted for the grant, and federal award amounts; and DHS information on the amount of funds DHS had collected from its transitional work permit program and transferred to the CNMI Treasury to support CNMI vocational curricula and program development. Further, we conducted interviews with DOL officials from the Employment and Training Administration, including the grant manager responsible for monitoring WIA programs in the CNMI; DOI officials from the Office of Insular Affairs; and DHS officials from the USCIS Accounting and Reporting Bureau and USCIS Office of Chief Counsel. In addition, we obtained information on whether DHS funds would be subject to the Single Audit Act from the CNMI’s Office of the Public Auditor and one of the private companies generally responsible for completing the CNMI audits required under the act. 
In general, to establish the reliability of the USCIS data used to document immigration benefits in the CNMI, the CNMI Department of Finance data used to document the number of U.S. and foreign workers in the CNMI, and DHS budget data, we systematically obtained information about the way in which the data were collected and tabulated. When possible, we checked for consistency across data sources. Although the CNMI Department of Finance data provided by the CNMI Department of Commerce had a limitation, we determined that the available data were adequate and sufficiently reliable for the purposes of our review. We conducted this performance audit from January 2012 to September 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence we obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. CNRA includes the following statement of congressional intent in establishing federal immigration law in the CNMI, which we present here verbatim:

a. Immigration and Growth.—In recognition of the need to ensure uniform adherence to long-standing fundamental immigration policies of the United States, it is the intention of the Congress in enacting this subtitle—

1. to ensure that effective border control procedures are implemented and observed, and that national security and homeland security issues are properly addressed, by extending the immigration laws (as defined in section 101(a)(17) of the Immigration and Nationality Act (8 U.S.C. 1101(a)(17)), to apply to the Commonwealth of the Northern Mariana Islands (referred to in this subtitle as the "Commonwealth"), with special provisions to allow for—

A. the orderly phasing-out of the nonresident contract worker program of the Commonwealth; and

B. the orderly phasing-in of Federal responsibilities over immigration in the Commonwealth; and

2. to minimize, to the greatest extent practicable, potential adverse economic and fiscal effects of phasing-out the Commonwealth's nonresident contract worker program and to maximize the Commonwealth's potential for future economic and business growth by—

A. encouraging diversification and growth of the economy of the Commonwealth in accordance with fundamental values underlying Federal immigration policy;

B. recognizing local self-government, as provided for in the Covenant To Establish a Commonwealth of the Northern Mariana Islands in Political Union With the United States of America through consultation with the Governor of the Commonwealth;

C. assisting the Commonwealth in achieving a progressively higher standard of living for citizens of the Commonwealth through the provision of technical and other assistance;

D. providing opportunities for individuals authorized to work in the United States, including citizens of the freely associated states; and

E. providing a mechanism for the continued use of alien workers, to the extent those workers continue to be necessary to supplement the Commonwealth's resident workforce, and to protect those workers from the potential for abuse and exploitation.

b. Avoiding Adverse Effects.—In recognition of the Commonwealth's unique economic circumstances, history, and geographical location, it is the intent of the Congress that the Commonwealth be given as much flexibility as possible in maintaining existing businesses and other revenue sources, and developing new economic opportunities, consistent with the mandates of this subtitle. 
This subtitle, and the amendments made by this subtitle, should be implemented wherever possible to expand tourism and economic development in the Commonwealth, including aiding prospective tourists in gaining access to the Commonwealth's memorials, beaches, parks, dive sites, and other points of interest. CNRA states that DOL is to base its determination of whether to extend the transition period on the labor needs of legitimate businesses in the CNMI. CNRA states that in determining these needs, DOL may consider, among other relevant factors, the following, which we present here verbatim:

1. government, industry, or independent workforce studies reporting on the need, or lack thereof, for alien workers in the Commonwealth's businesses;

2. the unemployment rate of U.S. citizen workers residing in the Commonwealth;

3. the unemployment rate of aliens in the Commonwealth who have been lawfully admitted for permanent residence;

4. the number of unemployed alien workers in the Commonwealth;

5. any good faith efforts to locate, educate, train, or otherwise prepare U.S. citizen residents, lawful permanent residents, and unemployed alien workers already within the Commonwealth, to assume those jobs;

6. any available evidence tending to show that U.S. citizen residents, lawful permanent residents, and unemployed alien workers already in the Commonwealth are not willing to accept jobs of the type offered;

7. the extent to which admittance of alien workers will affect the compensation, benefits, and living standards of existing workers within those industries and other industries authorized to employ alien workers; and

8. the prior use, if any, of alien workers to fill those industry jobs, and whether the industry requires alien workers to fill those jobs. 
In April 2010, DOI recommended that Congress consider permitting foreign workers who had lawfully resided in the CNMI for a minimum of 5 years—a group DOI estimated at 15,816 individuals—to apply for long-term resident status under the Immigration and Nationality Act. Specifically, DOI recommended that Congress consider allowing these workers to apply for one of the following: (1) U.S. citizenship; (2) permanent resident status leading to U.S. citizenship (per the normal provisions of the Immigration and Nationality Act relating to naturalization), with the 5-year minimum residence spent anywhere in the United States or its territories; or (3) permanent resident status leading to U.S. citizenship, with the 5-year minimum residence spent in the CNMI. Additionally, DOI noted that under U.S. immigration law, special status is provided to individuals who are citizens of the freely associated states (the Federated States of Micronesia, the Republic of the Marshall Islands, and the Republic of Palau). Following this model, DOI suggested that foreign workers could be granted a nonimmigrant status similar to that negotiated for citizens of the freely associated states, under which the workers would be allowed to live and work either in the United States and its territories or in the CNMI only. In April 2011, legislation introduced in Congress proposed CNMI resident status for certain long-term residents. To qualify for this status, an individual must be (1) born in the CNMI between January 1, 1974, and January 9, 1978; (2) classified by the CNMI government as a permanent resident; (3) a spouse or child of an individual covered by (1) or (2); or (4) an immediate relative of a U.S. citizen on May 8, 2008. The legislation is currently pending in the U.S. House of Representatives. The timeline in table 3 summarizes the actions DHS took to implement the final rule on the transitional work permit program for the CNMI. 
DHS received 146 public comments on its interim rule to establish the transitional work permit program for foreign workers in the CNMI, including comments from the CNMI government, the Saipan Chamber of Commerce, CNMI businesses, and U.S. and foreign workers. The comments expressed concern on a variety of issues, such as the ability of permit holders to conduct foreign travel, the fees associated with the permit, the permit's validity period, whether foreign workers already living and working in the CNMI should be given first preference for the permits, and whether certain occupational categories, such as domestic workers, should be excluded from the final rule. In its final rule, DHS said that it considered all 146 public comments it received on its interim rule and made some changes to the final rule based on those comments, including the following examples:

Travel. DHS modified the final rule to allow nationals of the Philippines, who made up a majority of the foreign workforce in the CNMI in 2010 according to CNMI Department of Finance data, to travel from the CNMI to the Philippines via a direct Guam transit without violating their CW-1 or CW-2 permit status.

Fees associated with the CW-1 permit. DHS modified the final rule to make the petitioner of the CW-1 worker, rather than the worker, responsible for submitting the $85 biometric fee for each worker seeking a work permit in the CNMI. However, according to DHS, either the worker or the employer may pay the fee. DHS also authorized need-based fee waivers for spouses and children applying for derivative CW-2 permits.

CW-1 permit's validity period. DHS clarified the CW-1 permit's validity period but did not extend it. DHS also provided that, in the case of termination of employment, the worker would have 30 days in which to find a new petitioning employer and would not be considered to have violated the CW-1 permit if the new employer files a petition for the worker within that time. 
Further, in the case of a change of employment, DHS authorized the worker to begin the new employment upon filing of the new petition.

Permit preferences. DHS made no changes to the final rule to give first preference for worker permits to foreign workers already living and working in the CNMI. However, DHS did make a significant change to its work authorization regulations, authorizing the continued employment of lawfully present workers who were beneficiaries of CW-1 petitions filed on or before November 27, 2011, pending adjudication of the petitions. According to DHS, this provision allowed large numbers of workers whose work authorization under their former CNMI government-authorized "umbrella permits" expired on that date to continue their employment while USCIS worked through the petitions filed for them.

Exclusions of occupational categories. DHS did not exclude any occupational categories, such as dancers, domestic workers, or hospitality service workers, from the final rule. DHS said it had considered excluding these categories in the interim rule because of human trafficking concerns. DHS also strengthened the attestations required of petitioning employers regarding compliance with applicable laws and emphasized the limits on the ability of individual households to petition for domestic workers, stemming from the rule's requirement that petitions be filed by legitimate businesses. DHS made other minor changes to the final rule to clarify certain references or remove references that were no longer relevant.

In addition to the person named above, Emil Friberg, Assistant Director; Kira Self; Ben Bolitzer; Ashley Alley; Ming Chen; and Reid Lowe made key contributions to this report. Additional assistance was provided by David Dayton, Marissa Jones, R. Gifford Howland, and Julia A. Roberts.
In November 2009, the United States applied U.S. immigration law to the CNMI, as required by CNRA. To minimize the potential for adverse effects on the CNMI economy, CNRA established a 5-year transition period scheduled to end in 2014. CNRA required DHS to establish a transitional work permit program for foreign workers in the CNMI and annually reduce the number of permits issued, reducing them to zero by the end of the transition period. CNRA also required DOL to determine whether to extend the transition period past 2014, based on an assessment of the CNMI's labor needs. CNRA further required GAO to report on the implementation and economic impact of federal immigration law in the CNMI. This report (1) assesses the status of federal implementation of the transitional work permit program, (2) examines economic implications for the CNMI of pending federal actions, and (3) provides the status of federal efforts to support worker training in the CNMI. GAO reviewed CNRA, U.S. regulations, and information from federal agencies and the CNMI government, and interviewed U.S. government officials and private sector representatives. On September 7, 2011, the Department of Homeland Security (DHS) issued a final rule establishing a transitional work permit program in the Commonwealth of the Northern Mariana Islands (CNMI) for foreign workers not otherwise admissible under federal law. The final rule addressed key requirements of the Consolidated Natural Resources Act of 2008 (CNRA); for example, the rule sets the permit allocations for fiscal years 2011 and 2012. As of July 2012, DHS had processed about half of the petitions for work permits that employers submitted in fiscal year 2012. The DHS decision on its permit allocation for fiscal year 2013 and a Department of Labor (DOL) decision on whether and when to extend the transition period, both required by CNRA, are pending. DHS plans to announce the permit allocation for fiscal year 2013 by the end of September 2012. 
DHS has identified one source of CNMI workforce data that it intends to use for its annual work permit allocations. However, the data source does not provide numbers of U.S. and foreign workers by industry, which could help DHS predict future demand for foreign workers. According to a senior DHS official, DHS has not made public the types of information, including its methodology, that it will publish with future permit allocations. Knowledge of DHS's methodology could help allay any public uncertainty regarding future access to foreign workers in the CNMI. DOL has not determined whether to extend the transition period, according to DOL officials, and is not required to do so until July 2014. DOL has identified multiple sources of data on the CNMI labor market, including a source that provides the number of workers in the CNMI by citizenship and industry. DOL has also identified the methodology it plans to use in making its determination. According to DOL officials, DOL plans to estimate changes in CNMI employment and gross domestic product that might result from a reduction in foreign workers. These data sources and analyses could help DHS assess workforce needs and determine its annual permit allocation. Uncertainty about the impact of the pending DHS and DOL decisions on access to foreign workers may be limiting business investment in the CNMI. Foreign workers made up more than half of the CNMI workforce in 2012, and CNMI businesses reported challenges in finding replacements for foreign workers. Some CNMI businesses indicated that uncertainty over pending federal actions has caused them to limit their plans for future investments in the CNMI. DOL, the Department of the Interior (DOI), and DHS made available a combined total of about $6.5 million to train workers in the CNMI in fiscal years 2010 through 2012. DOL provided annual grants that support worker services. 
DOI provided a grant in 2011 to support on-the-job training programs, in response to CNRA requirements. As of July 2012, DHS had transferred to the CNMI Treasury about $1.8 million it had collected through its permit program for CNMI vocational educational curricula and program development, as required by CNRA. Information on the use of DOL and DOI grants is available to Congress on request, but DHS does not collect information on the use of funds it transfers to the CNMI Treasury. GAO recommends that the Secretary of Homeland Security (1) provide, on publication of its permit allocations, the methodology used and (2) use DOL analyses as a factor in deciding future permit allocations. DHS agreed to publish its methodology but stated that it would wait until DOL's analyses are available to review them before deciding whether to use them.
When MDA was given the mission to develop a global integrated Ballistic Missile Defense System (BMDS), DOD's intention was for MDA to develop missile defense elements, such as the proposed interceptor and radar sites in Europe, and then transfer the elements to the lead services designated to operate and support them. We have previously reported that the transition process may, for some missile defense elements, end at a point that DOD calls transfer—the reassignment of MDA program office responsibilities to a service. According to MDA and Office of the Under Secretary of Defense for Acquisition, Technology and Logistics officials, not all BMDS elements will ultimately transfer; the decision to transfer them will be made on a case-by-case basis, and the conditions under which this may happen will be identified in agreements between MDA and the services for each element. In September 2008, we reported that DOD had taken some initial steps to plan for long-term operations and support of ballistic missile defense elements but that its planning efforts were incomplete because of difficulties in transitioning and transferring responsibilities from MDA to the services and in establishing operations and support cost estimates. We noted that DOD had established limited operations and support cost estimates for ballistic missile defense elements in its Future Years Defense Program, DOD's 6-year spending plan; however, the estimates did not fully reflect the total life cycle cost of the BMDS. As a result, we reported that the operations and support costs that had been developed were not transparent to DOD senior leadership and congressional decision makers, and we recommended that DOD establish a standard process for long-term support planning for the BMDS and a requirement to estimate BMDS operations and support costs. 
DOD has begun planning for the construction and implementation of the European missile defense sites; however, challenges affecting DOD’s implementation of ballistic missile defenses in Europe remain. First, neither Poland nor the Czech Republic has ratified key bilateral agreements with the United States, limiting DOD’s ability to finalize key details of the sites, such as how security will be provided. Second, DOD’s efforts to establish the roles and responsibilities of key U.S. stakeholders for the European sites remain incomplete. Without clear definitions of the roles that MDA and the services will be responsible for and agreement on criteria for transfer, DOD will continue to face uncertainties in determining how the European Interceptor Site and the European Midcourse Radar Site will be sustained over the long term. DOD has made progress in planning for the construction, implementation, and operations and support for the European missile defense sites. In 2002, the President signed National Security Presidential Directive 23 that called for missile defense capabilities to protect the United States, its deployed forces, and its allies. As part of that direction, MDA considered several European sites where it could base a missile defense capability to provide additional U.S. protection and could provide a regional defense for its European allies against a missile launch from Iran. DOD approached both Poland and the Czech Republic about basing elements of its proposed European missile defense system, and MDA briefed the President about the potential capability in 2003. Both U.S. and Polish officials told us that Poland was a likely host site because many of the trajectories from Iran went through Poland. In May 2006, the Czech government sent a formal letter to the United States to request that the United States consider placing missile defense assets in the Czech Republic. 
DOD has completed site selection and begun site design for the European Interceptor Site in Poland and the European Midcourse Radar Site in the Czech Republic. The proposed European Interceptor Site is located outside of Slupsk, Poland, near the Baltic Sea. The site is planned to consist of 10 two-stage, silo-based interceptors—modified versions of the three-stage interceptors located at Fort Greely, Alaska, and Vandenberg, California. The site is designed to protect the U.S. homeland and U.S. allies from incoming ballistic missiles launched from the Middle East. The initial MDA estimate indicated that the site would be operational by 2013, and the Army is the lead service that will be tasked with operating and supporting the interceptor site once it becomes operational. Site analysis is under way at the European Interceptor Site, but no physical site preparation or construction has begun. The photograph in figure 1 was taken at the site in February 2009 and shows the area where the planned interceptor field will be located. The proposed European Midcourse Radar Site is located at the Brdy military training area, approximately 90 kilometers southwest of Prague, Czech Republic. This land-based X-band radar will provide ballistic missile tracking data to the European Interceptor Site as well as the greater BMDS. The radar proposed for deployment to the Czech Republic is currently located at Kwajalein Atoll in the Marshall Islands. The radar will undergo an upgrade before its installation in the Czech Republic. The Air Force is the lead service that will be tasked with operating and supporting the radar site once it becomes operational, which MDA initially estimated would occur in 2013. Site analysis is under way at the European Midcourse Radar Site, but no physical site preparation or construction has begun. 
As part of ballistic missile defenses in Europe, DOD is considering the placement of an AN/TPY-2 mobile forward-based radar at another site in Europe in addition to the European Interceptor Site and the European Midcourse Radar Site. The transportable, land-based X-band radar is being considered in order to provide additional warning of ballistic missile launches from a location that is closer to Iran. The site for this radar has not yet been proposed, and at this time, negotiations with potential host nations have not been authorized. The State Department and DOD have negotiated the key bilateral Ballistic Missile Defense Agreements necessary to move forward on the European interceptor and radar sites. In 2008, the United States, Poland, and the Czech Republic signed bilateral Ballistic Missile Defense Agreements that formally approved the basing of the European Interceptor Site and the European Midcourse Radar Site, and both agreements are now waiting for ratification by the Polish and Czech parliaments. The Ballistic Missile Defense Agreements are the first of several necessary agreements expected to govern the fielding of ballistic missile defenses in each country. The Ballistic Missile Defense Agreements establish the rights and obligations of the United States, Poland, and the Czech Republic specific to each site and provide general guidelines on personnel, construction, and land use, among other things. A second key set of agreements, supplementary arrangements to the NATO Status of Forces Agreement, is expected to govern ballistic missile defense at both sites. The overall NATO Status of Forces Agreement was created soon after the NATO alliance was established in 1949 and sets the general status of forces for member nations as they operate in each other's territories. The supplementary Status of Forces Agreement adds mission-specific matters addressed only broadly in the NATO Status of Forces Agreement, such as the legal status of U.S. 
civilian and military personnel working at each site. The Czech Republic and the United States have negotiated a supplementary Status of Forces Agreement, and it is now waiting for ratification by the Czech parliament. However, the supplementary Status of Forces Agreement with Poland had not been completely negotiated as of June 2009. After the Ballistic Missile Defense Agreements and supplementary Status of Forces Agreements are ratified by each host nation's parliament, implementing arrangements will be negotiated. The implementing arrangements will serve as the executing documents for both of these agreements and address the day-to-day working relationship between the countries on a range of issues, including security. NATO's overall role in European ballistic missile defense is still under consideration. Although NATO has not been party to the bilateral negotiations between DOD and the host nations, DOD and NATO have worked together to begin addressing interoperability of the U.S. BMDS and NATO's Active Layered Theater Ballistic Missile Defense system. NATO has also taken recent steps to show support for the European Interceptor Site and European Midcourse Radar Site. For example, NATO's 2008 Bucharest Summit Declaration recognized that ballistic missile proliferation poses an increasing threat to NATO, and recognized that the European missile defense sites would provide a "substantial contribution" to NATO's protection. NATO stated that it is exploring ways to link U.S. missile defense assets with current NATO missile defense efforts. DOD has also made progress in coordinating with key U.S. stakeholders and in establishing the Army Corps of Engineers-Europe District as the construction agent for both sites. DOD has established lead services for both the interceptors and the radar, and the Army and Air Force have identified which command will be specifically tasked to lead each ballistic missile element. 
The Army’s Space and Missile Defense Command has been assigned as the lead command for the European Interceptor Site, and the Air Force Space Command is the lead command for the European Midcourse Radar Site. As lead services, both the Army and Air Force have conducted planning sessions and negotiated roles and relationships with MDA. For example, MDA and the Army and Air Force are establishing roles and responsibilities for the long-term operations and support of the European sites through negotiation of Overarching Memorandums of Agreement and ballistic missile defense element–specific annexes to the overarching agreements. However, with the exception of the Overarching Memorandum of Agreement between MDA and the Army, completed in January 2009, these agreements are not yet complete. In addition, the Army Corps of Engineers-Europe District is the construction agent for both the European Interceptor Site and the European Midcourse Radar Site. As such, the Corps is responsible for issuing and commissioning site preparation and construction contracts for the sites. The Corps will manage the contracts to ensure that the sites are developed and constructed to meet MDA and service facility requirements. However, no contracts can be issued or site preparation commissioned until the Ballistic Missile Defense Agreements and supplementary Status of Forces Agreements with the host nations are signed and ratified. For the Czech Republic, construction may begin after ratification of agreements between the United States and the Czech Republic; however, for Poland, construction may begin only after ratification of the agreements by both countries. 
MDA officials told us that because Poland and the Czech Republic did not ratify their respective agreements by spring 2009, construction at both sites will be delayed beyond the target completion dates of the first quarter of fiscal year 2013 for the radar site and the second quarter of fiscal year 2013 for the interceptor site. While DOD has made progress with key international partners and U.S. stakeholders on the planning and implementation of missile defenses in Europe, several challenges affect DOD's ability to carry out its plans for ballistic missile defenses in Europe. Neither Poland nor the Czech Republic has ratified either its overall Ballistic Missile Defense Agreement or a supplementary Status of Forces Agreement. The lack of ratified agreements limits DOD's ability to negotiate specific details, such as security, that are expected to be formalized in implementing arrangements to each overall agreement. Table 1 shows the status of these key documents. U.S. and Polish officials also told us that the ratification process in Poland is on hold until the supplementary Status of Forces Agreement is negotiated and the new administration establishes its policy toward ballistic missile defenses in Europe. Additionally, U.S. officials indicated that the ratification process is also on hold in the Czech Republic pending the new administration's policy. While DOD's $7.8 billion fiscal year 2010 budget proposal for missile defense reflects an increased emphasis on bolstering near-term capabilities to respond to specific theater threats, as opposed to an overall long-term global ballistic missile defense capability, DOD officials have stated that the European missile defense capability in particular will be reevaluated as part of DOD's Quadrennial Defense Review, which is expected to be completed in early 2010. 
In the interim, the lack of negotiated and ratified agreements affects many aspects of DOD’s ability to plan for the sites, ranging from the services’ ability to plan for the numbers of personnel that will be required to the types of support infrastructure that will be needed for the personnel. For example, the exact numbers of security personnel needed at each site will not be finalized until the implementing arrangements are complete and decisions are made regarding the extent to which the Polish and Czech governments will contribute security personnel to the sites. In addition, U.S. European Command is leading meetings, working groups, and consultations on land use considerations in Poland, but the specific topics included in the land use implementing arrangement cannot be finalized until Poland and the United States have agreed on the contents of the bilateral supplementary Status of Forces Agreement. Moreover, Congress has placed restrictions on DOD’s ability to fund procurement, site activation, military construction, and deployment of a missile defense system at the sites until the agreements have been ratified. Both the 2008 and 2009 National Defense Authorization Acts prohibit DOD from funding such activities at the radar site until the Czech parliament ratifies and the Prime Minister approves the missile defense and supplementary status of forces agreements. However, in Poland such activities can begin only after ratification and approval of agreements by both countries. Once DOD is able to begin, construction of both European sites is expected to take approximately 3 years to complete. Completion of the sites’ weapon systems installation, integration, and testing will continue after completion of construction. 
Finally, DOD’s efforts to finalize roles and responsibilities for the European sites remain incomplete because MDA and the services have not yet made important determinations, such as establishing the criteria that must be met before the transfer of specific European missile defense sites to the services. MDA has been directed by DOD since 2002 to begin planning for the transfer of missile defense elements, including the direction to coordinate with the services on resources and personnel needed to deliver an effective transition of responsibility. In addition, our prior work assessing interagency collaboration has shown that agreed-upon roles and responsibilities that clarify who will do what, organize joint and individual efforts, and facilitate decision making are important to agencies’ abilities to enhance and sustain their collaborative efforts. While the Army was designated lead service for the European Interceptor Site in October 2006 and the Air Force was designated lead service for the European Midcourse Radar Site in August 2007, the specific responsibilities related to these roles remain undefined. MDA and the services have begun to establish these roles and responsibilities through Overarching Memorandums of Agreement, which are intended to outline the general delineation of responsibilities for ballistic missile defense development and ongoing operations and support as each element transitions and transfers from MDA to the services. While the Army and MDA completed their Overarching Memorandum of Agreement in January 2009, negotiations between the Air Force and MDA on their Overarching Memorandum of Agreement are ongoing. In addition, the Overarching Memorandums of Agreement are expected to include element-specific annexes for each of the ballistic missile defense elements, including the European Midcourse Radar Site and the Ground-Based Midcourse Defense, which will include details on the European Interceptor Site. 
The annexes are expected to specifically state the criteria that must be met by MDA before the elements transfer to the Army and the Air Force and detail specific roles and responsibilities for each organization. Further, the annexes will indicate the extent to which MDA will retain control of a missile defense element’s materiel development and the services will assume control of the remaining supporting responsibilities, such as doctrine, organization, training, leader development, personnel, and facilities. However, MDA and the Army and Air Force are still negotiating the annexes for the Ground-Based Midcourse Defense and the European Midcourse Radar Site and it is unclear when these annexes will be complete. As a result, the roles and responsibilities specific to the European sites remain undefined because MDA and the services have not yet agreed to the terms of transfer that are to be established in these annexes. Table 2 shows the status of the Overarching Memorandums of Agreement and element-specific annexes being negotiated between MDA and the Army and Air Force. Until specific roles and responsibilities for the sites are established and key criteria that will guide the transfer of the elements from MDA to the Army and Air Force are defined, uncertainty will persist in how the European Interceptor Site and the European Midcourse Radar Site will be sustained over the long term. The delay in ratification creates an opportunity for DOD and MDA to address some of the planning challenges DOD faces for the European sites. DOD’s initial cost estimates for total military construction and operations and support costs for ballistic missile defenses in Europe had significant limitations. First, DOD’s fiscal year 2009 military construction estimates did not fully account for all costs at the European Interceptor Site and the European Midcourse Radar Site and consequently could increase significantly. 
Second, DOD’s operations and support cost estimates are not complete, and it is unclear how these costs will be funded over the elements’ life cycles. Without full information on total military construction and operations and support costs for the European missile defense sites, DOD and congressional decision makers do not have a sound basis on which to evaluate the investment required to implement plans for ballistic missile defenses in Europe. DOD’s initial military construction cost estimates for ballistic missile defenses in Europe have significant limitations that restrict Congress’s ability to evaluate the required investment. Key principles for cost estimating state that complete cost estimates are important in preparing budget submissions and for assessing the long-term affordability of a program. However, DOD’s fiscal year 2009 estimates, the first military construction estimates for ballistic missile defenses in Europe, did not fully account for all costs at the sites. MDA initially submitted military construction cost estimates for the European Interceptor Site and the European Midcourse Radar Site to Congress in February 2008 for inclusion in DOD’s fiscal year 2009 budget. MDA projected that a total of $837.5 million would be required to complete site preparation and construction activities at the sites—$661.4 million for the interceptor site in Poland and $176.1 million for the radar site in the Czech Republic. However, the initial estimates did not include all costs, primarily because MDA developed and submitted the military construction estimate to Congress before key site design work had been completed and without an Army Corps of Engineers review of the estimate. 
MDA stated that its approach was based on initial congressional authorization to field ballistic missile defense capabilities with research, development, testing, and evaluation funds; however, the fiscal year 2008 National Defense Authorization Act required that MDA begin using military construction funds for ballistic missile defense site construction for the fiscal year 2009 budget. Military construction regulations stipulate that a military construction program should reach the 35 percent design phase, a key construction design milestone, and that the Army Corps of Engineers should review the military construction estimates before they are submitted to Congress. However, MDA, asserting that it had statutory authority enacted by Congress to field initial ballistic missile defense capabilities with research, development, testing, and evaluation funds, developed and submitted its fiscal year 2009 military construction estimates without following traditional military construction requirements. MDA officials told us that MDA, in an effort to meet budget and construction timelines, developed and submitted its initial military construction estimates to Congress without completing key site design work. Army Corps of Engineers officials—although not involved in the development of the initial fiscal year 2009 military construction estimates—reaffirmed that the initial estimates were developed without completing key site design work. According to these officials, MDA based its estimates on assumptions and previous design experience from Fort Greely and other overseas operations, such as Shariki, Japan, rather than on design data from the European sites, and therefore did not have complete and accurate information about the sites when it submitted its estimates to Congress for the fiscal year 2009 budget. For example, the initial figures overestimated the availability of local resources at both sites, such as local power supply, water and wastewater treatment facilities, and emergency support services. 
Army Corps of Engineers officials said that the Corps did not have the opportunity to provide input to or independently review MDA’s initial military construction estimates before they were submitted, as would typically be required under DOD military construction regulations. MDA’s initial military construction estimates were submitted in February 2008, but the Corps did not begin providing input to the design for the European Midcourse Radar Site and the European Interceptor Site until after it was issued design directives for the sites in September and October 2008, respectively. An Army Corps of Engineers official told us that the Corps has since provided significant input on MDA’s military construction estimates and has worked with MDA to refine the cost estimates based on updated data. However, an Army Corps of Engineers official stated that had the Corps been involved in the early planning and development of the military construction cost estimates for the sites, given its experience and prior work in Eastern Europe, it may have been able to influence the initial estimates. According to this official, the Corps likely would have recommended that more studies of the sites be performed, and more actual data from those studies would then have informed the estimates before they were submitted to Congress for the fiscal year 2009 budget. Additionally, DOD’s initial military construction estimates for the interceptor and radar sites do not include Army and Air Force base operating support costs, such as military personnel housing. The Army, as the lead service designated to operate the European Interceptor Site, has begun planning for base operating support facilities and estimates that it will need $88 million in military construction funds to build the facilities that it requires for the Army personnel who are expected to be at the site. 
However, the Army’s estimated facility and personnel requirements are based on assumptions that may change. For example, the estimate assumes that Poland, the host nation, will contribute military personnel for security at the interceptor site, even though the United States and Poland have not yet agreed on Poland’s security personnel contribution. The implementing arrangements to be negotiated between the United States and Poland will determine the number of security personnel that Poland will contribute to the site, and this, in turn, will drive the Army’s personnel and facility requirements at the site. Until these implementing arrangements are negotiated and Army personnel determinations are finalized, Army base support construction estimates for the interceptor site will be based on assumed host nation contributions for security, and the total Army military construction requirements at the European Interceptor Site will not be confirmed. In contrast, the Air Force, as the lead service for the European Midcourse Radar Site, has not yet developed any military construction estimates for base support facilities at the site. Air Force officials have acknowledged that the Air Force will require, at a minimum, dining facilities; some form of military housing; and morale, welfare, and recreation services at the radar site to support Air Force personnel, but the Air Force has not yet determined its total base support facility requirements because Air Force personnel requirements are not finalized. The Air Force is anticipating that the Czech Republic will contribute personnel to assist the United States in providing security at the site, but it is unclear how many personnel the Czech government will provide. 
The implementing arrangements that will be negotiated between the United States and the Czech Republic are expected to determine the number of security personnel that the Czech Republic will contribute to the site, which will drive the Air Force’s personnel and facility requirements at the site. Accordingly, the total Air Force military construction requirements at the European Midcourse Radar Site will not be confirmed until the implementing arrangements are negotiated and the Air Force personnel concept is finalized. A DOD official stated that, until that point, any Air Force base support construction estimates for the radar site will be based on assumed host nation contributions for security. As a result, DOD’s current military construction cost estimates for base support facilities at the European missile defense sites should be considered preliminary. Another military construction cost that has not been included in the initial estimates is the cost to protect the European Midcourse Radar Site against a possible high-altitude electromagnetic pulse event. The Air Force believes that protection of the radar against a high-altitude electromagnetic pulse event is important to ensuring survivability of the site and has included it as part of its required criteria for transfer. However, Air Force officials told us that MDA is not planning to protect the site against this type of event and has not accounted for those costs in its military construction estimates for the site. MDA and the Air Force have not reached agreement on whether the site will include these protective measures and, if so, who will pay for them. Air Force officials told us that the costs to protect the site could increase the total military construction cost for the radar mission facilities by 10 to 20 percent if the protective steps are included in the design phase and construction of the radar. 
If the protective action is taken after the radar site has been constructed, the cost could be much higher. Further, MDA did not account for foreign currency fluctuations in its estimates. Unfavorable currency exchange rate fluctuations could increase the total cost of construction because military construction funds will be obligated in U.S. dollars while site preparation and construction contracts will be awarded in euros. Although it is possible that currency fluctuations could occur in DOD’s favor, an Army Corps of Engineers official estimated that an additional 20 percent of the total military construction cost estimate should be set aside for possible currency fluctuations. Without accounting for possible changes in the exchange rate, DOD risks exceeding its budgeted military construction funds if currency rates fluctuate unfavorably. As a result of the above limitations, DOD’s projected military construction costs for the European Interceptor Site and the European Midcourse Radar Site are expected to increase significantly from DOD’s original $837.5 million estimate in the fiscal year 2009 budget. In May 2009, an Army Corps of Engineers official told us that after analyzing design data, the Corps recommended that MDA increase its military construction estimates for the European sites to almost $1.2 billion—$803 million for the European Interceptor Site and $369 million for the European Midcourse Radar Site. Whether MDA will accept this recommendation, and the extent to which total military construction cost estimates at the European sites will increase, remains unclear. Despite the expected increase in projected military construction costs, MDA has not provided Congress updated military construction estimates since the initial estimates were submitted for the fiscal year 2009 budget in February 2008. 
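The scale of the suggested currency set-aside can be illustrated with a back-of-the-envelope calculation. The sketch below is a hypothetical illustration only, not a DOD or Corps costing method; it simply applies the 20 percent reserve to the Corps' revised May 2009 site estimates cited above.

```python
# Hypothetical sketch: apply the Army Corps of Engineers official's suggested
# 20 percent currency-fluctuation set-aside to the revised construction
# estimates cited in this report. Figures are in millions of U.S. dollars.
def with_currency_reserve(estimate_musd: float, reserve_rate: float = 0.20) -> float:
    """Pad a cost estimate (in millions of dollars) with a currency reserve."""
    return estimate_musd * (1 + reserve_rate)

interceptor_site_musd = 803.0  # revised estimate, European Interceptor Site (May 2009)
radar_site_musd = 369.0        # revised estimate, European Midcourse Radar Site (May 2009)

combined_musd = interceptor_site_musd + radar_site_musd  # 1,172 -- "almost $1.2 billion"
padded_musd = with_currency_reserve(combined_musd)       # 1,406.4 with the 20 percent reserve
```

Under these figures, the 20 percent reserve alone would add roughly $234 million to the nearly $1.2 billion revised estimate, underscoring why unaccounted-for exchange rate risk could cause DOD to exceed its budgeted funds.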
Without complete information on the total military construction costs for the European missile defense sites, DOD and congressional decision makers do not have a sound basis on which to evaluate the investment required to implement plans for ballistic missile defenses in Europe or the extent to which those plans could divert resources from other national security priorities. MDA was appropriated $151.1 million in military construction funds for fiscal year 2009—$42.6 million for the European Interceptor Site and $108.5 million for the European Midcourse Radar Site. However, MDA will likely be unable to obligate any of these appropriated funds in fiscal year 2009 for site activation or military construction activities at the interceptor and radar sites because key bilateral agreements have not been ratified by the Polish and Czech parliaments. Moreover, the future of the sites is pending the outcome of the ongoing DOD review of plans for ballistic missile defense. According to MDA officials, MDA plans to request DOD and congressional authority to reprogram $50 million to $80 million of the $151 million to use for planning and design efforts at the European missile defense sites, but as of June 2009, no formal action had been taken. However, MDA plans to retain the residual military construction funds—an estimated $70 million to $100 million—to preserve DOD’s options for potential construction at those sites as the schedule for construction is determined. DOD’s operations and support cost estimates for ballistic missile defenses in Europe are not complete because they do not include operations and support costs for base operations managed by the Army and Air Force. 
While MDA has estimated the operations and support costs it will need for the interceptors and radar—approximately $612 million in the 2008-2013 Future Years Defense Program—this estimate does not include funds that the services may require to provide basing and support of the sites, such as facilities support, housing costs, and administration. Additionally, MDA and the Army and Air Force have not yet determined the full extent of these operations and support costs. Although MDA and the Army and Air Force have initiated the development of total operations and support cost estimates for the interceptor and radar sites, these estimates are not yet complete because key cost factors that will affect those estimates remain undefined. For example, the total number and distribution of U.S. military personnel, civilian contractors, and host nation-contributed military personnel that will be required to operate, support, and secure the sites will drive total operations and support costs but have not yet been determined. These determinations depend on the number of personnel that Poland and the Czech Republic will contribute for security at the sites, to be negotiated as part of the implementing arrangements. Without complete information on the true costs of operating and supporting the European sites, the usefulness of information regarding those sites in DOD’s Future Years Defense Program for congressional decision makers will be limited. Moreover, MDA and the Army and Air Force have not yet agreed on how the operations and support costs for the European Interceptor Site and the European Midcourse Radar Site will be funded over the elements’ life cycles or who will pay for these costs. As we have previously reported, operations and support costs are typically over 70 percent of a system’s total lifetime cost. Therefore, the future costs to operate and support the European sites over their lifetimes could reach billions of dollars. 
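The rule of thumb that operations and support typically exceeds 70 percent of a system's lifetime cost implies large out-year commitments relative to up-front spending. The sketch below is a hypothetical illustration of that arithmetic; the up-front figure is assumed for illustration and is not a DOD estimate for the European sites.

```python
# Hypothetical sketch of the rule of thumb cited above: operations and
# support (O&S) is typically over 70 percent of total lifetime cost.
# Treating an up-front (construction plus acquisition) figure as the
# remaining non-O&S share backs out the implied lifecycle totals.
def implied_lifecycle_cost(upfront_musd: float, os_share: float = 0.70):
    """Back out lifecycle total and O&S cost from an up-front figure (in $ millions)."""
    lifecycle_total = upfront_musd / (1 - os_share)
    return lifecycle_total, lifecycle_total * os_share

# Assumed $1.2 billion up front, for illustration only.
lifecycle_musd, os_musd = implied_lifecycle_cost(1200.0)
```

Under these assumptions, $1.2 billion of up-front spending would imply a lifecycle total of roughly $4 billion, with about $2.8 billion of that in operations and support, consistent with the observation that lifetime support costs could reach billions of dollars.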
In September 2008, we reported that MDA and the services had not yet agreed on which organization(s) will be responsible for funding operations and support costs for the European Interceptor Site and the European Midcourse Radar Site after fiscal year 2013 and over the elements’ life cycles. Although MDA and the Army have agreed on the overarching terms and conditions for the transition and transfer of elements from MDA to the Army, this agreement does not provide specific details on how operations and support costs will be funded following transfer of the European Interceptor Site. For the European Midcourse Radar Site, the Air Force and MDA are drafting an agreement that will establish, among other things, which organization(s) will have funding responsibilities for the radar, but it is unclear when this agreement will be complete. As part of DOD’s ballistic missile defense life cycle management process established in September 2008, DOD intends to pay for ballistic missile defense costs, including operations and support costs, other than those already agreed to be paid by the services, through defensewide accounts. In theory, these defensewide accounts would allow all ballistic missile defense costs to be clearly identified and would alleviate the pressure on the services’ budgets to fund operations and support for ballistic missile defense programs. However, MDA and the services have not yet determined the amount and duration of funding for the individual ballistic missile defense elements, such as the European Interceptor Site and the European Midcourse Radar Site, that will come from the defensewide accounts, and there are disagreements about what costs should be covered by these accounts. 
For example, according to Air Force officials, the Air Force position is that the defensewide accounts should cover all costs for the radar over its life cycle, whereas MDA officials told us that all Army and Air Force base operating support requirements related to the missile defense sites in Europe should be paid for by the services. Until MDA and the Army and Air Force determine which organization(s) will be responsible for funding European missile defense operations over the life cycles of those elements, these costs will not be reflected in the Future Years Defense Program. As a result, DOD and congressional decision makers will have difficulty assessing the affordability of the plans for missile defenses in Europe over time and uncertainty will persist regarding how these elements will be supported over the long term. DOD has made progress in planning for the implementation of the proposed ballistic missile defense sites in Europe. However, the future of the sites is currently unclear and largely depends on the outcome of DOD’s ongoing review of the ballistic missile defense program. This has, in turn, limited the willingness of Poland and the Czech Republic to complete and ratify necessary agreements with the United States. The delays in ratification of key agreements with Poland and the Czech Republic, however, create an opportunity to consider how MDA and the Army and Air Force should collaborate in the implementation of ballistic missile defenses in Europe and the future operations of the European Interceptor Site and the European Midcourse Radar Site. An opportunity now exists to more clearly define roles and responsibilities for the sites as well as establish key criteria that will guide the transition and transfer of the elements from MDA to the Army and Air Force. 
Planning for transition and transfer of the ballistic missile defense elements from MDA to the military services has been a persistent challenge that has hindered DOD’s ability to plan for the long-term support of the system. Without agreement on how the elements will transfer and clear definitions of the roles that MDA and the services will be responsible for, DOD will continue to face difficulties in determining how the European Interceptor Site and the European Midcourse Radar Site will be sustained in the near and long term. These sites will require a significant investment, but DOD has not yet provided Congress with an updated estimate of the costs for European ballistic missile defenses, restricting Congress’s ability to prepare for and weigh the trade-offs of a proposal that will likely cost billions of dollars over the long term. To date, MDA has not assessed the full costs of the sites, including not only mission-related costs incurred by MDA over the long term, but also some base operating support costs that may be borne by the services. Given the program’s limited cost information to date, potential increases in military construction costs, and other uncertainties surrounding future costs, such as the extent of host nation contributions to security, full information on the true cost of the European missile defense sites is increasingly important for decision makers as the new administration considers its position on missile defenses and evaluates policy options. It is therefore critical that congressional decision makers regularly receive complete cost information with which to evaluate budget requests in the near term and in the future to determine whether fielding plans are affordable over the long term. Until DOD develops accurate, realistic, and complete cost estimates for military construction and operations and support for ballistic missile defenses in Europe, the credibility of its budget submissions will continue to be a concern. 
Moreover, until MDA and the Army and Air Force reach agreement on how missile defense operations and support costs for the European Interceptor Site and the European Midcourse Radar Site will be funded over the long term, DOD risks that the services may not be financially prepared to operate and support these elements. We recommend that the Secretary of Defense take the following five actions: To improve planning for the long-term support of the ballistic missile defense sites in Europe, direct MDA, the Army, and the Air Force to finalize the Overarching Memorandums of Agreement and element-specific annexes that detail the specific roles and responsibilities for the European sites and define the criteria that must be met before the transfer of those sites from MDA to the Army and Air Force. To provide for military construction cost estimates for ballistic missile defenses in Europe that are based on the best available data, direct MDA, in coordination with the Army and Air Force, to provide Congress annually, in alignment with the budget, updated military construction cost estimates for the European Interceptor Site and the European Midcourse Radar Site that reflect the data gathered from all site design efforts since project initiation; have been independently reviewed and verified by the Army Corps of Engineers; account for all military construction costs for the sites, including Army and Air Force base support facility requirements, recognizing that certain assumptions about host nation contributions will have to be made; and include costs for possible currency fluctuations. 
To provide for more complete military construction estimates for future ballistic missile defense sites, such as the still-to-be-determined European site for the mobile radar system, direct MDA to follow military construction regulations by utilizing the Army Corps of Engineers to complete required site design and analysis and verify military construction cost estimates before submitting cost estimates to Congress. To improve fiscal stewardship of DOD resources for ballistic missile defense, direct MDA and the Army and Air Force, in time for the fiscal year 2011 budget submission, to complete life cycle operations and support cost estimates for the European Interceptor Site and the European Midcourse Radar Site and clearly define who is responsible for funding these operations and support costs over the elements’ life cycles. In written comments on a draft of this report, DOD concurred with three and partially concurred with two of our recommended actions. The department’s comments are reprinted in appendix II. DOD also provided technical comments, which we have incorporated as appropriate. DOD concurred with our recommendation that MDA, the Army, and the Air Force finalize the Overarching Memorandums of Agreement and element-specific annexes that detail the specific roles and responsibilities for the European sites and define the criteria that must be met before the transfer of those sites from MDA to the Army and Air Force. In its comments, DOD stated that the element-specific Army annexes are in coordination for estimated completion in calendar year 2009 and the Air Force Overarching Memorandum of Agreement is expected to be signed by the end of calendar year 2009. We believe these are positive steps. As noted in our report, we believe that an opportunity exists for DOD to clearly define roles and responsibilities for the sites as well as establish key criteria that will guide the transition and transfer of the elements from MDA to the Army and Air Force. 
The element-specific annexes are expected to specifically state the criteria that must be met by MDA before the elements transfer to the Army and the Air Force and to detail specific roles and responsibilities for each organization. It is therefore important for DOD to meet its estimated dates for finalizing the Army annexes and completing the MDA-Air Force Overarching Memorandum of Agreement, and, further, to negotiate Air Force element-specific annexes so that the crucial details that will guide the long-term support of the European sites are clearly defined. Until MDA and the Army and Air Force reach agreement on how these elements will transfer, DOD will continue to face difficulties in determining how the European Interceptor Site and the European Midcourse Radar Site will be sustained in the near and long term. DOD concurred with both of our recommendations to improve military construction cost estimates for ballistic missile defense sites. DOD concurred with our recommendation that MDA provide Congress annually updated military construction cost estimates for the European Interceptor Site and the European Midcourse Radar Site. DOD stated that the BMDS Life Cycle Management Process and the associated BMDS Portfolio provide an opportunity for MDA, the Army, and the Air Force to integrate military construction cost estimates. DOD noted that the BMDS military construction projects and associated estimates will continue to be coordinated with the Army Corps of Engineers for certification, independent cost estimating, and reviews for scope completeness and technical sufficiency. Furthermore, DOD stated that Army and Air Force base support facility requirements will be planned, programmed, budgeted, and executed by the services and will not be included in MDA’s BMDS Portfolio. 
Rather, DOD stated that the budgets for these sites will be collated and provided by the Office of the Secretary of Defense from the coordinated requirements submitted by MDA, the Army, and the Air Force. However, until the BMDS Life Cycle Management Process and the BMDS Portfolio are fully implemented, it is unclear whether they will facilitate improved military construction estimates for the European sites. Further, DOD did not set a date by which it would annually provide Congress updated military construction estimates for the sites. Our report explains the importance of providing complete BMDS military construction cost information to congressional and DOD decision makers on a regular basis, which is the impetus for this recommendation. Also, DOD concurred with our recommendation that for future ballistic missile defense sites, MDA follow military construction regulations by utilizing the Army Corps of Engineers to complete required site design and analysis and verify military construction estimates before submitting cost estimates to Congress. In its comments, DOD stated that it is MDA’s policy to follow appropriate regulations in execution of design and construction of BMDS sites and that MDA recognizes the Army Corps of Engineers as the DOD military construction agent for these projects, will follow military construction policy, and will remain responsive to DOD direction in deploying BMDS assets. DOD partially concurred with our two recommendations to improve fiscal stewardship of DOD’s operations and support resources. DOD partially concurred with our recommendation that MDA and the Army and Air Force complete life cycle operations and support cost estimates for the European Interceptor Site and the European Midcourse Radar Site in time for the fiscal year 2011 budget submission. 
In its comments, DOD stated that MDA will not be able to complete these cost estimates before the fiscal year 2011 budget submission, but that MDA will include available information on life cycle operations and support cost estimates in the fiscal year 2012 submission. DOD noted that information needed to complete a life cycle cost analysis will not be available until host nation ratifications are signed, site design is complete, and administration policy is set. While we understand the limitations that DOD faces in developing complete operations and support cost estimates before all of the details of the sites have been finalized, we continue to believe that it is crucially important for congressional decision makers to have the most up-to-date information on the long-term costs of the sites in order to assess the affordability of the proposed ballistic missile defenses in Europe. We continue to believe that the recommendation is valid and that MDA, the Army, and the Air Force should provide estimates of all known operations and support costs for the sites in the fiscal year 2011 budget. DOD also partially concurred with our recommendation that MDA and the Army and Air Force clearly define who is responsible for funding operations and support costs over the elements’ life cycles in time for the fiscal year 2011 budget submission. DOD noted that MDA will continue to work with the Army and Air Force to define responsibility for future operations and support cost funding, and reiterated that the Overarching Memorandums of Agreement between the lead services and MDA, which define responsibility for life cycle costs, have not yet been finalized. 
Determining responsibility for the long-term operations and support costs of the BMDS elements has been a persistent challenge for DOD and until MDA and the Army and Air Force determine which organization(s) will be responsible for funding European missile defense operations over the life cycles of those elements, these costs will not be fully reflected in DOD’s Future Years Defense Program and DOD risks that the services may not be financially prepared to operate and support these elements. We are sending copies of this report to the Secretary of Defense; the Director, Missile Defense Agency; the Under Secretary of Defense for Acquisition, Technology and Logistics; and other interested parties. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (404) 679-1816 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To determine the extent to which the Department of Defense (DOD) has planned for the initial implementation of ballistic missile defenses in Europe, including coordination with key international partners and U.S. stakeholders, we conducted site visits, reviewed key documentation, and interviewed relevant DOD, State Department, and host nation officials. During this review, we focused on the European Interceptor Site in Poland, the European Midcourse Radar Site in the Czech Republic, and the planned mobile forward-based radar to be fielded in a still-to-be- determined location. We conducted site visits and toured the base located outside of Slupsk, Poland, that is the proposed European Interceptor Site and the Brdy military training area, which is the proposed location of the European Radar Site. 
We met with DOD, State Department, and host nation officials to discuss the efforts under way to plan for the sites and examined key documents, including ballistic missile defense agreements with the host nations, memorandums of agreement between key U.S. stakeholders, and Missile Defense Agency (MDA), Army, Air Force, and Army Corps of Engineers documents for planning and site preparation. Using GAO key principles for management, we evaluated the collaboration efforts among the agencies to determine whether DOD, Army, Air Force, and State Department officials followed key practices that can help agencies enhance and sustain their collaborative efforts, and to identify what aspects of planning may be missing that would hinder the implementation of ballistic missile defenses in Europe. For both objectives, we reviewed key legislation related to ballistic missile defenses in Europe and DOD’s overall approach for preparing to support ballistic missile defense. During our review of the ballistic missile defenses in Europe, we contacted agency officials at the Office of the Secretary of Defense; the State Department; the Joint Staff; U.S. Strategic Command; U.S. Northern Command; U.S. European Command; U.S. Army Europe; U.S. Air Force Europe; MDA; the Department of the Army; Army Space and Missile Defense Command; the Department of the Air Force; Air Force Space Command; U.S. Embassy Warsaw; U.S. Embassy Prague; the U.S. Mission to the North Atlantic Treaty Organization; the European Interceptor Site in Poland; and the European Midcourse Radar Site in the Czech Republic. 
To assess whether DOD has estimated the total costs, including military construction and long-term support costs for the ballistic missile defenses in Europe, we examined budget documents, including DOD’s fiscal year 2009 Future Years Defense Program (including budget data for fiscal years 2008-2013), MDA’s fiscal year 2009 military construction cost estimates, and the Army’s military construction cost estimates. We reviewed DOD policies related to estimating military construction costs and key principles for cost estimating as well as our best practices for developing and managing capital program costs. We interviewed DOD officials to determine how the cost estimates were developed. We discussed the status of military construction cost estimates with officials from MDA, the Army, and the Army Corps of Engineers-Europe District. We also interviewed Air Force officials to determine whether military construction cost estimates had been developed for the radar site. In addition, to determine whether DOD has estimated long-term operations and support costs for ballistic missile defenses in Europe, we assessed key documents, such as the Ballistic Missile Defense Life Cycle Management Process memo and the Army’s Ballistic Missile Defense System Overarching Memorandum of Agreement with MDA, to determine the extent to which MDA and the Army have agreed to fund operations and support costs for ballistic missile defenses in Europe and confirmed our understanding with MDA and the Army. We interviewed Air Force officials to determine whether long-term operations and support cost estimates had been developed and the extent to which MDA and the Air Force have agreed to fund operations and support costs for ballistic missile defenses in Europe. We discussed our findings with officials from the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics; MDA; the Army; and the Air Force. 
We conducted this performance audit from October 2008 to August 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Marie A. Mak, Assistant Director; Pat L. Bohan; Tara Copp Connolly; Susan C. Ditto; and Kasea L. Hamar made key contributions to this report. Defense Management: Key Challenges Should be Addressed When Considering Changes to Missile Defense Agency's Roles and Missions. GAO-09-466T. Washington, D.C.: March 26, 2009. Defense Acquisitions: Production and Fielding of Missile Defense Components Continue with Less Testing and Validation Than Planned. GAO-09-338. Washington, D.C.: March 13, 2009. GAO Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs. GAO-09-3SP. Washington, D.C.: March 2009. Missile Defense: Actions Needed to Improve Planning and Cost Estimates for Long-Term Support of Ballistic Missile Defense. GAO-08-1068. Washington, D.C.: September 25, 2008. Ballistic Missile Defense: Actions Needed to Improve the Process for Identifying and Addressing Combatant Command Priorities. GAO-08-740. Washington, D.C.: July 31, 2008. Defense Acquisitions: Progress Made in Fielding Missile Defense, but Program Is Short of Meeting Goals. GAO-08-448. Washington, D.C.: March 14, 2008. Defense Acquisitions: Missile Defense Agency's Flexibility Reduces Transparency of Program Cost. GAO-07-799T. Washington, D.C.: April 30, 2007. Missile Defense: Actions Needed to Improve Information for Supporting Future Key Decisions for Boost and Ascent Phase Elements. GAO-07-430. Washington, D.C.: April 17, 2007.
Defense Acquisitions: Missile Defense Needs a Better Balance between Flexibility and Accountability. GAO-07-727T. Washington, D.C.: April 11, 2007. Defense Acquisitions: Missile Defense Acquisition Strategy Generates Results but Delivers Less at a Higher Cost. GAO-07-387. Washington, D.C.: March 15, 2007. Defense Management: Actions Needed to Improve Operational Planning and Visibility of Costs for Ballistic Missile Defense. GAO-06-473. Washington, D.C.: May 31, 2006. Defense Acquisitions: Missile Defense Agency Fields Initial Capability but Falls Short of Original Goals. GAO-06-327. Washington, D.C.: March 15, 2006. Defense Acquisitions: Actions Needed to Ensure Adequate Funding for Operation and Sustainment of the Ballistic Missile Defense System. GAO-05-817. Washington, D.C.: September 6, 2005. Military Transformation: Actions Needed by DOD to More Clearly Identify New Triad Spending and Develop a Long-term Investment Approach. GAO-05-962R. Washington, D.C.: August 4, 2005. Military Transformation: Actions Needed by DOD to More Clearly Identify New Triad Spending and Develop a Long-term Investment Approach. GAO-05-540. Washington, D.C.: June 30, 2005. Defense Acquisitions: Status of Ballistic Missile Defense Program in 2004. GAO-05-243. Washington, D.C.: March 31, 2005. Future Years Defense Program: Actions Needed to Improve Transparency of DOD’s Projected Resource Needs. GAO-04-514. Washington, D.C.: May 7, 2004. Missile Defense: Actions Are Needed to Enhance Testing and Accountability. GAO-04-409. Washington, D.C.: April 23, 2004. Missile Defense: Actions Being Taken to Address Testing Recommendations, but Updated Assessment Needed. GAO-04-254. Washington, D.C.: February 26, 2004. Missile Defense: Additional Knowledge Needed in Developing System for Intercepting Long-Range Missiles. GAO-03-600. Washington, D.C.: August 21, 2003. Missile Defense: Alternate Approaches to Space Tracking and Surveillance System Need to Be Considered. GAO-03-597. 
Washington, D.C.: May 23, 2003. Missile Defense: Knowledge-Based Practices Are Being Adopted, but Risks Remain. GAO-03-441. Washington, D.C.: April 30, 2003. Missile Defense: Knowledge-Based Decision Making Needed to Reduce Risks in Developing Airborne Laser. GAO-02-631. Washington, D.C.: July 12, 2002. Missile Defense: Review of Results and Limitations of an Early National Missile Defense Flight Test. GAO-02-124. Washington, D.C.: February 28, 2002. Missile Defense: Cost Increases Call for Analysis of How Many New Patriot Missiles to Buy. GAO/NSIAD-00-153. Washington, D.C.: June 29, 2000. Missile Defense: Schedule for Navy Theater Wide Program Should Be Revised to Reduce Risk. GAO/NSIAD-00-121. Washington, D.C.: May 31, 2000.
The Missile Defense Agency (MDA) estimated in 2008 that the potential costs of fielding ballistic missile defenses in Europe would be more than $4 billion through 2015. Planned ballistic missile defenses in Europe are intended to defend the United States, its deployed forces, and its allies against ballistic missile attacks from the Middle East. They are expected to include a missile interceptor site in Poland, a radar site in the Czech Republic, and a mobile radar system in a still-to-be-determined European location. GAO was asked to evaluate the Department of Defense's (DOD) plans for missile defense sites in Europe and address to what extent DOD has (1) planned for the sites' implementation and (2) estimated military construction and long-term operations and support costs. Accordingly, GAO reviewed key legislation; examined policy and guidance from MDA, the Army, the Air Force, and the Army Corps of Engineers; analyzed budget documents and cost estimates; and visited sites in Poland and the Czech Republic. DOD has begun planning for the construction and implementation of the European missile defense sites, including coordinating with international partners and U.S. stakeholders; however, several challenges affecting DOD's implementation of ballistic missile defenses in Europe remain. First, neither Poland nor the Czech Republic has ratified key bilateral agreements with the United States, limiting DOD's ability to finalize key details of the sites, such as how security will be provided. Second, DOD's efforts to establish the roles and responsibilities of key U.S. stakeholders for the European sites remain incomplete because MDA and the services have not yet made important determinations, such as establishing the criteria that must be met before the transfer of the European missile defense sites from MDA to the Army and Air Force. 
Since 2002, DOD has directed MDA to plan for the transfer of missile defense elements, including directing it to coordinate with the services on the resources and personnel needed to provide an effective transition of responsibility. Without clear definitions of the roles and responsibilities of MDA and the services, and without agreement on criteria for transfer, DOD will continue to face uncertainties in determining how the European Interceptor Site and the European Midcourse Radar Site will be sustained over the long term. DOD's cost estimates for military construction and operations and support have limitations and do not provide Congress complete information on the true costs of ballistic missile defenses in Europe. Key principles for cost estimating state that complete cost estimates are important in preparing budget submissions and for assessing the long-term affordability of a program. Further, according to DOD military construction regulations, the Army Corps of Engineers typically certifies that key construction design milestones have been met and verifies military construction cost estimates before the estimates are submitted as budget requests. However, DOD's original military construction estimates in the fiscal year 2009 budget did not include all costs, primarily because MDA submitted the estimates before accomplishing key design milestones and without a review by the Army Corps of Engineers. Consequently, DOD's projected military construction costs for the interceptor and radar sites could potentially increase from DOD's original $837 million estimate to over $1 billion. DOD operations and support cost estimates are also incomplete because they do not include projected costs for base operations that will be managed by the Army and Air Force. Key cost factors that will affect these estimates, such as how security will be provided at the sites, remain undefined.
In addition, MDA and the services have not yet agreed on how the operations and support costs for the interceptor and radar sites will be funded over the long term. As a result, Congress does not have accurate information on the full investment required for ballistic missile defenses in Europe.
GPRAMA requires OMB to make publicly available, on a central governmentwide website, a list of all federal programs identified by agencies. For each program, the agency is to provide to OMB for publication: an identification of how the agency defines the term “program,” consistent with OMB guidance, including program activities that were aggregated, disaggregated, or consolidated to be considered a program by the agency; a description of the purposes of the program and how the program contributes to the agency’s mission and goals; and an identification of funding for the current fiscal year and the previous 2 fiscal years. In addition, GPRAMA requires OMB to issue guidance to ensure that the information provided on the website presents a coherent picture of all federal programs. In January 2012, OMB announced that it would conduct a pilot for implementing the inventory provisions for trade, export, and competitiveness programs. According to OMB’s memorandum, pilot agencies were to identify an inventory of programs and map them to established programmatic and organizational structures as well as the agency’s strategic plan and performance goals. In addition, agencies were to provide program purpose and funding information for each program. The memorandum noted that, based on the pilot, OMB would issue guidance to all federal agencies detailing the approach for developing a governmentwide inventory. In its August 2012 update to its guidance for implementing GPRAMA, OMB included guidance for broader implementation of the inventory requirements. Initial implementation was limited to the 24 agencies that developed agency priority goals (APG) for fiscal years 2012-2013. In addition, the guidance provided a phased approach for these 24 agencies. For the first phase, for publication in May 2013, agencies were to describe their program definition approach, identify their programs, and provide limited funding and performance information. 
OMB subsequently published 24 separate inventory documents on the central governmentwide website required under GPRAMA, which it has implemented as Performance.gov. For the second phase, originally planned for publication in May 2014, the 24 agencies were to update their inventories based on any stakeholder feedback they received and provide additional program-level funding and performance information. OMB's guidance also stated that at that time the inventory information was to be presented in a more dynamic, web-based approach. However, agencies did not publish updated inventories in May 2014. According to OMB staff, plans for updating the inventories are on indefinite hold as OMB re-evaluates next steps for what type of information will be presented in the inventories and how it will be presented based on (1) recent legislative actions that could affect the information required for the inventories; (2) a lack of stakeholder feedback; and (3) insufficient funding for, and technological challenges related to, presenting the inventories in a web-based format on Performance.gov. Regarding legislative action, OMB staff are considering how implementation of the expanded reporting requirements for federal spending information under the recently-enacted Digital Accountability and Transparency Act of 2014 (DATA Act) could be tied to the program inventories. The DATA Act is intended to increase accountability and transparency in federal spending. It requires federal agencies to publicly report, on at least a quarterly basis, information about any funding made available to, or expended by, an agency or a component of the agency. This reporting covers a variety of budget information—budget authority, obligations, unobligated balances, outlays, and any other budgetary resources—at different levels of aggregation and disaggregation: appropriations account, program activities, and object class.
Reporting under the act is required to begin no later than 3 years after enactment (no later than May 9, 2017). In addition, the DATA Act provides for the establishment of government-wide financial data standards to, among other things, produce consistent and comparable data across program activities. The act also holds agencies more accountable for the quality of the information disclosed. As we reported in April 2014, such increased transparency provides opportunities for improving the efficiency and effectiveness of federal spending and improving oversight to prevent and detect fraud, waste, and abuse of federal funds. In addition, the House has passed and the Senate is considering a bill that would amend the program inventory provisions of GPRAMA. On February 25, 2014, the House passed H.R. 1423, the Taxpayers Right-to-Know Act, which would require agencies to identify, for each program, an estimate of the number of clients/beneficiaries served; an estimate, for the previous fiscal year, of the number of full-time federal employees who administer the program; and the number of full-time employees—at organizations that administer or assist in administering the program (e.g., grantees or contractors)—whose salary is paid in part or in full by the federal government. It would also require agencies to provide additional program-level information in their inventories, including total administrative costs and expenditures for services. On May 21, 2014, the Senate Committee on Homeland Security and Governmental Affairs favorably reported to the full Senate S. 2113, the Taxpayers Right-to-Know Act, which would also amend the program inventory provisions of GPRAMA to require agencies to identify additional program-level information in their inventories. GPRAMA requires agencies to identify how they define the term “program,” consistent with guidance provided by the Director of OMB.
As a starting point, OMB's guidance cites, in part, the definition for "program" contained in our September 2005 Glossary of Terms Used in the Federal Budget Process: an organized set of activities directed toward a common purpose or goal that an agency undertakes or proposes to carry out its responsibilities. The use of inconsistent approaches by agencies to define their programs, illustrated in table 2, limits the comparability of programs within agencies as well as governmentwide. A few agencies did not use the same program definition approach across their subcomponents or offices, which limits comparability of their own programs. For example, the Department of State's (State) programs are defined using two separate approaches. State operations programs follow a budget approach while State and U.S. Agency for International Development (USAID) foreign assistance programs follow both a budget and outcome-based approach. In addition, while the U.S. Army Corps of Engineers (Corps) is an agency within the Department of Defense (DOD), the Corps—Civil Works Program and DOD developed separate inventories that took different approaches to define programs. To illustrate how program definition differences limit comparability across agencies for similar programs, we selected two areas of potential fragmentation, overlap, and duplication highlighted in our past annual reports: science, technology, engineering, and mathematics (STEM) education and nuclear nonproliferation. To determine which programs were recognizable in both areas, we compared the lists of programs developed from our past work to those contained in the relevant agencies' inventories. As shown in table 3, of the 158 programs previously listed in our work on STEM education, we were able to identify 51 programs in relevant agency program inventories. Of those 51, 9 were an exact match between how we previously identified a program in our past work and how the agency identified it in its inventory.
For example, as figure 2 illustrates, both our past work on STEM education and the Department of Education’s program inventory identify “Mathematics and Science Partnerships” as a program. We were able to identify the other 42 programs based on related information contained in program descriptions in relevant agency inventories. For four agencies, we were not able to identify any STEM education programs from our past work in their inventories—covering 27 of the 158 programs. In addition, although our past work identified 3 STEM education programs at the Nuclear Regulatory Commission, it was not 1 of the 24 agencies directed by OMB to produce a program inventory for 2013. In the nuclear nonproliferation area, we were able to identify 9 of the 21 programs identified in our past work in relevant agency program inventories, as shown in table 4. None of the nine nuclear nonproliferation programs we were able to identify in the inventories were exact matches. Rather, we were able to identify them as activities under larger programs in the inventories based on the related program descriptions. For example, our past work identified the Department of Homeland Security’s (DHS) “Global Nuclear Detection Architecture” as a program within its Domestic Nuclear Detection Office. DHS’s inventory identifies one program, Domestic Radiological/Nuclear Detection, Forensics, and Prevention Capability, within the Domestic Nuclear Detection Office. The inventory describes Global Nuclear Detection Architecture as an activity within that program, as illustrated in figure 3. For both illustrative examples, we were not able to locate a majority of the programs identified in our prior work in the relevant agency inventories. 
The lack of comparability of similar programs across agencies in part could be due to differences in the way we previously defined “program” and the way each agency defined its programs, such as different levels of aggregation of related activities to constitute a program. The flexibility OMB’s guidance afforded in defining programs allows agencies to align their definitions with multiple purposes, including carrying out their mission and goals; serving beneficiaries, customers, and other target populations; and providing benefits, services, and products. However, it also can lead to differences in how programs are identified. In addition, the lack of comparability may also be the result of agencies not working across organizational boundaries when developing their inventories. We asked agency officials whether they had worked with any external parties when determining their program definition approach or identifying their programs. Officials from all 24 agencies stated that they had not sought input from any entities outside their own agencies, with the exception of OMB. One of OMB’s stated purposes for the inventories is to facilitate coordination among programs that contribute to similar outcomes. If agencies worked together to more consistently define their programs, it could also help them identify where they have programs that contribute to similar outcomes, and therefore opportunities to collaborate. OMB’s guidance directed agencies to work with OMB to determine the appropriate primary approach (or mix of approaches) and level of aggregation/disaggregation to be used to define their programs. 
It also identified characteristics agencies should consider when determining what constitutes a program (see text box). Officials at each of the 24 agencies told us that they worked with OMB—with most stating it was staff in relevant Resource Management Offices as well as the Office of Performance and Personnel Management—to determine (1) the appropriate approach and (2) what constitutes a program. Similarly, staff within OMB's Office of Performance and Personnel Management told us that they worked closely with agencies as they were defining and identifying their programs.
OMB's Characteristics of a Program
Externally recognizable. Agencies should use programs that are or relate to programs or objectives used in Congressional Budget Justifications, statute, are recognized by Congress and stakeholders, or are already publicly known; agencies should use program names that are known outside the agency, and generally not create new names.
Operationally meaningful. Agencies should use programs that are operationally meaningful to agency senior leadership and components of the agency, and programs should represent how the agency is managed and delivers on its mission.
Link to an organizational component(s), such as headquarters, bureau or office. Programs should be operationally meaningful to the agency and agency senior leadership.
Persistent. Generally, programs that persist over time should be included. However, agencies have the flexibility to identify short-term efforts as programs, such as activities related to the Recovery Act.
When we asked officials at each agency if the inventory covered everything the agency does, most responded that the inventory included all agency activities. However, officials at a few agencies told us that their inventories did not always include certain activities. In most cases, activities were excluded from the inventory because they did not meet one of the characteristics of a program described in OMB's guidance.
For instance, the Small Business Administration (SBA) only included programs that were permanent in nature and excluded pilot programs from its inventory because, according to SBA officials, they did not consider pilot programs to meet the “persistent” characteristic. As an example, they cited SBA’s Boots to Business program, which was in a pilot phase when the agency’s 2013 inventory was published. SBA officials told us they expected to add the program in the next inventory update as the pilot phase had been completed. Because OMB’s guidance does not clearly define when a short-term activity has persisted long enough to be considered a program, agencies may be using different criteria for when to include them as programs in their inventories. As a result, agencies’ inventories may not be as comprehensive as desired. Clearer guidance from OMB could lead to a more complete picture of federal programs in the inventory. In other instances, Office of Personnel Management (OPM) officials told us they did not include a voting rights oversight program in their inventory because it was not operationally meaningful, in that it did not easily align with OPM’s mission. Social Security Administration (SSA) officials told us they, in consultation with OMB staff, did not include the Special Benefits for Certain World War II Veterans program in their inventory because it is relatively small in size, in terms of its funding ($8 million of the agency’s total net budget authority of $55.9 billion in fiscal year 2013) and number of beneficiaries (approximately 1,000, based on information in SSA’s fiscal year 2015 congressional budget justification) compared to the agency’s other programs. 
In response to a question during our interview, Department of the Interior (Interior) officials realized they did not include functional management offices, such as the Offices of Policy and the Chief Financial Officer, as part of a set of "programs" in their inventory falling under the Office of the Secretary. Agency officials at OPM, SSA, and Interior told us they intend to account for these programs in future iterations of their program inventories. The federal government is one of the world's largest and most diverse entities, with about $3.5 trillion in outlays in fiscal year 2013, funding an extensive array of programs and operations. In responding to the varied and increasingly complex issues the federal government seeks to address, it faces a number of significant fiscal, management, and governance challenges. To operate as effectively and efficiently as possible and to make difficult decisions to address these challenges, Congress, the administration, and federal managers must have ready access to reliable and complete financial and performance information. However, crucial information on the federal government's programs is decentralized, according to OMB's guidance. The federal program inventory has the potential to improve public understanding about what federal programs currently operate and how programs link to budget, performance, and other information by centralizing this information from disparate sources—such as the President's Budget, Congressional Budget Justifications, USAspending.gov, and the Catalog of Federal Domestic Assistance—in a single location: Performance.gov. Agencies should consider the differing needs of various users—agency top leadership and line managers, OMB, Congress, and stakeholders—to ensure that this information will be both useful and used in decision making. To be useful, this information must meet users' needs for accuracy, completeness, consistency, reliability, and validity, among other factors.
GPRAMA requires agencies to identify for each program in their inventories: the program activities that are aggregated, disaggregated, or consolidated to be considered a program by the agency; and funding for the current and two previous fiscal years (for the inventories published in May 2013, that would have been fiscal years 2013, 2012, and 2011). OMB’s 2012 and 2013 guidance delayed implementation of these provisions until the planned May 2014 update. Although OMB’s guidance delayed this, some agencies provided this information in their inventories. A few agencies provided information about how their program activity (PA) lines align with their programs as defined in their inventory. For example, the National Science Foundation’s inventory notes that programs presented in it are consistent with the PA lines presented in the budget. In addition, the Department of Transportation’s inventory states that its programs generally consist of one to four PA lines. For example, the Pipeline and Hazardous Materials Safety Administration’s “Pipeline Safety” program receives its funding from four PA lines. In addition, OMB’s guidance directed agencies to include funding (budget authority) for agency bureaus or subcomponents for fiscal years 2012, 2013, and 2014 (requested). Since some agencies identified a bureau/subcomponent to constitute a program, the bureau-level funding information in their 2013 inventories also met GPRAMA’s requirement for program-level funding information. For example, the Department of Energy (Energy) defined its Office of Nuclear Energy as a program. Because the office is also considered a subcomponent, Energy provided the related funding information, as illustrated in figure 4. As noted earlier, plans for updating the inventories are on indefinite hold and agencies did not publish updated inventories, with the planned program-level budget data, in May 2014—in part due to the enactment of the DATA Act. 
According to OMB's 2014 update to its guidance, it is working with agencies to merge implementation of the DATA Act and federal program inventory requirements to the extent possible, but has not yet determined its implementation strategy. Agency reporting for both sets of requirements is web-based, which could make it easier to link the two sites or to incorporate information from one into the other. The Senate version of the Taxpayers Right-to-Know Act would require this linkage in lieu of incorporating this budget information on the program inventory site. As OMB stated, one of its purposes in implementing the program inventory requirements is to consolidate information from various sources. By leveraging the DATA Act's requirements, OMB could expand the transparency of budget information made available in the inventories—beyond PA lines and budget authority and at different levels of disaggregation—and help ensure that information is consistent across agencies. In addition, the disaggregated budget information is important since our annual reports on fragmentation, overlap, and duplication have found that it is not always available for the programs and activities covered by our work. However, the DATA Act provides up to 3 years for full implementation of its reporting requirements. Thus, until OMB and agencies are ready to move forward with the implementation of the two laws' requirements, one option to help ensure the inventories remain relevant and useful would be for OMB to direct agencies to update the existing information in the 2013 versions for publication. GPRAMA requires agencies to describe for each program in their inventories the purposes of the program and the program's contribution to the agency's mission and goals. OMB's guidance for the 2013 inventories directed agencies to provide a program description (purpose) and identify supported strategic goals and objectives.
In addition, they were to add language directing readers to Performance.gov to see programs supporting their agency priority goals (APG) and the cross-agency priority (CAP) goals. Agencies provided descriptions of their programs’ purposes. However, some agencies did not consistently show how each program supports agency missions and goals. With the exception of one inventory—the joint State/USAID inventory for foreign assistance programs—all provided a purpose for each program, although State and USAID provided a link to a separate document that provided purpose information for their foreign assistance programs. Only a few inventories (4 of 24) identified how the agencies’ programs contribute to their mission. However, because agency strategic goals and objectives are an outgrowth of the agency’s mission, the linkage between agency programs and those goals can show the program’s support to the mission. A majority of the inventories (17 of 24) identified the agency strategic goals each program supported, while less than half of the inventories (11 of 24) identified the agency strategic objectives each program supported. Finally, most of the inventories (20 out of 24) referred readers to Performance.gov for information about how the agencies’ programs support both CAP goals and APGs. OMB’s review of the inventories did not always identify instances where agencies omitted this information. Providing this information allows agencies to explain how programs contribute to the results the federal government is achieving. Consistently showing the alignment between programs and goals could also improve implementation of other GPRAMA provisions. First, GPRAMA requires OMB and agencies to identify for publication on Performance.gov the various organizations, program activities, and other activities that contribute to the CAP goals and APGs, respectively. In addition, OMB’s guidance directs agencies to identify these contributors for their strategic objectives. 
These provisions are important because they show how agencies are coordinating efforts toward a common outcome. As we have previously reported, uncoordinated program efforts can waste scarce funds, confuse and frustrate program customers, and limit the overall effectiveness of the federal effort. Our past work has found that OMB and agencies did not always identify all relevant contributors to their CAP goals and APGs. Accordingly, in May 2012 and April 2013, we made recommendations to OMB. Table 5 describes these recommendations and related actions taken to date. Second, GPRAMA requires OMB and agencies, on at least a quarterly basis, to review and report on Performance.gov progress towards the CAP goals and APGs, respectively. As part of these reviews, OMB and agencies are to (1) involve relevant stakeholders from contributing organizations and program activities and (2) assess the contributions of the various organizations, program activities, and other activities that support these goals. In addition, OMB’s 2013 guidance for implementing GPRAMA directed agencies, beginning this year (2014), to conduct annual reviews of progress towards their strategic objectives—the outcomes or impacts the agency is intending to achieve. According to OMB’s guidance, the strategic reviews should inform many of the agency’s decision-making processes, as well as decision making by the agency’s stakeholders, such as informing long-term strategy, annual planning, and budget formulation; strengthening collaboration on crosscutting issues; and improving transparency. If successfully implemented in a way that is open, inclusive, and transparent—to Congress, delivery partners (e.g., state and local governments, grantees, non-profit organizations, associations, contractors), and a full range of stakeholders—these reviews could help decision makers assess the relative contributions of various programs that contribute to a given goal. 
Successful reviews could also help decision makers identify and assess the interplay of the public policy tools that are being used, to ensure that those tools are effective and mutually reinforcing, and that results are being efficiently achieved. However, our past work has found that relevant contributors were not always included in these reviews, and, accordingly, we made recommendations to OMB, as described in table 6. OMB’s guidance does not direct agencies to identify in their inventories the performance goals to which each program contributes. Although not specifically required for the program inventory, GPRAMA requires agencies, in their annual performance plans, to identify the various organizations, program activities, and other activities that contribute to each performance goal. As noted in the Senate Committee on Homeland Security and Governmental Affairs report that accompanied GPRAMA, this provision, among others, was intended to help describe the strategies to be used to achieve results and the resources to be applied to those strategies. This information can help Congress understand and assess the relationship between the agency’s resources and results. Having a clear description of the strategies and resources an agency plans to use will allow Congress to assess the likelihood that the agency will achieve its intended results. The act also requires that agency performance goals and related information be available in a web-based format on Performance.gov. According to OMB staff, efforts to expand the site to include this information have not been possible given the available appropriations for the E-Government Fund, which supports Performance.gov and other electronic government requirements and initiatives. 
OMB’s 2012 guidance directed agencies to work with key stakeholders, which can include Congress, state and local governments, third-party service providers, and the public, to validate that their program inventories would be both internally and externally recognizable. According to OMB staff, they want to ensure that the inventories are useful and meaningful to the agencies and their stakeholders. Although officials we spoke with or received written responses from at each of the 24 agencies told us that they involved internal stakeholders—officials and staff from various bureaus/component agencies and offices within their respective department or agency—to validate their programs, most also shared that they saw no benefit to their agencies from developing the inventories. Many told us that they viewed it as a paperwork exercise, repackaging information available elsewhere, with the resulting inventory not useful to their internal agency decision making. In a few instances, officials offered different views. For example, an official at DHS told us that her department’s approach to linking programs to performance information helped the department communicate the results it was achieving. She noted that DHS has maintained an external-facing, mission-oriented program structure since it was created. In addition, an official from SBA told us he thought the program inventories—once fully implemented as envisioned in OMB’s guidance for the 2014 update—could help his agency identify opportunities to enhance coordination and collaboration with other agencies to achieve common outcomes. Officials we spoke with or who provided written responses to our questions also told us that they did not solicit feedback on their inventories from external stakeholders, including Congress. In addition, none of them reported receiving any feedback on their inventories beyond that from OMB staff during development. 
In some instances, agency officials told us that they did not seek stakeholder input because their approach for defining their programs resulted in programs with which their stakeholders were already familiar. For example, officials at several agencies told us that using a budget approach resulted in programs that were recognizable to congressional appropriators. In other instances, agency officials told us they thought OMB staff were collecting feedback on the inventories, but they had not heard any results at the time of our interview. When we asked OMB staff about any feedback they had solicited or received, they told us they had briefed two congressional committees when the inventories were published in May 2013, but that they had not received any formal feedback from Congress or any other stakeholders, with the exception of the preliminary results of our review, which we published as part of a testimony before the Senate Committee on Homeland Security and Governmental Affairs in March 2014. In addition, although Performance.gov has a mechanism for users to provide feedback on the website, OMB staff stated that they had not received any feedback on the program inventories from that venue. Soliciting stakeholder input, and congressional input in particular, on program inventories and the information presented therein would provide OMB and agencies another opportunity to ensure they are presenting useful information for stakeholder decision making. See, for example, GAO-13-518; GAO-13-174; GAO-12-621SP; GAO, Performance Budgeting: PART Focuses Attention on Program Performance, but More Can Be Done to Engage Congress, GAO-06-28 (Washington, D.C.: Oct. 28, 2005); Managing For Results: Enhancing the Usefulness of GPRA Consultations Between the Executive Branch and Congress, GAO/T-GGD-97-56 (Washington, D.C.: Mar. 10, 1997); and Executive Guide: Effectively Implementing the Government Performance and Results Act, GAO/GGD-96-118 (Washington, D.C.: June 1996). 
GPRAMA requires OMB to ensure that the inventory information provided by agencies and published on Performance.gov presents a coherent picture of all federal programs. As our annual reports on fragmentation, overlap, and duplication have stated, the federal program inventory could be a key tool for addressing crosscutting issues. For example, as highlighted in OMB’s guidance, the federal program inventory has the potential to facilitate coordination across programs by making it easier to find programs that may contribute to a shared goal. Moreover, in its memorandum regarding the inventory pilot project for trade, export, and competitiveness programs, OMB noted that duplicative programs make government less effective, waste taxpayer dollars, and make it harder for the public to navigate government services. It states that, in order to continue efforts to reduce duplication and overlap and improve program outcomes through better coordination across agencies, the executive branch must achieve greater transparency into all federal programs. GPRAMA requires OMB to make publicly available a list of all federal programs identified by agencies. While the 24 agencies have begun implementing the inventory requirements, OMB has not defined plans for when this effort will be expanded beyond these agencies. As noted earlier in the report, the lack of such a comprehensive list makes it difficult to determine the scope of the federal government’s involvement in particular areas and, therefore, where action is needed to address crosscutting issues. In addition, OMB has not included all types of federal programs in its plans for the program inventory. OMB’s guidance identified 12 different types of federal programs, defined in table 7, for agencies to assign to each program in their inventories. As noted earlier, GPRAMA requires OMB to publish the federal program inventory on Performance.gov. 
While the inventories published in May 2013 were individual agency documents, OMB’s guidance and staff have stated that eventually the inventory would move to a more dynamic, web-based approach—originally planned for the May 2014 update and now on hold. This web-based approach could make it easier to tag and sort related or similar programs. For instance, OMB’s plans to have agencies tag each of their programs by one or more program types in a future iteration of the inventory would provide a sorting capability for identifying programs of the same type. By providing a sorting mechanism by program type, OMB could help address one of our open recommendations, described in the text box below, by identifying (1) all programs of a given type, and (2) of those programs, any that have developed strategies to effectively overcome measurement challenges. GAO Recommended OMB and the PIC Develop a Detailed Approach to Examine and Address Long-standing Performance Measurement Challenges In our June 2013 report, Managing for Results: Executive Branch Should More Fully Implement the GPRA Modernization Act to Address Pressing Governance Challenges (GAO-13-518), we found that agencies continue to face common, long-standing difficulties in measuring the performance of various types of federal programs and activities—contracts, direct services, grants, regulations, research and development, and tax expenditures. We recommended the Director of OMB work with the PIC to develop a detailed approach to examine these difficulties across agencies, including identifying and sharing any promising practices from agencies that have overcome difficulties in measuring the performance of these program types. In commenting on a draft of the report, OMB staff agreed with this recommendation. As of July 2014, OMB and the PIC have taken some initial steps to address this recommendation. According to OMB staff, this includes efforts related to achieving several of the CAP goals. 
For example, the “Benchmark and Improve Mission-Support Operations” CAP goal involves developing common standards and benchmarks to measure the performance and cost of various agency administrative operations, such as information technology and acquisition management. In addition, PIC staff told us they have taken initial steps to address performance measurement issues in a few areas, including a pilot effort focused on acquisitions (contracts). PIC staff said they plan to expand the model to focus on other types of programs with performance measurement issues, such as grants and regulations. We will continue to monitor progress. As highlighted in OMB’s guidance, the federal program inventory has the potential to facilitate coordination across programs by making it easier to find programs that may contribute to a shared goal or a common outcome. This could also help identify and address instances of fragmentation, overlap, and duplication. In its guidance for the 2014 update before it was put on hold, OMB intended for agencies to link each program to the existing web pages on Performance.gov for strategic goals, strategic objectives, APGs, and CAP goals. According to OMB staff, once they move forward with the next inventory update and move to a web-based presentation, this would allow users to sort programs by the goals to which they contribute. This approach also would allow users to identify programs that contribute to broader themes on Performance.gov. The themes generally align with budget functions from the President’s Budget and include administration of justice; general science, space, and technology; national defense; and transportation, among others. Currently, the themes can be used to sort goals on Performance.gov that contribute to those broad themes. The coordinated efforts of multiple federal agencies, different levels of government, and sectors are generally needed to achieve meaningful results. 
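The tag-and-sort capability described above, in which each program carries one or more program-type tags and links to the goals it supports, can be sketched in a few lines. This is a minimal illustration under assumed data structures; the program names, types, and goals below are hypothetical, not data from the published inventories.

```python
# Hypothetical sketch of a tag-and-sort capability for a web-based program
# inventory. Each program record carries program-type tags and links to the
# goals it supports; filtering on those tags surfaces related programs.

programs = [
    {"name": "Program A", "types": ["grant"], "goals": ["STEM education"]},
    {"name": "Program B", "types": ["grant", "contract"], "goals": ["STEM education"]},
    {"name": "Program C", "types": ["regulation"], "goals": ["nuclear nonproliferation"]},
]

def by_type(inventory, program_type):
    """Return the names of all programs tagged with the given program type."""
    return [p["name"] for p in inventory if program_type in p["types"]]

def by_goal(inventory, goal):
    """Return the names of all programs linked to the given goal."""
    return [p["name"] for p in inventory if goal in p["goals"]]
```

Filtering by a shared goal, for example `by_goal(programs, "STEM education")`, would surface programs across agencies that contribute to the same outcome, which is the kind of crosscutting view the report argues the inventories should enable.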
This is underscored by our annual reports on fragmentation, overlap, and duplication, which to date have identified over 90 areas, each of which involves a patchwork of federal agencies, programs, and activities that attempt to address the same issue. Executive branch and congressional efforts to identify, manage, or resolve instances of fragmentation, overlap, and duplication are hindered by the lack of a comprehensive list of all federal programs. Such a list, along with related budget and performance information, could help decision makers determine the scope of the federal government’s involvement, investment, and performance in a particular area, which in turn could help pinpoint where action is needed to better address or avoid fragmentation, overlap, and duplication. Therefore, effective implementation of GPRAMA’s federal program inventory requirements could provide decision makers with critical information that could be used to better address crosscutting issues, among other purposes. The executive branch has taken some initial steps to develop program inventories with related budget and performance information. In 2012, OMB involved 11 agencies in a pilot program inventory effort for federal trade, export, and competitiveness programs and subsequently developed guidance—based on lessons learned from the pilot—for broader implementation at 24 agencies. In May 2013, the 24 agencies published inventories, providing information about 1,524 programs they collectively identified. Despite these initial efforts, we have identified a number of areas where implementation should be improved. OMB needs to take further actions to effectively meet GPRAMA’s requirement for the inventories to present a coherent picture of all federal programs. First, OMB’s guidance provides agencies with flexibility in how they defined their programs. 
As a result, agencies used different approaches to identify their programs, which, in turn, led to a lack of comparability for similar programs across agencies. This was illustrated by our analysis of agency inventories for STEM education and nuclear nonproliferation programs, in which we could only exactly match a small fraction of the programs covered by our past work on fragmentation, overlap, and duplication in those areas. According to OMB staff, the flexibility provided in guidance is derived from lessons learned from the pilot effort—that agencies are different for valid and legitimate reasons and a one-size-fits-all approach would not work for all agencies. While this may be true, OMB could do more to direct agencies to find common ground on similar programs. One of OMB’s stated purposes for the inventories is to facilitate coordination across programs that contribute to similar outcomes. However, as we discovered through our interviews with agency officials involved with the inventory efforts, none of the agencies sought input from other agencies on how they defined and identified their programs. If agencies worked together to more consistently define their programs, it could also help them identify where they have programs that contribute to similar outcomes, and therefore opportunities to collaborate. Second, OMB’s plans to date do not adequately ensure that the inventories include all federal programs. It has yet to provide firm plans on when implementation of GPRAMA’s inventory requirements will be expanded beyond the current 24 agencies. OMB’s guidance does not provide a clear time frame for how long activities must persist to be considered programs, which may have resulted in agencies excluding certain activities from their inventories. In addition, OMB did not include tax expenditures in the inventory effort. 
Although GPRAMA does not specify that tax expenditures should be included in the program inventory, our work over the past 20 years has shown the need for tax expenditures to be held to the same scrutiny as spending programs, given the sizeable federal investment they represent. The omission of these agencies, programs, and activities severely limits the usefulness of the inventory as a tool for addressing crosscutting issues. In addition, other factors limit the usefulness of the inventory as a source of information for decision makers. To be useful, information must meet various users’ needs for accuracy, completeness, consistency, reliability, and validity, among other factors. Agency officials also told us they did not consult with external stakeholders, including Congress. Although OMB’s 2012 guidance instructed agencies to seek this input, none of them did. Subsequently, OMB has removed this direction from its guidance. By consulting with stakeholders to understand their needs, agencies would better ensure that the information provided in the inventories is useful for stakeholder decision making. Involving congressional stakeholders is of critical importance given Congress’s power to create and fund programs. Our work also identified limitations with the performance information contained within the inventories. Agencies did not consistently identify the strategic goals, strategic objectives, agency priority goals, and cross-agency priority goals that each program supports, as directed by OMB guidance. These omissions were not always identified during OMB’s review of each agency’s inventory. In addition, OMB’s guidance does not direct agencies to identify in their inventories the performance goals to which each program contributes. Although not required for the inventories, GPRAMA requires this type of connection—albeit by performance goal rather than by program—in agency performance plans. 
This information can help Congress and others understand and assess the relationship between the agency’s resources and results. Without it, it will be difficult for Congress to assess the likelihood of the agency’s success in achieving intended results. To ensure the effective implementation of federal program inventory requirements and to make the inventories more useful, we make the following eight recommendations to the Director of the Office of Management and Budget. To better present a more coherent picture of all federal programs, we recommend the Director of OMB take the following five actions:
- direct agencies to collaborate with each other in defining and identifying programs that contribute to common outcomes;
- provide a time frame for what constitutes “persistent over time” that agencies can use as a decision rule for whether to include short-term efforts as programs;
- define plans for when additional agencies will be required to develop program inventories;
- include tax expenditures in the federal program inventory effort by designating tax expenditure as a program type in relevant guidance; and
- develop, in coordination with the Secretary of the Treasury, a tax expenditure inventory that identifies each tax expenditure and provides a description of how the tax expenditure is defined, its purpose, and related performance and budget information. 
To help ensure that the information agencies provide in their inventories is useful to federal decision makers and key stakeholders, and to provide greater transparency and ensure consistency in federal program funding and performance information, we recommend the Director of OMB take the following three actions:
- revise relevant guidance to direct agencies to consult with relevant congressional committees and stakeholders on their program definition approach and identified programs when developing or updating their inventories;
- revise relevant guidance to direct agencies to identify in their inventories the performance goal(s) to which each program contributes; and
- ensure, during OMB reviews of inventories, that agencies consistently identify, as applicable, the strategic goals, strategic objectives, agency priority goals, and cross-agency priority goals each program supports.
We provided a draft of this report for review and comment to the Director of OMB and the 24 agencies that developed program inventories. In oral comments provided on October 15, 2014, staff from OMB’s Office of Performance and Personnel Management stated that they agreed with five of our eight recommendations and would consider how to implement those recommendations as they move forward with merging program inventory implementation with that of the DATA Act. For the other three recommendations—designating tax expenditures as a type of program, developing an inventory of tax expenditures, and directing agencies to identify how their programs contribute to their performance goals—OMB staff neither agreed nor disagreed. OMB staff told us that until they had firmer plans on how program inventory and DATA Act implementation would be merged, they could not determine if implementing these three recommendations would be feasible. We also received technical comments from OMB, the Departments of Energy and State, and the National Aeronautics and Space Administration, which we incorporated as appropriate. 
We are sending copies of this report to the Director of OMB and the heads of the 24 agencies that developed program inventories as well as interested congressional committees and other interested parties. This report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of our report. Key contributors to this report are listed in appendix II. The GPRA Modernization Act of 2010 (GPRAMA) requires us to review implementation of the act at several critical junctures. This report is part of our response to that mandate. Our specific objectives for this report were to (1) assess how OMB and agencies defined and identified the programs contained in the inventories, (2) examine the extent to which the inventories provide useful information for federal decision makers, and (3) examine the extent to which the inventories provide a coherent picture of the scope of federal involvement in particular areas. To address all three objectives, we assessed the implementation of relevant GPRAMA requirements by the Office of Management and Budget (OMB) and the 24 agencies that developed program inventories, which were published on Performance.gov in May 2013. GPRAMA requires OMB to make publicly available, on a central governmentwide website, a list of all federal programs identified by agencies. 
For each program, the agency is to provide to OMB for publication: an identification of how the agency defines the term “program,” consistent with OMB guidance, including program activities that were aggregated, disaggregated, or consolidated to be considered a program by the agency; a description of the purposes of the program and how the program contributes to the agency’s mission and goals; and an identification of funding for the current fiscal year and the previous 2 fiscal years. GPRAMA also requires OMB to issue guidance to ensure that the information provided on the website presents a coherent picture of all federal programs. The 24 agencies directed by OMB to develop program inventories are the Departments of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, the Interior, Justice, Labor, State, Transportation, the Treasury, and Veterans Affairs, as well as the U.S. Agency for International Development, U.S. Army Corps of Engineers—Civil Works Program, Environmental Protection Agency, General Services Administration, National Aeronautics and Space Administration, National Science Foundation, Office of Personnel Management, Small Business Administration, and Social Security Administration. We also assessed program inventory implementation based on relevant OMB guidance (Circular No. A-11, Part 6) and related leading practices from our past work on managing for results, such as those related to involving Congress and stakeholders in performance management activities and ensuring performance information is useful to decision makers. In addition, we developed a structured set of questions and conducted interviews with, or received written responses from, officials involved in developing the inventories at the 24 agencies. 
We also interviewed staff from OMB’s Office of Performance and Personnel Management who led governmentwide implementation of these requirements and developed guidance for agencies. Additionally, to address our first objective, we selected two areas of fragmentation, overlap, and duplication from our annual reports—science, technology, engineering, and mathematics (STEM) education and nuclear nonproliferation programs—and compared the lists of programs developed by our work to those contained in agency inventories. To select these two areas, we first reviewed the existing body of 83 areas of fragmentation, overlap, and duplication published in our 2011, 2012, and 2013 annual reports, and identified and eliminated from selection 23 areas that focused on management functions, since they were unlikely to be captured in the program inventories. For the remaining 60 areas, we identified the number of agencies involved in the area as reported in our past work, whether those agencies had developed and published an inventory in May 2013, and whether the area primarily involved domestic or foreign assistance/national security programs—to ensure diversity among the agencies and programs we reviewed. Finally, we sorted those lists to select the areas with the most agencies involved in them: STEM education for domestic programs and nuclear nonproliferation programs for foreign assistance/national security programs. We compared the lists of programs developed from our past work on STEM education and nuclear nonproliferation to those contained in the relevant agencies’ inventories. We identified possible matches in two different manners. First, we identified instances in which program names exactly matched between the lists developed for our past work and the agency’s inventory. 
Second, we reviewed the program descriptions contained in the relevant inventories to determine if they would lead one to programs/activities related to STEM education or nuclear nonproliferation. For example, if the program description contained language such as “science education” or “technology education,” we included that program as a broader match. In some instances, we were also able to identify programs from our past work listed as activities within the program descriptions in relevant inventories. To ensure consistency in this analysis, the work to identify programs conducted by one analyst was reviewed and verified by another analyst. Because the two selected areas are a non-generalizable sample of instances where fragmentation, overlap, and duplication exist, our results cannot be generalized more broadly. However, the results of our analysis illustrate how various approaches to defining programs can lead to differences in how programs are identified. We conducted this performance audit from August 2013 to October 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the above contact, Lisa M. Pearson (Assistant Director) and Benjamin T. Licht supervised this review and the development of the resulting report. Gerard Burke, Amy Bush, Virginia Chanley, A. Nicole Clowers, Karin Fangman, Heather Krause, and Steven Putansu made significant contributions to this report. Carole J. Cimitile, Dewi Djunaidy, Emily Gruenwald, Donna Miller, Leah Q. Nash, and Erinn L. Sauer also provided key contributions.
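The two-pass matching approach described in this appendix (exact program-name matches first, then keyword matches against program descriptions) can be sketched as follows. The program names, descriptions, and keywords in this example are hypothetical, chosen only to illustrate the method.

```python
# Illustrative sketch of the two-pass matching methodology: pass 1 finds
# exact name matches; pass 2 flags remaining programs whose descriptions
# contain topic keywords (a "broader match"). All data here is hypothetical.

def match_programs(past_work, inventory, keywords):
    """Return (exact_matches, broader_matches) against an inventory."""
    inventory_names = {p["name"] for p in inventory}
    # Pass 1: exact program-name matches between the two lists.
    exact = [name for name in past_work if name in inventory_names]
    # Pass 2: keyword matches in descriptions of programs not already matched.
    broader = [
        p["name"]
        for p in inventory
        if p["name"] not in exact
        and any(k in p["description"].lower() for k in keywords)
    ]
    return exact, broader

past_work = ["Tech Scholars", "Math Partnerships"]
inventory = [
    {"name": "Tech Scholars", "description": "Supports technology education."},
    {"name": "Workforce Training", "description": "Funds science education outreach."},
    {"name": "Rural Broadband", "description": "Expands internet access."},
]
exact, broader = match_programs(
    past_work, inventory, ["science education", "technology education"]
)
```

In this sketch, "Tech Scholars" is an exact match and "Workforce Training" a broader match; "Math Partnerships" goes unmatched, mirroring how differing program definitions left most programs from past work unidentifiable in the inventories.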
GAO's reports over the past 4 years have found more than 90 areas where opportunities exist for the executive branch or Congress to better manage, reduce, or eliminate fragmentation, overlap, and duplication. GPRAMA calls for the creation of a list (inventory) of all federal programs, along with related budget and performance information, which could make it easier to determine the scope of the federal government's involvement in particular areas and, therefore, where action is needed to address crosscutting issues, including instances of fragmentation, overlap, or duplication. GPRAMA requires GAO to periodically review its implementation. This report is part of GAO's response to that mandate and examines (1) how OMB and agencies defined programs, (2) the extent to which inventories provide useful information for decision makers, and (3) the extent to which inventories provide a coherent picture of the scope of federal involvement in particular areas. To address these objectives, GAO analyzed the 24 inventories using GPRAMA requirements, OMB guidance, and related leading practices from GAO's past work, and interviewed OMB staff and agency officials. For the first objective, GAO also selected two areas of fragmentation, overlap, and duplication identified in past GAO work—STEM education and nuclear nonproliferation—and compared the lists of programs developed in its past work to those contained in agency inventories. The two areas were selected based on various factors, including the number of agencies involved and whether those agencies published an inventory. To date, the approach used by the Office of Management and Budget (OMB) and agencies has not led to the inventory of all federal programs, along with related budget and performance information, envisioned in the GPRA Modernization Act of 2010 (GPRAMA). 
In developing the inventory, OMB allowed for significant discretion in several areas—leading to a variety of approaches for defining programs and inconsistencies in the type of information reported. The inconsistent definitions, along with agencies not following an expected consultation process, led to challenges in identifying similar programs in different agencies. As a result of these limitations, the inventory is not a useful tool for decision making. OMB is considering options for enhancing the inventory. GPRAMA requires OMB to publish a list of all federal programs on a central governmentwide website. It also requires OMB to issue guidance, and agencies to identify and provide to OMB for publication information about each program— including how they defined their programs in line with OMB's guidance. OMB is taking an iterative approach to implement these requirements. Based on experiences from a pilot involving 11 agencies in 2012, OMB issued guidance allowing agencies flexibility to define their programs using different approaches, but within a broad definition of what constitutes a program—a set of related activities directed toward a common purpose or goal. According to OMB staff, this was based on a lesson learned from the pilot effort: a one-size-fits-all approach does not work well; agencies and their stakeholders use the term “program” in different ways because agencies achieve their missions through different programmatic approaches. In May 2013, OMB published the inventories developed by 24 agencies, which used various approaches to define and identify 1,524 programs (see table below). Because agencies used different approaches, similar programs across agencies may not be identifiable. To illustrate the shortcomings of the inventory, GAO attempted to locate in relevant agencies' inventories the various science, technology, engineering, and mathematics (STEM) education and nuclear nonproliferation programs identified in GAO's past work. 
GAO was unable to identify in the inventories a large majority of the programs previously identified in its work: 9 of the 179 programs matched exactly and 51 others were identified based on program descriptions. The lack of comparability may also be the result of agencies not working with each other when developing their inventories. One of OMB's stated purposes for the inventories is to facilitate coordination among programs that contribute to similar outcomes. However, agencies did not work together to consistently define their programs. Increased coordination could help agencies identify where they have programs that contribute to similar goals and thus opportunities to collaborate in achieving desired outcomes. The 24 inventories developed by agencies in 2013 did not provide the programs and related budget and performance information required by GPRAMA. This limits the usefulness of the inventories to various decision makers, including Congress and stakeholders. To be useful, the inventories must meet various users' needs for accuracy, completeness, consistency, reliability, and validity, among other factors. Specific steps OMB and agencies could take to ensure the inventories are more useful to decision makers include: Presenting program-level budget information. Although GPRAMA requires agencies to identify program-level funding, OMB did not direct agencies to include this information in their 2013 inventories—it was to be part of a planned May 2014 update. However, OMB subsequently put the 2014 update on hold to determine how to merge these requirements with implementation of the federal spending information to be reported under the Digital Accountability and Transparency Act of 2014 (DATA Act). Reporting for both laws is web-based, which could make it easier to link the two sites or to incorporate information from one into the other.
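A linkage between the two web-based reporting efforts could work roughly as sketched below: spending records carry the same program identifier used by the program inventory, so budget information can be rolled up per program. This is a hypothetical illustration only; the agencies, program names, identifiers, and dollar amounts are invented and do not reflect any actual reporting scheme.

```python
# Hypothetical sketch: linking DATA Act-style spending records to a
# program inventory via a shared program identifier. All IDs, program
# names, and amounts are invented for illustration.

inventory = {
    "ED-001": {"agency": "Education", "program": "Example STEM Grants"},
    "NSF-042": {"agency": "NSF", "program": "Example Graduate Fellowships"},
}

spending = [
    {"program_id": "ED-001", "obligations": 12_500_000},
    {"program_id": "ED-001", "obligations": 3_000_000},
    {"program_id": "NSF-042", "obligations": 8_750_000},
]

def spending_by_program(inventory, spending):
    """Aggregate obligations per inventory entry via the shared key."""
    totals = {pid: 0 for pid in inventory}
    for record in spending:
        totals[record["program_id"]] += record["obligations"]
    return {inventory[pid]["program"]: total for pid, total in totals.items()}

print(spending_by_program(inventory, spending))
# {'Example STEM Grants': 15500000, 'Example Graduate Fellowships': 8750000}
```

The sketch works only because both data sets share a common identifier, which is precisely the kind of consistency the inconsistently defined inventories lacked.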
The Senate version of a bill, the Taxpayers Right-to-Know Act of 2014, which is currently under consideration, would require this linkage in lieu of incorporating budget information on the program inventory site. Providing complete performance information. GPRAMA and OMB's guidance require agencies to describe each program's contribution to the agency's goals. However, there are instances where agencies omitted that information. For example, agencies did not consistently show how some or all of their programs supported strategic goals (in 7 of 24 inventories) or strategic objectives (in 13 of 24 inventories). Ensuring agencies illustrate this alignment would better explain how programs support the results agencies are achieving. Consulting with stakeholders. None of the agencies sought input on their inventories from external stakeholders, such as Congress, state and local governments, and third-party service providers, although OMB's 2012 guidance instructed agencies to do so. In several instances, agency officials stated that they thought OMB was soliciting feedback on all inventories. By consulting with stakeholders to understand their needs, agencies would better ensure that the information provided in the inventories is useful for stakeholder decision making. Other features of OMB's approach further limit the program inventories' ability to present a coherent picture of all federal programs, as required by GPRAMA. First, to date, OMB has only included 24 agencies in this effort. Second, while not specified by GPRAMA, tax expenditures were not included in the 2013 inventory. Tax expenditures, which represent a reduction in a taxpayer's tax liability through credits, deductions, or other means, resulted in $1.1 trillion in forgone revenue in fiscal year 2013, nearly the same amount as discretionary spending that year.
By including tax expenditures, OMB could help ensure that agencies are properly identifying their contributions to the achievement of agency goals, as OMB's guidance directs them to do. Finally, OMB's guidance and staff have stated that eventually the inventory will move to a more dynamic, web-based presentation. This could make it easier to tag and sort related or similar programs, for instance, by type of program or contribution to the same or similar goals. Covering additional agencies and tax expenditures in the federal program inventory, along with web-based sorting capabilities, would help decision makers determine the scope of the federal government's involvement in a particular area, and therefore where action is needed to better address fragmentation, overlap, or duplication. GAO makes several recommendations to OMB. To present a more coherent picture of all federal programs, GAO recommends OMB revise its guidance to direct agencies to collaborate when defining and identifying programs that contribute to a common outcome, define plans for expanding implementation beyond the current 24 agencies, and include tax expenditures in the federal program inventory. In addition, to improve the usefulness of the information in inventories GAO recommends OMB ensure agencies consistently identify the various goals each program supports, and consult with stakeholders when developing or updating their inventories. OMB staff generally agreed with these recommendations, although they neither agreed nor disagreed with three of GAO's recommendations related to including tax expenditures and additional performance information. OMB staff stated that until they had firmer plans on how program inventory and DATA Act implementation would be merged, they could not determine if implementing those recommendations would be feasible.
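The tag-and-sort capability described above can be sketched in miniature: inventory entries tagged by goal area so that related programs across agencies can be pulled up together. Everything here is invented for illustration; no actual agencies, programs, or tagging scheme is implied.

```python
# A minimal sketch of tagging and sorting related programs, as described
# above. All agencies, program names, and tags are hypothetical.

inventory_entries = [
    {"agency": "Agency A", "program": "Example Teacher Institutes",
     "tags": {"stem education"}},
    {"agency": "Agency B", "program": "Example Research Fellowships",
     "tags": {"stem education", "research"}},
    {"agency": "Agency C", "program": "Example Export Assistance",
     "tags": {"trade"}},
]

def programs_with_tag(entries, tag):
    """Return (agency, program) pairs for every entry carrying the tag,
    sorted so related programs group together."""
    return sorted((e["agency"], e["program"])
                  for e in entries if tag in e["tags"])

print(programs_with_tag(inventory_entries, "stem education"))
# [('Agency A', 'Example Teacher Institutes'), ('Agency B', 'Example Research Fellowships')]
```

Even this toy version shows why tagging depends on agencies applying consistent tags; programs tagged differently for the same goal would not surface together.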
The DATA Act requires OMB and Treasury to establish government-wide financial data standards for the specific items to be reported under the act. These specific items are generally referred to as “data elements.” The standards for these data elements consist of two distinct but related components, as described in the text box: definitions, which describe what is included in the element with the aim of ensuring that information will be consistent and comparable; and technical specifications on the format, structure, tagging, and transmission of each data element. OMB and Treasury have developed a data exchange, also known as a technical schema, which is intended to provide a comprehensive view of the data definitions and their relationships to one another. OMB and Treasury have proposed standardizing 57 data elements for reporting under the act. They released 15 elements in May 2015, a year after the passage of the act, and have since released 12 more. Eight of these were new elements required under the DATA Act; the balance of the first 15 data elements were required under the Federal Funding Accountability and Transparency Act of 2006 (FFATA). Figure 1 provides a list of these data elements and their roll-out schedule. Officials told us that they expect to complete the process by the end of the summer. The DATA Act requires the establishment of standards that produce consistent and comparable data across programs, agencies, and time. We reviewed the first set of 15 data standards finalized by OMB and Treasury in May 2015. We found that most of the elements adhere to the definitions used in widely accepted government standards such as OMB Circular A-11 and the Census Bureau’s North American Industry Classification System.
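The two components of a data element standard (a plain-language definition plus a technical specification) can be sketched as follows. The element name, definition text, and format constraints shown are hypothetical, not actual OMB or Treasury specifications.

```python
# A hypothetical data element standard: a definition for consistency and
# comparability, plus a technical specification (type, length, format).
from dataclasses import dataclass
import re

@dataclass(frozen=True)
class DataElementStandard:
    name: str
    definition: str   # what the element means
    data_type: str    # technical spec: expected data type
    max_length: int   # technical spec: structural constraint
    pattern: str      # technical spec: format for tagging/transmission

    def conforms(self, value: str) -> bool:
        """Check a reported value against the technical specification."""
        return (len(value) <= self.max_length
                and re.fullmatch(self.pattern, value) is not None)

# Illustrative element loosely modeled on an award identifier.
award_id = DataElementStandard(
    name="AwardID",
    definition="The unique identifier assigned to a federal award.",
    data_type="string",
    max_length=20,
    pattern=r"[A-Z0-9-]+",
)

print(award_id.conforms("DE-AC02-05CH11231"))    # True: matches the spec
print(award_id.conforms("award id with spaces")) # False: violates the format
```

A specification like this is what lets disparate systems reject malformed values, for instance alphabetic characters in a field meant to carry only a fixed code.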
For example, as required by the DATA Act, OMB and Treasury provided a standard for “program activity” and finalized the definition as “a specific activity or project listed in the program and financing schedules of the annual budget of the United States Government.” Program activities are a long-standing reporting structure in the federal budget and are intended to provide a meaningful representation of the operations funded by a specific budget account. Therefore, program activities can be mission or program focused. For example, the Federal Emergency Management Agency’s program activities include “Response,” “Recovery,” and “Mitigation,” and the Environmental Protection Agency’s program activities include “Clean and Safe Water” and “Healthy Communities and Ecosystems.” Program activities can also be organized by type of personnel such as Officers, Enlisted, and Cadets, in the Army’s Military Personnel Account, or by organizational unit such as the National Cancer Institute, and National Heart, Lung, and Blood Institute in the National Institutes of Health. As the examples illustrate, OMB and Treasury will need to build on the program activity structure and provide agencies with guidance if they are to meet the stated purpose of the DATA Act to “link federal contract, loan, and grant spending information to federal programs to enable taxpayers and policy makers to track federal spending more effectively.” To underscore the differences between program activities and programs, our September 2005 Glossary of Terms Used in the Federal Budget Process defines a program as “an organized set of activities directed toward a common purpose or goal that an agency undertakes or proposes to carry out its responsibilities.” The GPRA Modernization Act of 2010 (GPRAMA), among other things, requires OMB to make publicly available, on a central government-wide website, a list of all federal programs identified by agencies. 
For each program, the agency is to provide to OMB for publication an identification of how the agency defines the term “program,” consistent with OMB guidance, including program activities that were aggregated, disaggregated, or consolidated to be considered a program by the agency; a description of the purposes of the program and how the program contributes to the agency’s mission and goals; and an identification of funding for the current fiscal year and the previous 2 fiscal years. Effective implementation of both the DATA Act and GPRAMA’s program inventory provisions, especially the ability to crosswalk spending data to individual programs, could provide vital information to assist federal decision makers in addressing significant challenges the government faces. As our annual reports on fragmentation, overlap, and duplication have highlighted, creating a comprehensive list of federal programs along with related funding and performance information is critical for identifying potential fragmentation, overlap, or duplication among federal programs or activities. The lack of such a list makes it difficult to determine the scope of the federal government’s involvement in particular areas and the results it is achieving, and therefore, where action is needed to eliminate, reduce, or better manage fragmentation, overlap, or duplication. Until these steps are taken and linked to the appropriate program activity data element, OMB and Treasury will be unable to provide a complete picture of spending by federal programs as required under the act. Our recent work reviewing implementation of GPRAMA identified a number of challenges related to executive branch efforts to identify and define federal programs. OMB staff explained that a one-size-fits-all approach does not work well; agencies and their stakeholders use the term “program” in different ways because agencies achieve their missions through different programmatic approaches. 
Therefore, OMB issued guidance allowing agencies flexibility to define their programs using different approaches, but within a broad definition of what constitutes a program—a set of related activities directed toward a common purpose or goal. Not surprisingly, our October 2014 report reviewing implementation of GPRAMA’s program inventory requirements showed that agencies did indeed use different approaches to define their programs. We reported that these differences limited the comparability of programs within and across agencies. We made related recommendations in our October 2014 report aimed at improving the completeness and comparability of the program inventory. In commenting on that report, OMB staff generally agreed with those recommendations. According to OMB staff, as of June 2015 they have not taken any actions to address these recommendations, because implementation of the program inventory requirements remains on hold as OMB determines how best to merge that effort with implementation of the DATA Act. One approach could be for OMB to explore ways to improve the comparability of program data by using tagging or similar approaches that allow users to search by key words or terms and combine elements based on the user’s interests and needs. This merging could help ensure consistency in the reporting of related program-level spending information. As a result, we recommend the following: To ensure that federal program spending data are provided to the public in a transparent, useful, and timely manner, we recommend that the Director of OMB accelerate efforts to determine how best to merge DATA Act purposes and requirements with the GPRAMA requirement to produce a federal program inventory. The DATA Act requires Treasury, in consultation with OMB, to publish a report of funds made available to, or expended by, federal agencies and their components on USAspending.gov or an alternative system. 
Given that OMB has not yet provided an example of the form and content of the envisioned financial reporting under the DATA Act, it is difficult to determine at this point whether additional data standards and elements are needed. As Treasury and OMB continue establishing the DATA Act data standards and elements, linking them to established financial accounting and reporting processes will be important in helping ensure consistency and comparability of the information reported and could provide a means for assessing the quality of new financial information reported under the DATA Act by comparing it with information in audited agency financial statements. For example, certain data standards and elements used by agencies in reporting financial data in their audited Statement of Budgetary Resources may also be used to report certain agency budgetary data under the DATA Act. In addition, the DATA Act requires Treasury to include certain financial information similar to that reported in the Schedule of Spending, which is included in agency annual financial reports, as required by OMB Circular No. A-136, Financial Reporting Requirements. Therefore, established data standards and elements used by agencies in preparing this unaudited schedule could be used to report certain information under the DATA Act. Further, leveraging existing and establishing new controls over the data standards and elements—financial and non-financial—used in reporting under the DATA Act could help ensure data reliability. The DATA Act also requires OMB and Treasury to incorporate widely accepted common data standards and elements, to the extent reasonable and practicable, such as those developed and maintained by international standards-setting bodies and accounting standards organizations, in a machine-readable format.
As OMB and Treasury move forward with establishing data standards, given their limited time and resources, they could benefit from leveraging existing international standards for digital reporting of financial, performance, risk, or compliance information. For example, the International Organization for Standardization (ISO) has developed data standards such as one that describes an internationally accepted way to represent dates and times, which may help address the DATA Act requirement to establish a standard method of conveying a reporting period. The ISO also has a standard for a digital object identification system, which may help address the DATA Act requirements to have a unique identifier and use a widely accepted, nonproprietary, searchable, platform-independent, machine-readable format. The use of such standards helps reduce uncertainty and confusion with organizations interpreting standards and reporting differently, which could lead to inconsistent results and unreliable data. Treasury’s draft technical schema is intended to standardize the way financial assistance, contract, and loan award data, as well as other financial data, will be collected and reported under the DATA Act. Toward that end, the technical schema describes, among other things, the standard format for data elements including their description, type, and length. We reviewed version 0.2 of the technical schema that was publicly released in May 2015. Treasury officials said that they are testing this schema and are continually revising it based on considerations of these tests as well as feedback they receive from stakeholders. In light of this, we shared the following potential issues with Treasury. Treasury developed a subset of the schema based on the U.S. Standard General Ledger, which provides a uniform chart of accounts and technical guidance for standardizing federal agency accounting of financial activity.
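The consistency that an internationally accepted date representation provides can be illustrated with a short standard-library sketch using ISO 8601 calendar dates (YYYY-MM-DD). The field names "start" and "end" are illustrative, not actual DATA Act element names.

```python
# A minimal sketch, assuming ISO 8601 YYYY-MM-DD dates, of conveying a
# reporting period unambiguously. Field names are hypothetical.
from datetime import date

def parse_reporting_period(period):
    """Parse ISO 8601 start and end dates; the fixed format avoids the
    ambiguity of, say, 03/04/2015 (March 4 or April 3?)."""
    start = date.fromisoformat(period["start"])
    end = date.fromisoformat(period["end"])
    if end < start:
        raise ValueError("reporting period ends before it starts")
    return start, end

start, end = parse_reporting_period({"start": "2015-04-01", "end": "2015-06-30"})
print((end - start).days)  # 90
```

Because every reporter uses the same format, values can be compared and validated mechanically rather than interpreted differently by each organization.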
We found that some of the data elements, as defined in the most recent draft version available for us to review, could allow for inconsistent information to be entered. For example, alphabetic characters could be entered into a data field that should only accept numeric data. This could, in turn, affect the proper reporting, reliability, and comparability of submitted data. Further, OMB and Treasury intended to fulfill a portion of their requirements by leveraging existing agency reporting. Going forward, the technical schema will need to describe enhancements or changes to current financial reporting. We also noted that the schema does not currently identify the computer markup language (i.e., standards for annotating or tagging information so that it can be transmitted over the Internet and readily interpreted by disparate computer systems) that agencies can use for communicating financial data standards. Treasury officials said they plan to address this issue in a forthcoming version of the schema, which they estimated would be publicly released by the end of the summer. We will continue to review additional versions of the schema and will share our views with Treasury and you. The DATA Act designates OMB and Treasury to lead government-wide implementation efforts. Toward that end, OMB and Treasury have established a governance framework that includes structures for both project management and data governance. At the top of this framework is an executive steering committee, which is responsible for setting overarching policy guidance and making key policy decisions affecting government-wide implementation of the act. The executive steering committee consists of two senior administration individuals: OMB’s Controller and Treasury’s Fiscal Assistant Secretary. The executive steering committee is supported by the Interagency Advisory Committee (IAC), which is responsible for providing recommendations to the steering committee related to DATA Act implementation. 
The IAC includes the chairs of various federal government-wide councils as well as other agency officials. In addition, the IAC members are responsible for updating their respective agencies and for providing leadership in implementing DATA Act requirements. As part of their plans for agency implementation, OMB and Treasury have asked federal agencies to identify a Senior Accountable Official and organize an agency-wide team to coordinate agency-level implementation activities. OMB and Treasury have made progress in developing a governance structure for government-wide implementation. However, a recent Treasury Office of the Inspector General (OIG) report raised a number of concerns with Treasury’s project management practices that the OIG believes could hinder the effective implementation of the act if not addressed. Specifically, the Treasury OIG found that project management documents designed to track the implementation of significant DATA Act workstreams lacked several key attributes—such as project planning tools, progress metrics, and collaboration documentation—called for by project management best practices. Due to the complexities involved, OMB and Treasury are using a mix of both agile and traditional project management approaches to implement the DATA Act. However, the Treasury OIG found that project planning documents did not describe the different approaches being used for each workstream. The Treasury OIG recommended that Treasury’s Fiscal Assistant Secretary strengthen project management over the DATA Act’s implementation by defining the project management methodology being used for each significant workstream and ensuring that project management artifacts appropriate to those methodologies are adopted and maintained.
Treasury agreed with the OIG findings and stated that it was taking corrective action in response, including a commitment to implementing a recognized agile development approach in an appropriate and disciplined manner for each workstream and improving documentation to identify when the agile approach is being used. Treasury OIG officials told us that they are continuing to monitor OMB and Treasury project management efforts and will report their audit findings on an ongoing basis. In coordination with the Treasury OIG, we will be monitoring OMB and Treasury’s governance process as part of our ongoing work as well. OMB and Treasury have taken steps to establish a governance process for developing data standards. However, more effort is needed to build a data governance structure that not only addresses the initial development of the data standards but also provides a framework for adjudicating revisions, enforcing the standards, and maintaining the integrity of standards over time. One of the key responsibilities of the IAC is to provide support for the development of data standards. In this capacity, the IAC is responsible for developing white paper proposals and building consensus within members’ respective communities for new standardized data elements that align with existing business practices across multiple reporting communities (e.g., grants, procurement, and financial reporting) that will be using the standards. OMB and Treasury officials told us that while they have established a process to develop data standards through the IAC, they have not yet instituted procedures for maintaining the integrity of the standards over time. According to these officials, they are taking an iterative approach to developing additional procedures for data governance, similar to their overall approach for managing the implementation of the act. 
Industry and technology councils, and domestic and international standards-setting organizations, endorse the establishment and use of governance structures to oversee the development and implementation of standards. While there are a number of governance models, many of them promote a set of common principles that includes clear policies and procedures for broad-based participation from a cross-section of stakeholders for managing the standard-setting process and for controlling the integrity of established standards. Standards-setting organizations, such as the Software Engineering Institute (SEI), define data governance as a set of institutionalized policies and processes that can help ensure the integrity of data standards over time. According to these entities, a data governance structure should have a defined focus, such as monitoring policies and standards, monitoring and reporting on data quality, and ensuring the consistency of the standards across potentially different data definitions. These organizations also suggest that for a data governance structure to be successful, an organization needs clear processes and methods to govern the data that can be standardized, documented, and repeatable. Ideally, this structure could include processes for evaluating, coordinating, approving, and implementing changes in standards from the initial concept through design, implementation, testing, and release; maintaining established standards; and gaining a reasonable degree of agreement from stakeholders. Going forward, in the absence of a clear set of institutionalized policies and processes for developing standards and for adjudicating necessary changes, the ability to sustain progress and maintain the integrity of established data standards may be jeopardized as priorities and data standards shift over time. 
As a result, we are recommending the following action: To ensure that the integrity of data standards is maintained over time, we recommend that the Director of OMB, in collaboration with the Secretary of the Treasury, establish a set of clear policies and processes for developing and maintaining data standards that are consistent with leading practices for data governance. One component of good data governance involves establishing a process for consulting with and obtaining agreement from stakeholders. In fact, the DATA Act requires OMB and Treasury to consult with public and private stakeholders when establishing data standards. Recognizing the importance of engaging on data standards, OMB and Treasury have taken the following steps: convened a town hall meeting on data transparency in late September 2014 to, in part, allow stakeholders to share their views and recommendations; published a Federal Register notice seeking public comment on the establishment of financial data standards by November 25, 2014; presented periodic updates on the status of DATA Act implementation to federal and non-federal stakeholders at meetings and conferences; solicited public comment on data standards using GitHub, an online collaboration space, including the posing of general questions in December 2014 and subsequently seeking public comment on proposed data standards beginning in March 2015; and collaborated with federal agencies on the development of data standards and the technical schema through MAX.gov, an OMB- supported website. Such efforts by OMB and Treasury have provided valuable opportunities for non-federal stakeholders to provide input into the development of data standards. However, more can be done to engage in meaningful two-way dialogue with these stakeholders. 
Creating such a dialogue and an “open exchange of ideas between federal and non-federal stakeholders” is identified as an explicit goal of the Federal Spending Transparency GitHub site established by OMB and Treasury. Moreover, the site’s landing page links such interactive communication with the successful development of data standards. However, we found only a few examples that OMB and Treasury have engaged in such a dialogue or have otherwise substantively responded to stakeholder comments on the site. When we asked OMB and Treasury officials how public comments from GitHub were considered when finalizing the first 15 data standards issued in May 2015, they said that none of the comments warranted incorporation and confirmed that substantive replies to stakeholder comments were not posted. Our work examining the implementation of the Recovery Act underscored the importance of obtaining stakeholder input as guidance is developed to address potential reporting challenges. We found that during implementation of the Recovery Act, OMB and other federal officials listened to recipients’ concerns and changed guidance in response, which helped recipients meet reporting requirements. Some stakeholders we spoke with cited the process OMB followed in developing Recovery Act guidance as an example of effective two-way communication; however, these stakeholders indicated that they have not experienced this same level of outreach and communication with OMB and Treasury thus far with DATA Act implementation. Without similar outreach for OMB and Treasury’s current initiatives there is the possibility that reporting challenges may be neglected or not fully understood and therefore not addressed, potentially impairing the data’s accuracy and completeness or increasing reporting burden. As DATA Act implementation progresses, establishing an effective two- way dialogue will likely become even more important. 
As they primarily pertain to federal budget reporting activities, the first set of 15 data elements finalized in May 2015 may not have been viewed as being directly applicable to some non-federal stakeholders including state and local governments. However, future data elements to be issued by OMB and Treasury are directly related to federal grants and contracts. These may be perceived as being more relevant to states, localities, businesses, nonprofits, and other non-federal stakeholders, resulting in increased questions and desire for input and involvement from these communities. Additional policies and procedures that address the whole lifecycle of standards development will be needed to ensure the integrity of government-wide financial data standards is maintained over time. These policies and procedures could also provide an opportunity for OMB and Treasury to establish effective two-way communication with a broad representation of federal fund recipients to ensure all interested parties’ concerns are addressed as this important work continues. As a result we are making the following recommendation: To ensure that interested parties’ concerns are addressed as implementation efforts continue, we recommend that the Director of OMB, in collaboration with the Secretary of the Treasury, build on existing efforts and put in place policies and procedures to foster ongoing and effective two-way dialogue with stakeholders including timely and substantive responses to feedback received on the Federal Spending Transparency GitHub website. The DATA Act authorizes Treasury to establish a data analysis center or to expand an existing service, to provide data, analytic tools, and data management techniques for preventing or reducing improper payments and improving the efficiency and transparency in federal spending. 
Should Treasury elect to establish a data analysis center or expand an existing service, all assets of the Recovery Accountability and Transparency Board (Recovery Board) that support the operations and activities of the Recovery Operations Center (ROC)—a central data analytics service to support fraud detection and prevention and assist the oversight communities in their efforts to prevent fraud, waste, and abuse—will be transferred to Treasury by September 30, 2015, the day that the authority for the Recovery Board expires. Treasury officials have told us that the department does not plan to transfer any of the ROC’s assets, and, as discussed below, outlined the challenges that led to this decision. As a consequence, some OIGs that were the primary users of the ROC will need to develop or replace the existing capabilities for certain audit and investigative services, or lose them. The Recovery Act created the Recovery Board, made up of inspectors general, to promote accountability by overseeing recovery-related funds and transparency by providing the public with easily accessible information. To accomplish this goal, the Recovery Board established the ROC to provide predictive analysis capability to help oversight entities focus limited government oversight resources based on risk indicators such as a program previously identified as high-risk, high-dollar-value projects, past criminal history of key parties involved in a project, and tips from citizens; and in-depth fraud analysis capability to identify non-obvious relations between legal entities using public information about companies. After its initial mandate to oversee Recovery Act funds, subsequent legislation expanded the Recovery Board’s mandate to include oversight of all federal spending as well as funds appropriated for purposes related to the impact of Hurricane Sandy.
In addition to expanding its authority, the legislation also extended the termination date of the Recovery Board from September 30, 2013 to September 30, 2015. The ROC serves as an independent central repository of tools, methods, and expertise for identifying and mitigating fraud, waste, and mismanagement of federal funds. The Recovery Board’s assets supporting the ROC include human capital, hardware, data sets, and software. (See figure 2.) The ROC developed specialized data analytic capabilities that members of the federal oversight community could leverage by submitting a request for analysis. For instance: The Appalachian Regional Commission (ARC) OIG used the ROC’s capabilities to analyze text from A-133 single audit data to search for indications of risk and identify the highest risk grantees for review. This approach allowed the ARC OIG to identify 30 to 40 grantees out of approximately 400 grants per year based on risk rather than selecting grantees randomly based on geography and grant type. The Environmental Protection Agency (EPA) OIG used the ROC’s data visualizations of a link analysis, which identifies relationships among entities involved in activities such as a fraud ring or an effort to commit collusion, to present to juries. An EPA OIG official said that the visualization of these relationships made it easier for juries to understand how entities had collaborated in wrongdoing. Since 2012, after its mandate was expanded to cover all federal funds, over 50 federal OIGs and agencies have asked the ROC for help. Based on requests for analysis compiled in the Recovery Board’s Annual Reports, the ROC researched roughly 1.7 million entities associated with $36.4 billion in federal funds during fiscal years 2013 and 2014. The largest single user of ROC assistance was the ARC OIG in 2012 and the Department of Homeland Security OIG in fiscal years 2013 and 2014.
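The general shape of risk-based selection, as opposed to random sampling, can be sketched simply: score each grantee on risk indicators, then review only the highest scorers. This is a toy illustration of the technique, not the ROC's actual models; the grantees, indicators, and weights are all invented.

```python
# Hypothetical sketch of risk-based selection for review: score grantees
# on simple indicators and take the top n, rather than sampling randomly.
# All names, indicators, and weights are invented for illustration.

grantees = [
    {"name": "Grantee A", "prior_findings": 3, "high_dollar": True,  "tips": 1},
    {"name": "Grantee B", "prior_findings": 0, "high_dollar": False, "tips": 0},
    {"name": "Grantee C", "prior_findings": 1, "high_dollar": True,  "tips": 0},
    {"name": "Grantee D", "prior_findings": 0, "high_dollar": True,  "tips": 2},
]

def risk_score(g):
    """Weighted sum over hypothetical risk indicators."""
    return 2 * g["prior_findings"] + (3 if g["high_dollar"] else 0) + 4 * g["tips"]

def select_for_review(grantees, n):
    """Return the names of the n highest-risk grantees."""
    return [g["name"] for g in sorted(grantees, key=risk_score, reverse=True)[:n]]

print(select_for_review(grantees, 2))  # ['Grantee A', 'Grantee D']
```

The value of a centralized service like the ROC lay in applying far richer indicators and data sets than this, which is why small OIGs could not easily replicate the capability on their own.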
To facilitate a potential transition, Recovery Board officials provided a transition plan to Treasury in late spring of 2014. The plan provided an overview of the ROC's assets and presented possible scenarios for a transition and the steps needed, including estimated time frames, assuming a transfer by September 30, 2015. In May 2015, Treasury officials told us that the agency does not plan to transfer any of the ROC's assets, identifying the following challenges to assuming the ROC's assets: Hardware. Although Treasury officials viewed hardware as being feasible to transfer, in their assessment it was not cost effective to do so because the ROC's hardware is aging, lessening the value of these assets. Human capital. The agency would have to use the competitive hiring process to hire key ROC employees, which can be time consuming. In addition, because some ROC staff were term-limited hires or contractors, a competitive hiring process would not guarantee that ROC staff would ultimately be selected for employment. Data sets. The ROC obtained access to federal datasets through memoranda of understanding, which are not transferrable and therefore would need to be negotiated. Commercially procured data sets also are not transferrable but would instead have to go through a procurement process. Software contracts. Because the Recovery Board extended its software contracts on a sole-source basis when it was re-authorized for 2 additional years, Treasury would need to use a competitive procurement process to obtain these data analytic tools. Because of these challenges, Treasury focused on facilitating information sharing through meetings between the ROC and Treasury's Do Not Pay (DNP) initiative, which assists agencies in preventing improper payments. Treasury officials stated that the expertise developed at the ROC was its most valuable asset, so officials focused on meeting with the ROC staff to discuss best practices and share knowledge with the DNP staff. 
In addition, Treasury officials noted that they had hired the former Assistant Director for Data and Performance Metrics at the Recovery Board as the Director of Outreach and Business Process for DNP. Officials further noted that the Director's experience at the ROC included leveraging data to identify high-risk entities and conducting outreach to the ROC's user community—skills that Treasury officials said were complementary to DNP's activities. (See figure 3.) In 2013, the Council of the Inspectors General on Integrity and Efficiency (CIGIE) explored the viability of assuming some ROC assets to continue providing analytic capabilities to the OIG community. CIGIE estimated that it would cost $10.2 million per year to continue to run the ROC. Because CIGIE is primarily funded by membership dues, it determined that the additional cost to operate the ROC would be too burdensome for the organization. A CIGIE official indicated that CIGIE has continued to look for opportunities to provide centralized data analytic resources to OIGs. However, this official said that, given its financial resources, any resources CIGIE might provide would be at a significantly scaled-back level compared to the ROC. Some large OIGs that previously used the ROC intend to develop their own analytic capabilities. However, according to some OIG officials, the ROC's closure may affect the audit and investigative capabilities of some small and medium-sized OIGs that do not have the resources to develop independent data analytics or pay fees for a similar service. According to some OIG officials, the loss of the ROC's analytical capabilities could also result in auditors and investigators working more staff hours to research the same types of linkages rather than verifying the information that the ROC could provide in a shorter time. 
Treasury officials stated that the Fiscal Service operations assist federal agencies—including OIG and other law enforcement agencies—in identifying, preventing, and recovering improper payments under existing authorities. However, as noted earlier, our work on the potential impact of the ROC's sunset on the oversight community is ongoing, and we have not independently compared the services of Fiscal Service operations to the ROC. We plan to issue a report on the ROC later this year. The DATA Act requires OMB to establish a 2-year pilot program to develop recommendations for standardizing financial data elements, eliminating unnecessary duplication, and reducing compliance costs for recipients of federal awards. Toward this end, OMB has partnered with the Department of Health and Human Services (HHS), the General Services Administration (GSA), and the Chief Acquisition Officers Council (CAOC). According to OMB staff, HHS is assisting OMB with grants-specific activities while GSA and the CAOC are doing so for contract-specific activities. Our work to date has centered on the grants-related part of the pilot. The pilot was launched this May with three activities: (1) a national dialogue on reducing the reporting burden faced by recipients of federal funds; (2) an online repository of common data elements; and (3) a new section on Grants.gov with information about the grants lifecycle. Conducting a national dialogue on reducing recipient reporting burden. A national dialogue is being conducted for federal contractors and grantees with a focus on sharing ideas for easing reporting burden, eliminating duplication, and standardizing processes. 
According to OMB and HHS officials, this online dialogue will be open on a public website through May 2017, and comments will be actively reviewed, incorporated, and addressed as appropriate. HHS, GSA, and CAOC have posed a number of questions to federal award recipients in this dialogue, including the following: If you could change one thing that would ease your reporting burden associated with your grants or sub-grants, what would it be (e.g., time, cost, resource burden)? If you have reporting requirements to the federal government, how are those met? If you could create a central reporting portal into which you could submit all required reports, what capabilities/functions would you include? Online repository of common grants-related data elements. The HHS DATA Act Program Management Office manages an online repository of agreed-upon standardized data elements, called the Common Data Element Repository (C-DER) Library, to be an authorized source for data elements and definitions used by the federal government in agency interactions with the public. The C-DER is designed to include data standards that have been approved through the implementation of the DATA Act. Specifically, as of July 16, 2015, the C-DER is populated with 112 data elements from a variety of sources. The 15 data elements finalized by OMB and Treasury under the DATA Act on May 8, 2015, are included in the C-DER; however, the remaining 12 that have been finalized since then are not yet included. A number of the terms included in the C-DER go beyond the data elements that are required to be standardized under the DATA Act, such as definitions for audit finding, auditee, auditor, and hospital. According to HHS officials, the C-DER was developed through an analysis of 1,000 data elements from 17 different sources. 
HHS officials stated that key findings that led to the creation of the C-DER were (1) lessons learned from the development of Uniform Grants Guidance that different communities, such as grants, acquisitions, and procurement, use terms and concepts differently; (2) that it is difficult for the public to access common definitions across these different communities; and (3) that data standards in and of themselves are not helpful unless they are used. The purpose of the C-DER is to reconcile these three findings and accommodate different data standards as they are developed under the act. Providing grants-related resources. The third component of the pilot is the launch of a portal that provides the public with grants resources and information on the grants lifecycle, known as the Grants Information Gateway (GIG). Available on Grants.gov, the GIG is intended to serve as a clearinghouse for information on the federal grants management process and lifecycle. Further, HHS officials stated that they intend to leverage Grants.gov and the GIG to improve the transparency of federal spending by educating the public and potential applicants for federal grants about federal grant-making. As part of our ongoing work on this pilot, we are reviewing past experiences and good practices on designing, implementing, and evaluating pilots; assessing whether the pilot’s design is likely to meet DATA Act requirements and objectives; and evaluating whether the pilot is managed in a way that will likely result in useful recommendations. We will report our findings to Congress next spring. We provided a draft of this statement to Treasury, Health and Human Services, Office of Management and Budget, the Chair of the Council of the Inspectors General on Integrity and Efficiency, and the Chair of the Recovery Accountability and Transparency Board. OMB staff and Treasury officials did not have comments on the recommendations. 
OMB staff, Treasury officials, HHS, the Recovery Board, and the CIGIE provided technical comments on the draft, which we incorporated as appropriate. In conclusion, given the complexity and government-wide scale of the activities required by the DATA Act, full and effective implementation will not occur without sustained commitment by the executive branch and continued oversight by Congress. We welcome the responsibility that the Congress has placed on us to assist in the oversight of the DATA Act. Toward that end, we look forward to continuing to monitor and assess the efforts of OMB, Treasury, and other federal agencies while standing ready to assist this and other committees in carrying out Congress’s key oversight role in the months and years to come. Chairman Hurd, Ranking Member Kelly, Chairman Meadows, Ranking Member Connolly, and Members of the Subcommittees, this concludes my prepared statement. I would be pleased to respond to any questions you have. Questions about this testimony can be directed to J. Christopher Mihm, Managing Director, Strategic Issues at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this statement are Shari Brewster; Mark Canter; Jenny Chanley; Giny Cheong; Lon Chin; Peter Del Toro (Assistant Director); Kathleen Drennan (Analyst-in-Charge); Gary Engel; Robert Gebhart; Meafelia Gusukuma; Shirley Hwang (Analyst-in-Charge); Joah Iannotta; Charles Jones; Lauren Kirkpatrick; Michael LaForge; Jason Lyuke; Donna Miller; Laura Pacheco; Carl Ramirez; Paula Rascona; Brynn Rovito; Kiran Sreepada; James Sweetman, Jr.; Andrew Stephens; Carroll Warfield, Jr.; and David Watsula. Additional members of GAO’s DATA Act Working Group also contributed to the development of this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The DATA Act directs OMB and Treasury to establish government-wide data standards by May 2015. The act also requires agencies to begin reporting financial spending data using these standards by May 2017 and to post spending data in machine-readable formats by May 2018. This statement is part of a series of products that GAO will provide the Congress as the DATA Act is implemented. This statement discusses four DATA Act implementation areas to date: (1) establishment of government-wide data standards; (2) OMB and Treasury's effort to establish a governance structure and obtain stakeholder input; (3) the status of the potential transfer of the ROC's assets to Treasury; and (4) the pilot program to reduce reporting burden. GAO reviewed the first 15 data elements finalized under the act; analyzed key documents, technical specifications, and applicable guidance; interviewed OMB, Treasury, HHS, and other staff as well as officials from organizations representing non-federal stakeholders; and reviewed literature. Since the Digital Accountability and Transparency Act (DATA Act) became law in May 2014, the Office of Management and Budget (OMB) and the Department of the Treasury (Treasury) have taken significant steps towards implementing key provisions. These steps include the release of 27 data standards, draft technical documentation, and implementation guidance to help federal agencies meet their responsibilities under the act. However, given the complexity and government-wide scale of activities required by the DATA Act, much more remains to be done. Data standards. OMB and Treasury have proposed standardizing 57 data elements for reporting under the act. They released 15 elements on May 8, 2015, a year after the passage of the act, and have since released 12 more. Eight of the first 15 were new elements required under the DATA Act; the balance were required under the Federal Funding Accountability and Transparency Act of 2006. 
GAO identified several issues that may affect the quality of federal spending data and the ability to aggregate it. For example, GAO found: (1) the data standards may not provide a complete picture of spending by program unless OMB accelerates its efforts to produce an inventory of federal programs as required under the GPRA Modernization Act of 2010 (GPRAMA); (2) the data standards and elements may not yet represent all that are necessary to fully capture and reliably report on federal spending; and (3) the draft technical specifications GAO reviewed may result in the reporting of inconsistent information. GAO shared its observations with officials who are considering revisions and updating their technical documentation. Governance and stakeholder engagement. OMB and Treasury have made progress in initial implementation activities by developing structures for project management and data governance as well as for obtaining stakeholder input. However, GAO found that additional effort to address the whole lifecycle of standards development will be needed to ensure that the integrity of data standards is maintained over time. Establishing these policies and procedures now could provide an opportunity for OMB and Treasury to build on existing efforts to reach out to stakeholders by taking steps to foster effective two-way communication to help ensure that the concerns of interested parties are responded to and addressed as appropriate on an ongoing and timely basis. Recovery Operations Center (ROC). GAO's review of the potential transfer of the ROC's assets found that Treasury does not plan to assume these assets because of a number of impediments. Instead, Treasury has focused on facilitating information sharing between the ROC and Treasury's Do Not Pay initiative, which assists agencies in preventing improper payments. GAO has ongoing work on this issue and plans to issue a report later this year. Reporting burden pilot. 
The DATA Act requires OMB to establish a 2-year pilot program to develop recommendations for reducing reporting burden for recipients of federal awards. The pilot was launched this May with the initiation of a national dialogue on reducing reporting burden, building of an online repository of common grants-related data elements, and addition of grants-related resources on Grants.gov. GAO also has ongoing work focusing on this pilot. GAO recommends that OMB accelerate efforts to merge DATA Act purposes with the production of a federal program inventory under GPRAMA, and that OMB and Treasury (1) establish policies and processes for a governance structure to maintain the integrity of data standards over time and (2) enhance policies and procedures to provide for ongoing and effective two-way dialogue with stakeholders. OMB staff, Treasury officials, and others provided technical comments which GAO incorporated as appropriate.
IHS facilities and their associated CHS programs are located in 12 geographic areas, each overseen by an IHS area office led by an Area Director. Ten of the 12 areas include at least some IHS-operated facilities; these 10 areas oversee local CHS programs in 33 states. IHS headquarters sets CHS program policies and oversees the areas. Each IHS area contains multiple local CHS programs. The areas distribute funds to the local CHS programs in their areas, monitor the programs, and establish procedures and provide guidance and technical assistance to the programs. The CHS program is funded through annual appropriations and must operate within the limits of available appropriated funds. Based on the regulations that IHS has established for the CHS program, a number of requirements must be met in order for a service to be eligible for CHS payment. Based on the requirements, before approving a service for payment, local CHS programs must consider the following: Is the patient a member or descendent of a federally recognized tribe or someone with close ties to the tribe? To be eligible for CHS payment, the service must be for a patient who is a member or descendent of a federally recognized tribe or someone who maintains close economic and social ties with the tribe. Does the patient reside within the Tribal Contract Health Service Delivery Area (CHSDA)? For a service to be paid for with CHS funds, it must be for a patient who resides in the Tribal CHSDA. Unless otherwise established, the CHSDA encompasses the reservation, the counties that border the reservation, and other specified lands. Exceptions exist for students who are temporarily absent from their CHSDA during full-time study and individuals who are temporarily absent from the CHSDA for less than 180 days due to travel or employment. Are alternate health care resources available to the patient? 
Many users of IHS services are also eligible for other sources of payment for care, including Medicaid, Medicare, and private insurance. The CHS program is typically the payer of last resort. Therefore, before a service is approved for CHS payment, the patient must apply for and use all alternate resources that are available and accessible. Services from an IHS facility are also considered a resource, so CHS funds cannot be used for services reasonably accessible and available at IHS facilities. Did the CHS program receive timely notification of services provided from a non-IHS facility? In non-emergency cases, the local CHS program should be notified and the service approved for payment prior to the patient receiving care. In cases where the patient was not referred for care by an IHS provider, such as with emergency room services, the CHS program must be notified within 72 hours of when the service was delivered. Notification may be made by the individual, provider, hospital, or someone on behalf of the individual in order for the service to be eligible for CHS payment. The notification time is extended to 30 days for the elderly and disabled. Are the services considered medically necessary and listed as one of the established area medical or dental priorities? To be eligible for payment under the CHS program, the service must be considered medically necessary and listed as one of the established IHS area’s medical or dental priorities. A program committee that is part of the local CHS program evaluates the medical necessity of the service, for example, at a weekly meeting. IHS has established four broad medical priority levels of health care services eligible for payment and a fifth for excluded services that cannot be paid for with CHS program funds. Each area is required to establish priorities that are consistent with IHS’s medical priority levels and are adapted to the specific needs of the CHS programs in its area. 
CHS programs that are affiliated with IHS-operated facilities must assign a priority level to services based on the priority system established by their area offices. Funds permitting, these CHS programs first pay for the highest priority services and then for all or some of the lower priority services they fund. Our prior work has found that available CHS program funds have not been sufficient to pay for all eligible services. At some IHS facilities, the amount of CHS funding available was only sufficient to cover cases with the highest medical priority—Priority 1—emergent or acutely urgent care services that are necessary to prevent immediate death or serious impairment of health. (See table 1 for a description of the medical priority levels and related services.) After considering these questions, local CHS programs review each case based on the availability of funding and may defer or deny requests to pay for services when program funds are not available. If the CHS program determines that a service can be funded, it issues a purchase order for the service. In general, three entities are involved in the CHS payment process: (1) the local CHS program, (2) the provider, and (3) IHS's fiscal intermediary (FI). The timing of the CHS program's and the provider's involvement depends on whether the service was prompted by a referral from an IHS provider prior to the date of service—called IHS referrals—or prompted by the patient seeking care without first obtaining a referral from an IHS provider—these are typically emergency services and are called self-referrals. IHS referrals are cases in which an IHS-funded provider refers a patient for care to an external provider. The local CHS program receives the referral and evaluates it against the eligibility requirements. 
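The eligibility screening described above can be sketched as a simple rule check. The field names and the simplified notification logic below are illustrative assumptions for this sketch, not IHS's actual data model or regulations:

```python
from datetime import date

def screen_referral(referral):
    """Return a list of eligibility requirements the referral fails.

    Checks mirror the questions described in the text: tribal membership,
    CHSDA residence, alternate resources, timely notification for
    self-referrals, and the area's medical priorities.
    """
    failures = []
    if not referral["tribal_member_or_descendant"]:
        failures.append("not a tribal member, descendant, or close affiliate")
    if not referral["resides_in_chsda"]:
        failures.append("patient does not reside in the CHSDA")
    if referral["alternate_resources_available"]:
        failures.append("available alternate resources (e.g., Medicaid) must be used first")
    # Self-referrals generally require notification within 72 hours
    # (extended to 30 days for elderly or disabled patients).
    if referral["self_referral"]:
        limit_days = 30 if referral["elderly_or_disabled"] else 3
        elapsed = (referral["notification_date"] - referral["service_date"]).days
        if elapsed > limit_days:
            failures.append("notification received after the deadline")
    if not referral["within_medical_priorities"]:
        failures.append("service not within the area's medical priorities")
    return failures

example = {
    "tribal_member_or_descendant": True,
    "resides_in_chsda": True,
    "alternate_resources_available": False,
    "self_referral": True,
    "elderly_or_disabled": False,
    "service_date": date(2011, 3, 1),
    "notification_date": date(2011, 3, 2),
    "within_medical_priorities": True,
}
print(screen_referral(example))  # no failures: eligible, subject to available funds
```

Even when a referral passes every check, payment still depends on available funding, which is why the actual process can end in a deferral rather than an approval.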
Once the CHS program receives the needed information to make its determination, it will: approve the service for payment and issue a purchase order to obligate the funds and send copies of the purchase order to the provider and to the FI; defer funding if the service meets all the eligibility criteria but funds are not available; or deny the service. If the service is approved, the local CHS program typically works with an external provider to set up an appointment for the patient to receive the service and issues the purchase order to the provider—either before the service is provided or shortly after the service is provided. After performing the service, the external provider submits the purchase order along with the claim for payment to the FI. Once the FI receives the claim and purchase order from the external provider, it verifies the purchase order and patient data, evaluates whether alternate resources are available, and, if appropriate, makes the required payment. If there are any issues with the claim, such as missing information from the CHS program or provider, the FI will put the claim in a hold status until the issues are resolved. Self-referrals are typically emergency situations where the patient receives services from external providers without first obtaining a referral from an IHS-funded provider. After the services are delivered, the provider seeks approval from the CHS program for payment for the services. With self-referrals, the steps taken by the CHS program to evaluate the referral against the program's eligibility requirements to determine whether the service is eligible for CHS payment do not begin until after the service is provided. In these cases, the local CHS program may have to communicate with the external provider, for example, requesting information about the services provided. 
Similar to IHS referrals, once the CHS program receives the needed information to make an eligibility determination, it will approve the service for payment and issue a purchase order to obligate the funds; defer funding the service; or deny the service. For approved self-referral services, once the FI receives the claim and purchase order from the external provider, it follows the same procedures for processing the payment as for IHS referrals. (See fig. 1 for an overview of the approval and payment processes for IHS referrals and self-referrals.) For services that are ultimately paid for under the CHS program, whether they are IHS or self-referrals, the CHS payment process consists of three main steps that encompass the time from the date a service is delivered to the date the provider is paid. 1. Local CHS program issues the purchase order. The time for this step can be measured by the length of time between when a service is provided and when the local CHS program issues the purchase order. (Sometimes the purchase order is issued before the service is provided, such as for some IHS referrals; in these cases this step has no effect on the time it takes to pay the provider.) 2. External provider submits a claim to IHS’s FI. The time for this step can be measured by the length of time between when the CHS program issues the purchase order and when the FI receives the claim. 3. The FI pays the claim. The time for this step can be measured by the length of time between when the FI receives a claim from an external provider and when the payment is made. PPACA made significant changes to the Medicaid program and included new health care coverage options that may benefit American Indians and Alaska Natives. 
In 2014, Medicaid eligibility will expand in states opting to participate, such that all individuals with incomes at or below 138 percent of the federal poverty level will be eligible for the program, including previously ineligible categories of individuals, such as childless adults. Also in 2014, health insurance exchanges will be available—health insurance marketplaces in which individuals and small businesses can compare, select, and purchase health coverage from participating carriers. For individuals obtaining insurance through the exchanges, PPACA provides premium tax credits for those meeting certain income requirements and cost-sharing exemptions for qualifying American Indians and Alaska Natives. Finally, in 2015, states may implement the new Basic Health Program option, under which the federal government will give states 95 percent of the premium tax credits and cost-sharing subsidies that would have been provided if the individuals had enrolled in the exchanges, to allow states to provide coverage for individuals with incomes between 138 and 200 percent of the federal poverty level. In previous work, we found that, after these changes are implemented, many American Indians and Alaska Natives may gain new health care coverage options. For example, we estimated that more than half of American Indians and Alaska Natives may be eligible either for cost-sharing exemptions and premium tax credits for insurance obtained through the exchanges, or for health care coverage through the new Basic Health Program or Medicaid, including those who are currently eligible for Medicaid but not enrolled, and those who will be newly eligible under 2014 eligibility rules. A significant proportion of American Indians and Alaska Natives reflected in this potential new enrollment live in IHS service areas. For CHS services delivered in fiscal year 2011, a majority of providers' claims were paid within 6 months of the service delivery date, but some took much longer. 
More than one-third (38 percent) of claims processed by IHS-operated CHS programs were paid within 3 months after services were delivered. Another 35 percent were paid between 3 and 6 months of service delivery. The percentage of claims paid more than 6 months after service delivery was much smaller, with 19 percent of claims being paid between 6 months and 1 year after services were delivered, and about 8 percent paid more than 1 year after services were delivered. (See fig. 2.) The amount of time it took to pay providers was not the same across all IHS areas. The areas varied in the amount of time between the date a service was provided and the date the claim was paid, particularly with respect to the percentage of claims that were paid within 3 months and within 6 months of service delivery. For example, although more than one-third of claims IHS-wide were paid within 3 months or less of service delivery, across IHS areas this percentage ranged from 18 percent in Albuquerque to 48 percent in Billings. Similarly, the percentage of claims paid within 6 months or less of service delivery ranged from 59 percent to 80 percent, also in Albuquerque and Billings, respectively. There was less variation among IHS areas in the percentage of claims paid within 1 year or less of service delivery, ranging from 87 percent in Albuquerque to 94 percent in Nashville. For 8 of the 10 IHS areas we reviewed, 90 percent or more of their claims were paid within 1 year of service delivery. (See fig. 3.) Among the three main steps in the payment process, the step that most often took the longest was the first step—the time from date of service to the issuance of the purchase order. For services delivered in fiscal year 2011, purchase orders were issued in 1 month or less after services were delivered for 41 percent of claims. 
For another 40 percent of claims, purchase orders were issued more than 2 months after services were delivered; and in about half of these cases, the purchase order was issued more than 4 months after the service was delivered. In comparison, the second step—the time from the date the purchase order was issued to the date the FI received the claim from the provider—took 1 month or less for 61 percent of the services, and the third step—the time from the date the FI received the claim to the date payment was made—took 1 month or less for 83 percent of the claims. (See fig. 4.) IHS uses three measures to assess how long it takes to approve and then process payments to CHS providers. Two of these measures concern the first step in the payment process—the time it takes local CHS programs to approve payments to providers and issue purchase orders to them—but neither of these measures provides a clear or complete picture of the timeliness of these activities, which constitute the most time-consuming period within the provider payment process, according to our analyses. IHS also has a timeliness measure for the final step in the provider payment process—the time it takes IHS's FI to make payment to providers once it receives claims from them. Descriptions of the three timeliness measures follow. Government Performance and Results Act (GPRA) measure. The first of two measures that IHS uses to assess the timeliness of the first step in the provider payment process is the average time it takes for IHS to issue a purchase order after a service has been provided. IHS established this measure in fiscal year 2009 in response to GPRA, and has set annual targets for the measure since then. GPRA requires federal agencies to develop performance plans with annual goals and measures. (Hereafter, we refer to this as the GPRA measure.) For fiscal years 2009 and 2010, IHS set the target for the GPRA measure at 82 days and 78 days, respectively. 
IHS missed the target by 28 days in 2009 and by 4 days in 2010. IHS kept the target at 78 days in fiscal year 2011, then lowered it to 74 days in 2012, and met these targets in both years. For fiscal year 2013, IHS kept the target at 74 days, and an IHS official said it would remain there for fiscal year 2014. According to IHS officials, the basis for the GPRA measure's target was a health care industry consultant's report showing average times that other health insurers, including private insurers and Medicaid, took to pay claims. Although clear targets have been established for the GPRA measure, the way the measure is calculated does not result in a clear picture of whether the goal of the measure is being achieved. In our previous work, we found that successful performance metrics should demonstrate the degree to which desired results are achieved. The goal of the GPRA measure is to decrease the average number of days from the provision of services to purchase order issuance. However, the GPRA measure does not provide a clear picture of the timeliness of purchase order issuance because it combines self-referrals with some IHS referrals when calculating the average time it takes for IHS to issue a purchase order, even though the timing of when purchase orders are issued relative to service delivery can be very different for the two referral types. The GPRA measure calculates the average time it takes to issue purchase orders for services for which the purchase order was issued after the service was provided. This includes all self-referrals, where none of the work to determine whether the service is eligible for CHS payment can be started before the service is delivered because IHS does not know about it until after the service has been delivered. However, it also includes some IHS referrals—referrals for which all of the work to determine whether the service is eligible for CHS payment generally is completed before the service is delivered. 
Some local CHS programs said they wait to issue purchase orders for IHS referrals until shortly after they confirm that the services were actually delivered. Officials from one local CHS program told us that for these referrals, it may take only one day from the date of service to issue the purchase order. Including these IHS referrals in the calculation of the GPRA measure gives an unclear picture of performance because their short turnaround times lower the overall GPRA average. IHS officials agreed that the calculation of the GPRA measure mixes IHS referrals with self-referrals, and CHS officials at one area office said the measure would be more useful if IHS referrals and self-referrals were analyzed separately. However, IHS officials told us that because the agency’s claims data system does not include a data field that tracks referral type, the agency does not have a way to separate the two types of referrals and thus cannot systematically determine the average time it takes to issue purchase orders by referral type. IHS officials expressed varying opinions about the utility and quality of the GPRA measure. Some officials noted that before the measure was established, there was no timeliness performance measure for the CHS program. These officials told us that the measure has helped to identify local CHS programs that have implemented practices that help improve timeliness of payments. But officials also criticized the GPRA measure, noting that many of the factors that help to determine whether an area or local CHS program meets the target are not within the area’s or the program’s control, such as how quickly a program receives information from providers. The time it takes to make a decision about a claim. The other measure that IHS uses to assess the timeliness of the first step in the provider payment process is how long it takes from the time IHS is notified of a claim to when the agency makes a decision about it.
Under a statutory provision, IHS must approve or deny the claim within 5 days of receiving “notification” of a provider’s claim for a service or accept the claim as valid. (Hereafter, we refer to this provision as the 5-day rule.) According to IHS officials, the agency has interpreted the rule’s clock as beginning once a claim is “clean,” or “completed,” meaning that all information necessary to determine whether the claim should be approved, deferred, or denied has been obtained. IHS officials told us, however, that it is in obtaining this information—including medical records to determine medical priority and the availability of alternate resources—that delays most typically occur. Thus, the 5-day rule’s clock does not begin until after completion of the part of the process in which IHS officials believe delays most typically occur. Although IHS officials said the agency’s interpretation of the 5-day rule currently is not included in any official written guidance, they said the agency plans to include an explanation of its interpretation of the rule in revisions to the CHS chapter of the Indian Health Manual, which officials said is expected to be completed by early 2014. The time it takes the FI to pay the claim. The third measure that IHS uses to assess the timeliness of the payment process focuses on the last step in the process—the length of time that the FI should take to process payments to providers once it receives claims from them. IHS’s contract with its FI specifies that at least 97 percent of clean claims are to be paid within 30 days of receiving the claim from the provider. Similar to the 5-day rule, the FI defines clean claims as those containing all required information, including the purchase order; passing all IHS and FI agreed-upon internal checks; and not requiring additional investigation by the FI. The FI issues monthly reports to IHS documenting its compliance with this provision.
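The check behind the FI’s monthly compliance reports amounts to a simple share computation: of the clean claims received in a month, what fraction was paid within 30 days, and is that fraction at least 97 percent? A minimal sketch follows; the dates and data layout are hypothetical, not the FI’s actual system.

```python
from datetime import date

# Hypothetical (date claim received, date paid) pairs for clean claims in one month.
clean_claims = [
    (date(2013, 1, 2),  date(2013, 1, 20)),
    (date(2013, 1, 5),  date(2013, 1, 28)),
    (date(2013, 1, 9),  date(2013, 3, 1)),   # paid after more than 30 days
    (date(2013, 1, 10), date(2013, 2, 4)),
]

paid_on_time = sum((paid - received).days <= 30 for received, paid in clean_claims)
share_on_time = paid_on_time / len(clean_claims)
meets_contract = share_on_time >= 0.97  # contract requires at least 97 percent

print(f"{share_on_time:.0%} of clean claims paid within 30 days; "
      f"contract standard met: {meets_contract}")
```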
According to these reports, this target was met every month from January 2012 through July 2013. The complex processes for determining whether a service is eligible for CHS funding can affect the timeliness of provider payments and result in delays. Even after a local CHS program determines that a service is eligible for CHS funding, complexities in the payment process managed by IHS’s FI can result in delays. In addition, CHS officials reported that staffing shortages and limited funding contribute to delays in processing payments to providers. Local CHS programs also reported varying practices for assessing eligibility and approving CHS funding, which may contribute to variations in timeliness for provider payments. IHS’s process for determining whether services are eligible for CHS program funds is complex and different from processes used by other payers, which can affect the timeliness of provider payments. Unlike other payers that offer a defined set of benefits—including Medicare, Medicaid, and private insurers—the CHS program makes decisions about what care will be funded on a case-by-case basis, so that each time a referral for care is received by a local CHS program, it is evaluated against a number of eligibility requirements as well as against available funding. Evaluating a service against each of the service eligibility requirements involves multiple steps, some of which depend on the CHS program receiving information from providers, patients, and others, and delays can occur during the evaluation of some of these eligibility requirements, according to CHS officials. In some cases, making these eligibility determinations can be fairly involved, and can ultimately affect the amount of time it takes for a provider to receive payment after the service is delivered. 
The effects of this process on payment times are greater for self-referrals—situations in which the service was provided before it was approved for payment—because the entire process for determining whether the service was eligible for CHS payment does not begin until after the service was delivered. According to IHS officials, the agency developed the CHS program eligibility regulations in order to carefully manage and stretch limited CHS funding to provide the most critical services to the most patients. Two aspects of the process for determining eligibility for CHS program funding were frequently reported as resulting in payment delays: (1) determining whether a service meets the area’s medical priorities and (2) identifying all available alternate resources. Officials from 8 of 12 local CHS programs we interviewed reported payment delays related to determining whether a service met the area’s medical priorities, and officials from 8 of 12 local CHS programs reported payment delays related to identifying all available alternate resources. Determining medical priority can result in delays because local CHS programs must obtain from the provider medical records with sufficient detail to assess whether a service is medically necessary and falls within the established medical priorities. Officials reported that, while in some cases the necessary records have been provided relatively quickly (e.g., within a week), in other cases it has taken much longer. For example, some local CHS programs reported situations when it has taken weeks or months to obtain necessary medical documentation, and one program reported situations when it has taken as long as 1 to 2 years to receive this documentation. Program officials noted different reasons for these delays. For example, officials reported situations where providers have sent documentation to the wrong CHS program because the providers did not know in which CHSDA the patients resided.
Another reason cited by local CHS program officials for delays in receipt of medical documentation was incomplete documentation, which required the program to follow up with the provider. Officials reported a number of situations in which determining whether the patient has alternate resources to pay for the service has resulted in delays. For example, some local CHS program officials told us that when they believe a patient is eligible for alternate resources—such as Medicaid—they have the patient apply for those resources, and will hold off on approving a service for CHS funding until the determination is made on the application. Officials from one local CHS program said that Medicaid determinations in their state can sometimes take months. In another example, officials from one local CHS program stated that for situations involving car accidents or a fall on private property, determining liability can take a long time, and the availability of alternate resources cannot be determined until liability has been decided. In another example, officials from one CHS program said delays can occur when patients do not inform the CHS program of the alternate resources available to them, necessitating that the CHS program do the research itself. Program officials also reported some delays related to the other three aspects of the process for determining eligibility for CHS payment. For example, officials from one CHS program said determining whether a patient is a member of a federally recognized tribe can result in a delay if they have never seen the patient before and must obtain documentation of that patient’s tribal affiliation. Similarly, officials from one CHS program said determining whether a patient resides in its CHSDA can result in a delay when the program needs to wait on documentation from patients confirming their addresses.
Finally, one local CHS program reported that determining whether the program has been notified within required timeframes can result in a delay when an incorrect decision is made to deny the service, which is later overturned. The complexity of determining whether services delivered to American Indians and Alaska Natives are eligible for CHS funding can also result in misunderstandings in which providers think payments have been delayed, when in fact the services provided were not eligible for payment. For example, IHS officials told us that sometimes patients do not understand CHS rules and seek emergency care from external providers, expecting the CHS program to cover it, when they are in fact not eligible for CHS. The officials also said that providers will send claims to the CHS program, assuming the patient and service are eligible, and expect to be paid. IHS does not issue eligibility cards to beneficiaries that would indicate to external providers their eligibility for CHS services or information about which local CHS program is responsible for payment. In a previous GAO review of the CHS program, several providers noted that, in the absence of a process they can use prior to providing service to determine patient eligibility for the CHS program, they submit claims for payment to the CHS program for all patients who self-identify as American Indian or Alaska Native or as eligible for the CHS program. IHS officials said that they believe situations such as these—in which the provider will never be paid because the patient or service was not eligible, as opposed to situations in which the service is eligible but the payment process is prolonged—accounted for the majority of provider complaints about the timeliness of CHS payments. 
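The case-by-case screen described in the preceding paragraphs can be sketched as a sequence of checks, any one of which can stall while documentation is gathered. The requirement names below follow the report; the function and data structure are hypothetical, not IHS’s actual system.

```python
# Hypothetical sketch of the CHS eligibility screen. Each requirement can
# block on outside documentation (medical records, Medicaid determinations,
# tribal-affiliation or address documents), which is where delays arise.
REQUIREMENTS = [
    "member of a federally recognized tribe",
    "resides in the local program's CHSDA",
    "program notified within required timeframe",
    "service falls within the area's medical priorities",
    "all available alternate resources identified",
]

def screen_referral(documented):
    """Return (eligible, list of requirements still unresolved)."""
    unresolved = [req for req in REQUIREMENTS if not documented.get(req, False)]
    return (len(unresolved) == 0, unresolved)

# Example: a self-referral still waiting on medical records for the
# medical-priority determination; funding cannot be approved until it clears.
referral = {req: True for req in REQUIREMENTS}
referral["service falls within the area's medical priorities"] = False
eligible, pending = screen_referral(referral)
print(eligible, pending)
```

The point of the sketch is structural: unlike a payer with a defined benefit package, every referral passes through all of these gates, and the clock on provider payment keeps running while any one of them waits on documentation.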
Local CHS program officials noted that providers’ lack of understanding of the complex CHS approval process was due in part to provider staff turnover or was exacerbated when the provider’s billing functions were located out of state, which could result in delays in providing information needed to determine eligibility. These officials noted that education of provider staff was an ongoing necessity for CHS programs. Some IHS officials also noted that providing such training took staff time away from processing referrals. Officials from a number of CHS programs noted that they meet regularly with some of their high-volume providers to reconcile specific outstanding cases, and that over time these meetings have helped improve providers’ understanding of the unique rules and procedures of the CHS program. However, officials also mentioned that turnover among provider staff often necessitated starting the process of educating providers again. Even after a service has been approved for CHS funding and a purchase order has been issued, delays can occur because of complexities in the last step in the payment process, which is managed by IHS’s FI. Officials said this can occur because the providers do not understand the CHS process used by the FI. For example, officials said some providers do not understand that after receiving a purchase order, they also need to submit the claim to the FI to be paid. Officials from local CHS programs and from the FI also reported examples where delays occurred because claims submitted by providers to the FI could not be matched to a corresponding purchase order in the FI’s system. According to FI officials, it will issue only one payment for each purchase order. However, some purchase orders are intended to cover multiple services, such as for a series of physical therapy treatments. FI officials reported that providers sometimes submit claims for services that pertain to only a portion of the services authorized on the purchase order. 
In these cases, the FI pays for those services and closes out the purchase order. When providers submit subsequent claims related to other services that were authorized on the original purchase order, the FI is unable to pay the provider because the purchase order was closed. The provider then must go back to the CHS program to request a new purchase order and payment to the provider is delayed until the new purchase order is issued and submitted to the FI. In addition to delays in payments to providers from issues matching claims and purchase orders, claims may be put on hold by the FI for other payment processing issues. One of the most common causes for claims being put on hold by the FI is when alternate resources have been confirmed, but the FI is waiting for information from the provider showing the amount paid by the other resources and the remaining amount that the provider is claiming from IHS. Local CHS program officials said insufficient CHS program staffing levels have affected their ability to issue timely purchase orders. IHS’s staffing standards model established a staffing ratio based on the annual number of purchase orders authorized for health care services by a facility. Some CHS program officials noted that their number of staff was below these standards. Further, local CHS program officials in programs that had a very small number of CHS staff (e.g., two or three) said that a vacancy or extended leave for even one staff person could affect the timeliness of issuing purchase orders—and one of these programs reported that related delays could be significant. Furthermore, IHS officials noted that, pursuant to agency practice, CHS funding has been used only to pay for services and not to increase staffing levels. As a result, recent increases in CHS funds have resulted in increased workloads, but staffing levels to manage the workloads have not increased. 
Staffing levels can affect the timeliness of payment for services, particularly for self-referrals, where the entire process for determining eligibility for CHS payment does not begin until after the service is provided. Officials from a few CHS programs also noted that funding issues could result in delays in issuing a purchase order authorizing CHS funding, which would delay payments. In our prior work, some providers told us that delays in receiving payment from CHS of several months, or in some cases years, tended to occur when the CHS program’s funding for the fiscal year had been depleted. As noted, funding shortages affect the amount of time it takes to pay providers for self-referrals more than IHS referrals because the self-referral service is already provided before the program determines if funds are available, while for an IHS referral, the service can be postponed until funds become available. We found variation in local CHS program practices for implementing CHS eligibility rules, which may contribute to the variation in timeliness of the provider payment process across IHS areas. IHS officials said they allow flexibility in local CHS program practices because each has a different set of circumstances to consider. These circumstances include challenges regarding CHS funding levels among areas, state Medicaid program procedures for verifying eligibility, providers’ familiarity with the CHS program, and the number of staff available to determine CHS eligibility for services. During our interviews, IHS area and local CHS program officials reported differences in practices that could contribute to variation in the overall amount of time it takes to pay providers. Examples of these differences include: Consideration of alternate resources. Local CHS programs varied in the actions they took while they were determining the extent of patients’ alternate resources.
One practice IHS area office officials said some local CHS program staff used, in certain circumstances, was to issue purchase orders to providers before patients’ alternate resources were confirmed. In these cases, if the FI paid the claim before alternate resources were confirmed, the FI would seek to recover from the provider any overpayments for services covered by these alternate resources. In contrast, some IHS area and local CHS program officials told us they do not issue purchase orders authorizing CHS payment for services until the availability of all possible alternate resources has been determined. Officials in one IHS area noted that they do this to preserve their limited CHS funds and provide access to care to as many patients as possible. Officials in this area reported that they were not able to fund all Priority 1 cases and that issuing purchase orders and obligating CHS funds before alternate resources were confirmed could cause them to exhaust their CHS funds even earlier. Requests for information from providers. Officials from some local CHS programs reported that they set time limits for providers to submit medical documents and deny CHS funding if providers do not submit the documents within that time. These limits ranged from a week to 45 days, and some of these programs automatically issued a denial if the medical documentation was not provided either at the same time the program was notified that services had been provided or by the specified time limit. CHS officials said these denials could be reconsidered if sufficient medical documentation were subsequently provided. In contrast, officials from another CHS program reported that it does not have established time limits within which providers must submit medical documentation. New health care coverage options available to many IHS beneficiaries as a result of provisions in PPACA could provide IHS with an opportunity to simplify the complex eligibility rules of the CHS program. 
IHS has stated that its overall service goal is to elevate the health status of American Indians and Alaska Natives to the highest possible level. However, as we and others have reported, limits on available resources have affected the services available to American Indians and Alaska Natives through the CHS program. For example, although funding for the CHS program significantly increased in recent years, IHS has reported that at current funding levels, most programs are approving only medically emergent referrals (Priority 1), while less urgent, routine, or preventive care is deferred or denied pending additional appropriations. According to IHS, limits on available funding for the CHS program have caused the agency to establish its complex requirements for determining eligibility for CHS funds—including reliance on a medical priority rating system and limiting eligibility to individuals who reside in CHSDAs. These mechanisms are intended to enhance IHS’s ability to stretch limited CHS dollars and extend services to more American Indians and Alaska Natives. As we previously reported, however, many American Indians and Alaska Natives may gain new health care coverage beginning in 2014 as a result of PPACA, which could alleviate some constraints on CHS program funds. If a better match is achieved between available funding and overall CHS program demand, IHS could have the opportunity to streamline eligibility requirements for the CHS program and to expand the services it pays for with CHS funds, assuming appropriation levels for the CHS program are maintained. Because the CHS program is generally the payer of last resort, if more American Indians and Alaska Natives gain new coverage, services that would have previously been paid for by the CHS program will be paid for by other payers.
In addition, because some American Indians and Alaska Natives will have access to benefits packages through these other coverage options—benefits packages that may be more comprehensive than the IHS benefits available to them now—more may choose to obtain care outside of the IHS system entirely. This could help free up some CHS program funds, potentially creating a better match between available funding and overall program demand. Some uncertainty remains, however, about the extent to which American Indians and Alaska Natives will obtain new health care coverage when PPACA is fully implemented. For example, not all states may choose to expand their Medicaid programs. In addition, we have reported previously on the challenges American Indians and Alaska Natives may face enrolling in Medicaid and other public insurance programs. Some barriers are unique to the American Indian and Alaska Native population—such as individuals believing they should not have to apply for other public insurance programs because the federal government has a duty to provide them with health care as a result of treaties with Indian tribes. In our prior work, we recommended that IHS increase its direct outreach to American Indians and Alaska Natives who may be eligible for new coverage options to help ensure significant new enrollment in these options. The current CHS program’s eligibility requirements reflect the method that IHS has chosen to stretch its funding to ensure that the most critical health services can be provided to the maximum number of beneficiaries. However, determining eligibility for CHS funding—including the need to ascertain each time a referral is received whether the patient met residency requirements and the service met medical priorities—is inherently complex. As currently structured, it is highly unlikely that the CHS program will be as quick a payer as some other payers because of the cumbersome steps involved in determining eligibility for each service. 
PPACA will expand existing sources of health coverage and create new ones for American Indians and Alaska Natives, and this could affect the CHS program in a number of ways. In particular, if these changes significantly reduce the demand placed on CHS program funds, IHS may have the opportunity to not only pay for a greater range of services but also restructure the CHS program to include less stringent eligibility requirements. For example, increased availability of CHS funding due to increased access among American Indians and Alaska Natives to other sources of health care coverage options under PPACA could give IHS the opportunity to establish a set of defined benefits for IHS beneficiaries, which would alleviate the need for CHS programs and providers to carry out time-consuming medical priority determinations. The opportunity also may arise for IHS to make other changes, such as issuing a form of eligibility card to CHS-eligible patients to help providers understand when to send claims to IHS, and to which local CHS program a claim should be sent, helping improve the timeliness of provider payments. In the interim, while the changes from PPACA are taking effect, IHS has the opportunity to continue to make improvements to the CHS program, including how it assesses the timeliness of provider payments and how it aligns CHS program staffing levels with workloads, and to proactively consider ways to streamline CHS eligibility requirements. 
In an effort to ensure that IHS has meaningful information on the timeliness with which it issues purchase orders authorizing payment under the CHS program and to improve the timeliness of payments to providers, we recommend that the Secretary of HHS direct the Director of IHS to: modify IHS’s claims data system to separately track IHS referrals and self-referrals, revise the GPRA measure for the CHS program so that it distinguishes between these two types of referrals, and establish separate timeframe targets for these referral types; and improve the alignment between CHS staffing levels and workloads by revising its current practices, where appropriate, to allow available funds to be used to pay for CHS program staff. In addition, as HHS and IHS monitor the effect that new coverage options available to IHS beneficiaries through PPACA have on CHS program funds, we recommend that the Secretary of HHS direct the Director of IHS to proactively develop potential options to streamline program eligibility requirements. We provided a draft of this report to HHS for review and received written comments, which are reprinted in appendix I. In its comments, HHS concurred with two of our recommendations and did not concur with one recommendation. HHS concurred with our recommendation that IHS modify its claims data system to separately track IHS referrals and self-referrals, revise the GPRA measure for the CHS program so that it distinguishes between these two types of referrals, and establish separate timeframe targets for these referral types. HHS also concurred with our recommendation that as HHS and IHS monitor the effect that new coverage options available to IHS beneficiaries through PPACA have on CHS program funds, IHS proactively develop potential options to streamline program eligibility requirements. 
HHS agreed with the premise that Medicaid eligibility expansion and private insurance for more American Indians and Alaska Natives will reduce the demand for CHS services and noted that IHS will monitor the effects of new coverage on program funds and develop options to improve and streamline the CHS program processes. HHS did not concur with our recommendation that IHS improve the alignment between CHS staffing levels and workloads by revising its current practices, where appropriate, to allow available funds to be used to pay for CHS program staff. In its response, HHS stated its intent to continue to only use CHS appropriations to purchase health care services and not to fund program staff, noting that available CHS program funds have not been sufficient to pay for all services and that at some facilities, funding was only sufficient to cover cases with the highest medical priority. We acknowledge the difficult challenges and choices faced by CHS programs when program funds are not sufficient to pay for all needed services. However, IHS has noted the importance of the agency maintaining an adequate workforce and has established staffing standards for the CHS program. As we reported, some IHS officials noted that their number of staff was below the staffing ratio established in IHS’s staffing standards model, and local CHS program officials told us that insufficient CHS program staffing levels have affected their ability to issue timely purchase orders. Further, recent increases in CHS funding for services have resulted in increased workloads, while staffing levels to manage the workloads have not increased. For these reasons, we continue to believe that IHS should improve the alignment between CHS staffing levels and workloads, making use of all available funding, including CHS program funds, when appropriate, to do so. 
We are sending copies of this report to the Secretary of Health and Human Services, the Director of the Indian Health Service, appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of the report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact name above, Gerardine Brennan, Assistant Director; George Bogart; Julianne Flowers; Natalie Herzog; Linda McIver; Laurie Pachter; and Michael Rose made key contributions to this report. Indian Health Service: Most American Indians and Alaska Natives Potentially Eligible for Expanded Health Coverage, but Action Needed to Increase Enrollment. GAO-13-553. Washington, D.C.: September 5, 2013. Indian Health Service: Capping Payment Rates for Nonhospital Services Could Save Millions of Dollars for Contract Health Services. GAO-13-272. Washington, D.C.: April 11, 2013. Medicaid Expansion: States’ Implementation of the Patient Protection and Affordable Care Act. GAO-12-821. Washington, D.C.: August 1, 2012. Indian Health Service: Action Needed to Ensure Equitable Allocation of Resources for the Contract Health Service Program. GAO-12-446. Washington, D.C.: June 15, 2012. Indian Health Service: Increased Oversight Needed to Ensure Accuracy of Data Used for Estimating Contract Health Service Need. GAO-11-767. Washington, D.C.: September 23, 2011. Indian Health Service: Updated Policies and Procedures and Increased Oversight Needed for Billings and Collections from Private Insurers. GAO-10-42R. Washington, D.C.: October 22, 2009. Medicare and Medicaid: CMS and State Efforts to Interact with the Indian Health Service and Indian Tribes. GAO-08-724. 
Washington, D.C.: July 11, 2008. Indian Health Service: Health Care Services Are Not Always Available to Native Americans. GAO-05-789. Washington, D.C.: August 31, 2005.
IHS provides health care to American Indians and Alaska Natives. When services are unavailable from IHS, IHS's CHS program may pay for care from external providers. GAO previously reported on challenges regarding the timeliness of CHS payments and the number of American Indians and Alaska Natives who may gain new health care coverage as a result of PPACA. PPACA mandated GAO to review the CHS program. This report examines (1) the length of time it takes external providers to receive payment from IHS after delivering CHS services; (2) the performance measures IHS has established for processing CHS provider payments; (3) the factors that affect the length of time it takes IHS to pay CHS providers; and (4) how new PPACA health care coverage options could affect the program. To conduct this work, GAO analyzed fiscal year 2011 CHS claims data, interviewed IHS officials, including officials in four IHS areas, and reviewed agency documents and statutes. For Indian Health Service (IHS) contract health services (CHS) delivered in fiscal year 2011, a majority of claims were paid within 6 months of the service delivery date, but some took much longer. Specifically, about 73 percent of claims were paid within 6 months of service delivery, while about 8 percent took more than 1 year. The CHS payment process consists of three main steps: (1) the local CHS program issues a purchase order to the provider authorizing payment (either before service delivery, or after, such as in emergency situations), (2) the provider submits a claim for payment, and (3) IHS pays the provider. GAO found that the first step took the longest--often taking more than 2 months. IHS uses three measures to assess the time it takes to approve and then process payments to CHS providers. Two of the measures concern the first step in the payment process (purchase order issuance) and the third concerns the final step (making the payment). 
One of the measures IHS uses to assess the timeliness of the first step is the average time it takes to issue a purchase order after a service has been delivered; IHS's current target for this measure is 74 days. However, the measure does not provide a clear picture of timeliness for this activity as it combines data for two different types of CHS services--those for which payment eligibility was determined prior to service delivery and those for which eligibility was determined after service delivery. IHS officials told GAO that when eligibility is determined prior to service delivery, it may take only one day from the date of service to issue the purchase order. Including this type of service in the calculation, therefore, lowers the overall average. The complexity of the CHS program affects the timeliness of provider payments. IHS program officials make decisions on what care will be funded on a case-by-case basis, evaluating each case against a number of eligibility requirements involving multiple steps. This process can lead to payment delays. Officials noted that delays also can occur when processing payments and that staffing shortages can affect the timeliness of payments. Some program officials noted that their staffing levels were below standards established by IHS. New coverage options in the Patient Protection and Affordable Care Act (PPACA) may provide an opportunity to simplify CHS eligibility requirements. PPACA made significant changes to the Medicaid program and included new health care coverage options that may benefit many American Indians and Alaska Natives beginning in 2014. IHS officials reported the agency developed the current CHS program eligibility requirements to manage CHS program funding constraints. In particular, some of the complexities of the program were designed to allow the program to operate within the constrained levels of program funding. 
With the availability of new coverage options under PPACA, some constraints on CHS program funds could be alleviated, providing IHS an opportunity to streamline service eligibility requirements and expand the range of services it pays for with CHS funds. GAO recommends that IHS revise an agency measure of the timeliness with which purchase orders are issued, use available funds as appropriate to improve the alignment between CHS staffing levels and workloads, and proactively develop potential options to streamline CHS eligibility requirements. The agency concurred with two recommendations, but did not concur with the recommendation to use available funds to improve CHS staffing levels. GAO believes the recommendation is valid as discussed in the report.
Managed care and fee-for-service (FFS) are two possible models that states use to deliver benefits under their Medicaid programs. Most states provide a combination of these two delivery models, which offer different financial incentives. Nationally, more than half of Medicaid beneficiaries are enrolled in a managed care plan. States contract with MCOs to provide a specific set of Medicaid-covered services to beneficiaries, and MCOs are expected to report encounter data to state Medicaid programs that allow the Medicaid administrators to track the services received by enrolled beneficiaries. The state pays the MCO a predetermined amount per beneficiary per month—known as capitation—and, in turn, the MCO pays providers for their services. According to CMS, by contracting with various types of Medicaid MCOs to deliver services, states can reduce program costs and better manage utilization of health care services. However, since MCOs receive a fixed amount per beneficiary regardless of the number of services used, we have noted in our prior work that there may be financial incentives for MCOs to limit access to services, potentially compromising quality of care and leading to underutilization of services. Historically, most Medicaid programs relied on a FFS delivery model. Under the FFS model, states pay providers directly for each service provided to a Medicaid beneficiary and the data included on a Medicaid FFS claim includes a specific amount for services delivered to a beneficiary. Certain states continue to use the FFS model to provide Medicaid services, such as behavioral health and dental care. We have noted in our prior work that, unlike managed care, the FFS model may give providers an incentive to use more services than necessary. Despite the fact that states have been required to submit encounter data to CMS since 1999, little is known about the utilization of services by Medicaid beneficiaries in MCOs. 
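The financial contrast between the two delivery models described above can be illustrated with a small sketch. All rates, counts, and fee amounts here are hypothetical, chosen only to show how a capitated payment is fixed per enrollee while FFS cost scales with the number of services delivered:

```python
# Hypothetical illustration of the two Medicaid payment models described
# above: capitation (managed care) vs. fee-for-service (FFS).

def capitation_payment(enrolled_beneficiaries, rate_per_member_per_month, months=12):
    """State pays the MCO a fixed amount per beneficiary per month,
    regardless of how many services beneficiaries actually use."""
    return enrolled_beneficiaries * rate_per_member_per_month * months

def ffs_payment(services_rendered, fee_per_service):
    """State pays providers directly for each service delivered."""
    return sum(fee_per_service[s] for s in services_rendered)

# Hypothetical numbers: 1,000 beneficiaries at $300 per member per month.
mco_cost = capitation_payment(1_000, 300)  # fixed $3,600,000/year

# Under FFS, the same population's cost depends entirely on utilization.
fees = {"office_visit": 75, "er_visit": 500, "lab": 40}
services = ["office_visit"] * 4_000 + ["er_visit"] * 300 + ["lab"] * 5_000
ffs_cost = ffs_payment(services, fees)  # $650,000 for these services

print(mco_cost, ffs_cost)
```

Because the capitated amount is fixed, any utilization beyond what the rate assumed comes out of the MCO's margin, which is the source of the underutilization incentive noted above; under FFS, each additional service adds revenue for the provider, the opposite incentive.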
Historically, encounter data have been relatively incomplete and unreliable; thus, little is known about these data. At the behest of CMS, Mathematica Policy Research published a number of studies focused primarily on the usability and completeness of 2007- 2010 Medicaid encounter data, as reported in MAX. These studies first reported that encounter data were suitable for research purposes in 2012. CMS has provided guidance to states on methods to improve the completeness and accuracy of encounter data. In 2012, CMS released a protocol for validating Medicaid encounter data that states receive from MCOs. The protocol specifies a procedure for assessing the completeness and accuracy of encounter data that Medicaid MCOs are required to submit. Additionally, PPACA strengthened the requirement that Medicaid MCOs provide encounter data to states by withholding federal matching payments from states that do not report encounter data to CMS in a timely manner. The service utilization patterns of beneficiaries enrolled in Medicaid managed care plans can vary substantially and be related to a variety of factors, including the characteristics of beneficiaries and the scope of state Medicaid benefits offered. Beneficiary participation in managed care: States vary in the populations enrolled in managed care plans. States that enroll their most medically needy beneficiaries into managed care plans are likely to have higher service utilization. Conversely, states that enroll broader, generally healthier populations—such as children—into managed care plans are likely to have a larger pool of beneficiaries and potentially lower service utilization. The amount, duration, and scope of services covered by MCOs: Consistent with federal requirements, a state may determine the amount, duration, and the scope of benefits covered in their Medicaid programs. 
Thus, variations in service utilization patterns could reflect states’ benefit choices that are independent of their service delivery choices. Variation in Medicaid managed care payments: Medicaid MCO payments to providers for specific services vary substantially across states and this variation could affect the service utilization of beneficiaries. Specifically, we previously reported that in 18 of the 23 states where we compared MCO and private insurance payments for E/M services, managed care payments were 31 to 65 percent lower. Access to Providers: Access to providers who serve beneficiaries enrolled in Medicaid managed care plans can vary substantially within a state, such as between urban and rural areas, and also across states. Geographic variation in provider access, which can be driven by the breadth of an MCO’s network and the availability of providers in a given geographic area, can affect the type and amount of services used by beneficiaries. Based on our analysis of encounter data, the number of professional services utilized by adult and child beneficiaries per year in the 19 selected states ranged widely, with adult beneficiaries typically receiving more services. States also varied in how adult and child service utilization for professional services was distributed across service categories, and by whether beneficiaries were enrolled in comprehensive managed care plans for all of 2010 or part of the year. A detailed, interactive display of the data used to support our findings is available at http://www.gao.gov/products/GAO-15-481. For the 19 selected states, the number of services per beneficiary per year for adults ranged from about 13 to 55. (See fig. 1.)
Services used by adult beneficiaries included E/M services, such as office visits and emergency room and critical care services; procedural services, such as surgery and ophthalmology; ancillary services, such as pathology and lab services and anesthesiology; and other professional services, such as oxygen therapy and hospital-mandated on-call service. Service utilization levels for adult beneficiaries are affected by many factors, including the extent to which they receive services on a FFS basis. Among the states in our analysis, the percentage of professional services that adult beneficiaries received on a FFS basis ranged from 0 to about 11 percent, with a median of 1 percent. Service utilization among adults was concentrated primarily in the ancillary and E/M categories. Specifically, ancillary services were the largest category in all but one state and accounted for 53 percent, on average, of all services utilized by adult beneficiaries across selected states. E/M services made up the second largest category (27 percent), followed by procedural services (15 percent) and, lastly, other professional services (4 percent). However, states varied considerably in how service utilization was distributed within service categories, as was shown in figure 1. Ancillary: Of total services, adult per beneficiary utilization of ancillary services ranged from 37 percent in Rhode Island to 65 percent in Washington and Illinois—a difference of about 28 percentage points. Pathology/lab services accounted for 63 percent, on average, of all ancillary service utilization across selected states. E/M: Of total services, adult per beneficiary utilization of E/M services ranged from 19 percent in Connecticut to 38 percent in Rhode Island—a difference of 19 percentage points. Office visits accounted for 68 percent on average, of E/M service utilization, while emergency room and critical care services accounted for 16 percent, on average. 
Procedural: Of total services, adult per beneficiary utilization of procedural services ranged from 8 percent in Illinois to 23 percent in Indiana—a difference of about 15 percentage points. Surgical services accounted for the largest portion—36 percent, on average— of all procedural service utilization across selected states. Other professional services: Of total services, adult per beneficiary utilization of other professional services ranged from 1 percent in Illinois, Kentucky, Nebraska, and Washington to 15 percent in Arizona—a difference of about 15 percentage points. For slightly more than half of the selected states, total service utilization among adults was higher for partial-year beneficiaries—those in comprehensive managed care plans for less than the full-year of 2010. Specifically, in 11 of the 19 states, the number of services utilized per year ranged from 2 to 78 percent higher for partial-year beneficiaries than for full-year beneficiaries. In the remaining 8 states, service utilization for partial-year beneficiaries was 3 to 15 percent lower than for full-year beneficiaries. Of the states that had comparatively higher service utilization for partial-year beneficiaries, there were generally no major differences in service utilization among partial-year beneficiaries based on the length of their enrollment. Specifically, partial-year beneficiaries who were enrolled for 1 to 3 months, 4 to 6 months, or 7 to 11 months generally had similarly high utilization rates. Further, we found that partial-year adult beneficiaries utilized more procedural and ancillary services than full-year beneficiaries in about two- thirds of the selected states. Among those states, partial-year adult beneficiaries used 19 and 26 percent more of these services, respectively, than full-year adult beneficiaries. (See fig. 2.) The utilization of professional services by children was generally lower than adults in selected states. 
In the 19 selected states, the number of services per beneficiary per year for children ranged from about 6 to 16. (See fig. 3.) Services used by child beneficiaries included E/M services, such as office visits and emergency room and critical care services; procedural services, such as surgery and ophthalmology; ancillary services, such as pathology and lab services and anesthesiology; and other professional services, such as oxygen therapy and hospital- mandated on-call service. Service utilization for child beneficiaries is affected by many factors, including the extent to which beneficiaries receive services on a FFS basis. Among selected states, the percentage of professional services that child beneficiaries received on a FFS basis ranged from 0 to about 29 percent, with a median of 9 percent. In contrast to adults, for which service utilization consisted mostly of ancillary services, utilization for children was distributed more evenly across service categories. For example, on average, E/M services were utilized most commonly by children (37 percent of services), followed by procedural services (33 percent), ancillary services (24 percent), and other professional services (5 percent). However, considerable state variation existed within service categories, as was shown in figure 3. E/M: Of total services, child per beneficiary utilization of E/M services ranged from 29 percent in Minnesota to 45 percent in Georgia and Rhode Island—a difference of 16 percentage points. On average, office visits (58 percent) and preventive visits (22 percent) comprised most E/M service utilization across selected states. Procedural: Of total services, child per beneficiary utilization of procedural services ranged from 25 percent in Arizona to 41 percent in Oregon and Texas—a difference of 16 percentage points. On average, immunizations and injections (60 percent) made up the majority of procedural service utilization across selected states. 
Ancillary: Of total services, child per beneficiary utilization of ancillary services ranged from 17 percent in Oregon to 36 percent in Illinois—a difference of 19 percentage points. On average, pathology and lab services (63 percent) made up the majority of ancillary service utilization across selected states. Other professional services: Of total services, child per beneficiary utilization of other professional services ranged from 1 percent in Georgia, Illinois, and New York to 21 percent in Arizona—a difference of 20 percentage points. Total service utilization among children was higher for partial-year beneficiaries—those enrolled in comprehensive managed care for less than the full year of 2010—than full-year beneficiaries for almost every selected state. For example, for all but one state, the number of services utilized per year was 4 to 44 percent higher for partial-year child beneficiaries than for full-year child beneficiaries. In the remaining state, the number of services utilized per year for partial-year child beneficiaries was 5 percent less than for full-year child beneficiaries. Further, partial-year child beneficiaries utilized more E/M and procedural services than full-year child beneficiaries across all selected states; specifically, partial-year child beneficiaries utilized 19 percent more E/M services and 22 percent more procedural services than full-year child beneficiaries. (See fig. 4.) Among selected states with higher utilization for partial-year child beneficiaries, most experienced the highest utilization for child beneficiaries who were enrolled for 1 to 3 months as compared with 4 to 6 months or 7 to 11 months. When compared with full-year child beneficiaries, partial-year child beneficiaries enrolled for 1 to 3 months utilized, on average, significantly more E/M and procedural services (61 percent and 34 percent, respectively). 
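The "X percent higher" comparisons in this section reduce to a simple relative difference between partial-year and full-year utilization rates. A minimal sketch with made-up rates follows; the 44 percent and 5 percent results echo the ranges reported above but are not actual state figures:

```python
# Relative difference used in the partial-year vs. full-year comparisons:
# how much higher (or lower) partial-year utilization is than full-year.
# The utilization rates passed in below are invented illustrations.

def percent_difference(partial_year_rate, full_year_rate):
    """Positive result: partial-year beneficiaries used more services."""
    return (partial_year_rate - full_year_rate) / full_year_rate * 100

print(round(percent_difference(14.4, 10.0)))  # 44: partial-year 44% higher
print(round(percent_difference(9.5, 10.0)))   # -5: partial-year 5% lower
```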
Furthermore, the increased utilization among child beneficiaries enrolled for 1 to 3 months was particularly pronounced for certain E/M, procedural, and ancillary services. We found the following examples: Inpatient visits: Across all selected states, utilization of inpatient visits ranged from 1.4 to over 15 times greater for child beneficiaries enrolled for 1 to 3 months than for full-year child beneficiaries. Preventive services: For all but one selected state, utilization of preventive visits ranged from 1.5 to almost 4 times greater for child beneficiaries enrolled for 1 to 3 months than for full-year child beneficiaries. Emergency room and critical care: For all but one selected state, utilization of emergency room and critical care services ranged from 1.2 to almost 3 times greater among child beneficiaries enrolled for 1 to 3 months than for full-year child beneficiaries. Surgery: For all but one selected state, utilization of surgery ranged from 1 to 2.5 times greater for child beneficiaries enrolled for 1 to 3 months than for full-year child beneficiaries. Radiology: Across all selected states, utilization of radiology ranged from 1.2 to 2.5 times greater for child beneficiaries enrolled for 1 to 3 months than for full-year child beneficiaries. We provided the Secretary of Health and Human Services with a draft of this report. The Department of Health and Human Services provided technical comments, which we incorporated as appropriate. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after its issuance date. At that time, we will send copies of this report to the Secretary of Health and Human Services. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix IV. We examined service utilization patterns for Medicaid beneficiaries enrolled in comprehensive managed care plans using Medicaid Analytic eXtract (MAX) encounter data for calendar year 2010, the most recent year for which encounter data were available for the majority of states at the time we began our analyses. Our analysis consisted of the following three steps: (1) state selection, which included assessing data reliability for our selected states; (2) beneficiary and service identification; and (3) utilization calculation. Lastly, we present limitations of this study and technical comments that we received from 13 of the 19 selected states.
Step 1: State Selection
To assess the reliability and usability of the MAX data for our purposes, we reviewed related documentation and studies that assessed the reliability of or analyzed MAX data, and we interviewed officials from the Centers for Medicare & Medicaid Services (CMS) and its contractor responsible for processing the MAX data (Mathematica Policy Research, Inc.). We determined that 19 states reported data that were reliable for our purposes: Arizona, Connecticut, Delaware, Florida, Georgia, Illinois, Indiana, Kentucky, Michigan, Minnesota, Nebraska, New Mexico, New York, Oregon, Rhode Island, Tennessee, Texas, Virginia, and Washington.
We excluded the remaining 31 states and the District of Columbia because we determined their data were unreliable or not usable for our purposes for one or more of the following reasons: (1) no adults or children were enrolled in comprehensive managed care plans, according to MAX (13 states); (2) MAX data were unavailable at the time we began our analysis (11 states); (3) the data were unreliable, such as if fewer than 30 percent of beneficiaries used at least one service—one of the thresholds established by Mathematica when evaluating the completeness and usability of MAX data (6 states); and (4) services were not reported using a standard coding convention, namely the Health Care Common Procedural Coding System (HCPCS) (2 states). (See table 1.)
Step 2: Beneficiary and Service Identification
Based on eligibility information in the MAX Person Summary file, we restricted our study to adults and children who (1) were eligible to receive full Medicaid benefits and were enrolled for any given month during calendar year 2010, and (2) did not have other sources of health coverage during the calendar year in addition to Medicaid, such as coverage from Medicare or private insurance. These beneficiaries represented 95 percent of the adults and children in comprehensive managed care among the 19 states in our analysis in 2010. We used the MAX Other Services file to identify professional services used by beneficiaries while they were enrolled in a comprehensive managed care plan. We, in large part, used the Health Care Cost Institute’s methodology for grouping professional services based on a range of HCPCS codes. These codes are used by providers to bill for professional services. We grouped the professional services in our analysis into four broad categories. (See table 2.) We excluded dental and behavioral health services from our analysis because these services may be contracted out by managed care organizations (MCO) and provided on a fee-for-service (FFS) basis.
Additionally, Mathematica reported concerns regarding the quality of managed care behavioral health data.
Step 3: Utilization Calculation
For each service provided to each beneficiary described in steps 1 and 2 above, we calculated the number of services per beneficiary per year. This is defined as the number of services that beneficiaries enrolled in comprehensive managed care plans used in a year; the denominator includes both users and nonusers enrolled in comprehensive managed care plans within the state. We presented service utilization patterns for adults and children by state, by service category, and by the length of beneficiary enrollment—in particular, whether beneficiaries were enrolled in a comprehensive managed care plan for a full or partial year. We then further grouped partial-year beneficiaries into monthly increments—1-3, 4-6, and 7-11 months—to determine whether there were differences in utilization patterns by the varying lengths of enrollment. In addition to services used by beneficiaries enrolled in a comprehensive managed care plan, we also calculated the extent to which the beneficiaries in our analysis received professional services paid on a FFS basis while they were enrolled in a comprehensive managed care plan. See http://www.gao.gov/products/GAO-15-481 for further detail on these measures and the FFS data. The results we present are based on data reported to CMS by the 19 states in our analysis. We did not independently verify whether the individual MCOs in these states submitted complete and accurate data on the enrollment and services for beneficiaries enrolled in comprehensive managed care. To better understand the factors that may affect service utilization, we asked representatives from each state to comment on the accuracy and completeness of the state’s 2010 managed care data submitted to CMS. Thirteen of the 19 selected states responded to our request.
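The per-beneficiary calculation described in Step 3 can be sketched roughly as follows. The record layouts and field names are simplified stand-ins of our own devising, not the actual MAX Person Summary or Other Services file structures:

```python
# Rough sketch of the Step 3 calculation: number of services per
# beneficiary per year, split by full-year vs. partial-year enrollment.
# Enrollment months and claims below are invented examples.

# beneficiary_id -> months enrolled in comprehensive managed care in the year
enrollment = {"A": 12, "B": 3, "C": 7}

# One row per encounter claim: (beneficiary_id, service_category)
claims = [
    ("A", "E/M"), ("A", "ancillary"), ("A", "ancillary"),
    ("B", "E/M"), ("B", "procedural"),
    # "C" used no services; nonusers still count in the denominator.
]

def services_per_beneficiary(enrollment, claims, include):
    """include: predicate on enrolled months selecting the group."""
    group = {b for b, months in enrollment.items() if include(months)}
    n_services = sum(1 for b, _ in claims if b in group)
    return n_services / len(group)

full_year = services_per_beneficiary(enrollment, claims, lambda m: m == 12)
partial_year = services_per_beneficiary(enrollment, claims, lambda m: m < 12)
print(full_year, partial_year)  # 3.0 1.0
```

The same function, with different predicates, yields the 1-3, 4-6, and 7-11 month groupings used in the partial-year comparisons.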
These states met our criteria for inclusion, as well as the minimum threshold for the number of services per adult or child beneficiary used by Mathematica to assess the completeness of each state’s data. Nevertheless, of these 13 states, 4 states indicated that they may have submitted incomplete enrollment or encounter data in 2010, citing a variety of reasons. For example, 1 state indicated its Medicaid managed care program was in the process of major changes and service data were likely not complete. Another state indicated that, in calendar year 2010, some of the state’s managed care data were not reported due to quality problems. Officials from the remaining 9 states noted their results either seemed reasonable based on their knowledge of their state’s Medicaid managed care program or that the managed care data they submitted to CMS for 2010 were believed to be accurate. The results we present for the 19 states in our analysis are not representative of all states and their managed care programs, nor do our results draw any conclusions regarding whether the level of service utilization identified is appropriate. There are a number of state-specific factors—such as differences in beneficiary health status and provider supply—that could contribute to variation in service utilization across the states. For example, officials from 1 state noted that the state’s MCOs were limited to certain geographical areas of the state. As such, geographic variation in provider access, which can be driven by the breadth of an MCO’s network and the availability of providers in a given geographic area, can affect the type and amount of services used by beneficiaries in Medicaid managed care. The tables below provide the number of services per beneficiary per year for adults and children by state, service category, and length of enrollment.
Based on our analysis of encounter data, the number of professional services utilized per beneficiary per year reported by the 19 selected states for adults enrolled in comprehensive managed care plans in 2010 ranged from about 13 to 55; the range for children was generally lower, from about 6 to 16. States varied in how adult and child utilization of professional services were distributed across service categories. In addition, service utilization for both adults and children varied by whether beneficiaries were enrolled in comprehensive managed care plans for all of 2010 or part of the year. In particular, for nearly all states in our analysis, partial-year child beneficiaries utilized significantly more services overall than those enrolled for the full year. In addition to the contact named above, William Black, Assistant Director; Christine Brudevold, Assistant Director; Ramsey Asaly; Stella Chiang; Greg Dybalski; Sandra George; Drew Long; Jessica Morris; and Vikki Porter made key contributions to this report.
Medicaid, a federal-state health financing program for low-income and medically needy individuals, covered 65 million beneficiaries at an estimated cost of $508 billion in fiscal year 2014. More than half of Medicaid beneficiaries are enrolled in managed care plans, a health care delivery model where states contract with managed care organizations to provide covered services for a set cost. Historically, states have submitted relatively unreliable managed care service utilization data, also known as encounter data, to the Centers for Medicare & Medicaid Services, the federal agency that oversees Medicaid. However, recent evidence suggests that encounter data may be improving. Information on beneficiaries' service utilization could serve as a baseline for future analyses of utilization trends over time. GAO was asked to examine the level of services provided to these beneficiaries. In this report, GAO describes what encounter data indicate about the service utilization of Medicaid beneficiaries in managed care plans. To do this work, GAO analyzed state-reported data included in CMS's 2010 Medicaid Analytic eXtract data and determined that 19 states had data that were reliable for its purposes, but excluded the remaining 31 states and the District of Columbia. For these 19 states, GAO calculated service utilization rates for adult and child beneficiaries enrolled in comprehensive managed care plans by state, service category, and length of enrollment. GAO received technical comments on a draft of this report from HHS and incorporated them as appropriate. Based on GAO's analysis of 2010 encounter data reported by 19 states, the number of professional services utilized by adult beneficiaries ranged from about 13 to 55. For children, the number of professional services utilized per beneficiary was lower, ranging from about 6 to 16 among the 19 states. 
Professional services included four categories of services: (1) evaluation and management (E/M) services, such as office visits and emergency room and critical care services; (2) procedural services, such as surgery and ophthalmology; (3) ancillary services, such as pathology and lab services; and (4) other professional services, such as oxygen therapy. States varied considerably in how service utilization was distributed within service categories. For example, of total services, adult per beneficiary utilization of ancillary services ranged from 37 percent in Rhode Island to 65 percent in Washington and Illinois; and child per beneficiary utilization of E/M services ranged from 29 percent in Minnesota to 45 percent in Georgia and Rhode Island. Service utilization for both adult and child beneficiaries also varied by the length of enrollment. When compared with beneficiaries enrolled for a full year, total service utilization for adults was 2 to 78 percent higher for partial-year beneficiaries—those enrolled in a comprehensive managed care plan for less than the full year—in slightly more than half of selected states. For children in all but one selected state, service utilization was 4 to 44 percent higher for partial-year beneficiaries compared with full-year beneficiaries.
As I have stated in other testimony, Medicare as currently structured is fiscally unsustainable. While many people have focused on the improvement in the HI trust fund’s shorter-range solvency status, the real news is that we now have a more realistic view of Medicare’s long-term financial condition and the outlook is much bleaker. A consensus has emerged that previous program spending projections have been based on overly optimistic assumptions and that actual spending will grow faster than has been assumed. First, let me talk about how we measure Medicare’s fiscal health. In the past, Medicare’s financial status has generally been gauged by the projected solvency of the HI trust fund, which covers primarily inpatient hospital care and is financed by payroll taxes. Looked at this way, Medicare—more precisely, Medicare’s Hospital Insurance trust fund—is described as solvent through 2029. However, even from the perspective of HI trust fund solvency, the estimated exhaustion date of 2029 does not mean that we can or should wait until then to take action. In fact, delay in addressing the HI trust fund imbalance means that the actions needed will be larger and more disruptive. Taking action today to restore solvency to the HI trust fund for the next 75 years would require benefit cuts of 37 percent or tax increases of 60 percent, or some combination of the two. While these actions would not be easy or painless, postponing action until 2029 would require more than doubling the payroll tax or cutting benefits by more than half to maintain solvency. (See fig. 1.) Given that, over the long term, Medicare costs are now projected to grow 1 percentage point faster than GDP, HI’s financial condition is expected to continue to worsen after the 75-year period. By 2075, HI’s annual financing shortfall—the difference between program income and benefit costs—will reach 7.35 percent of taxable payroll.
This means that if no action is taken this year, shifting the 75-year horizon out one year to 2076—a large deficit year—and dropping 2001—a surplus year—would yield a higher actuarial deficit, all other things being equal. Moreover, HI trust fund solvency does not mean the program is financially healthy. Under the Trustees’ 2001 intermediate estimates, HI outlays are projected to exceed HI tax revenues beginning in 2016, the same year in which Social Security outlays are expected to exceed tax revenues. (See fig. 2.) As the baby boom generation retires and the Medicare-eligible population swells, the imbalance between outlays and revenues will increase dramatically. Thus, in 15 years the HI trust fund will begin to experience a growing annual cash deficit. At that point, the HI program must redeem Treasury securities acquired during years of cash surplus. Treasury, in turn, must obtain cash for those redeemed securities either through increased taxes, spending cuts, increased borrowing, retiring less debt, or some combination thereof. Finally, HI trust fund solvency does not measure the growing cost of the Part B SMI component of Medicare, which covers outpatient services and is financed through general revenues and beneficiary premiums. Part B accounts for somewhat more than 40 percent of Medicare spending and is expected to account for a growing share of total program dollars. As the Trustees noted in this year’s report, a rapidly growing share of general revenues and substantial increases in beneficiary premiums will be required to cover part B expenditures. Clearly, it is total program spending—both Part A and Part B—relative to the entire federal budget and national economy that matters. This total spending approach is a much more realistic way of looking at the combined Medicare program’s sustainability. In contrast, the historical measure of HI trust fund solvency cannot tell us whether the program is sustainable over the long haul. 
Worse, it can serve to distort perceptions about the timing, scope, and magnitude of our Medicare challenge.

These figures reflect a worsening of the long-term outlook. Last year a technical panel advising the Medicare Trustees recommended assuming that future per-beneficiary costs for both HI and SMI eventually will grow at a rate 1 percentage point above GDP growth—about 1 percentage point higher than had previously been assumed. That recommendation—which was consistent with a similar change CBO had made to its Medicare and Medicaid long-term cost growth assumptions—was adopted by the Trustees in their new estimates published on March 19, 2001. The Trustees note in their report that this new assumption substantially raises the long-term cost estimates for both HI and SMI. In their view, incorporating the technical panel’s recommendation yields program spending estimates that represent a more realistic assessment of likely long-term program cost growth.

Under the old assumption (the Trustees’ 2000 best estimate intermediate assumptions), total Medicare spending consumed 5 percent of GDP by 2063. Under the new assumption (the Trustees’ 2001 best estimate intermediate assumptions), this occurs almost 30 years sooner, in 2035—and by 2075 Medicare consumes over 8 percent of GDP, compared with 5.3 percent under the old assumption. The difference clearly demonstrates the dramatic implications of a 1-percentage-point increase in annual Medicare spending growth over time. (See fig. 3.) In part, the progressive absorption of a greater share of the nation’s resources by health care, as with Social Security, reflects the rising share of the population that is elderly. Both programs face demographic conditions that require action now to avoid burdening future generations with the programs’ rising costs. 
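The sensitivity to the growth assumption follows from compounding: spending that grows 1 percentage point faster than GDP multiplies its share of GDP by 1.01 each year, so the share roughly doubles about every 70 years. A back-of-the-envelope sketch of that rule of thumb (our simplification, holding the growth differential constant, not a reproduction of the Trustees' projections):

```python
import math

# Rule of thumb (our simplification): if spending grows 1 percentage point
# faster than GDP each year, its share of GDP is multiplied by 1.01 annually.
doubling_time = math.log(2) / math.log(1.01)
print(round(doubling_time))   # about 70 years for the share to double

# Over a 75-year projection window the share more than doubles.
print(round(1.01 ** 75, 2))   # about 2.11x
```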
As with Social Security, Medicare’s financial condition is directly affected by the relative size of the populations of covered workers and beneficiaries. Historically, this relationship has been favorable. In the near future, however, the covered worker-to-retiree ratio will change in ways that threaten the financial solvency and sustainability of this important national program. In 1970 there were 4.6 workers per HI beneficiary. Today there are about 4, and by 2030 this ratio will decline to only 2.3 workers per HI beneficiary. (See fig. 4.)

Unlike Social Security, however, Medicare growth rates reflect not only a burgeoning beneficiary population, but also the escalation of health care costs at rates well exceeding general rates of inflation. Increases in the number and quality of health care services have been fueled by the explosive growth of medical technology. Moreover, the actual costs of health care consumption are not transparent. Third-party payers generally insulate consumers from the cost of health care decisions. All of these factors contribute to making Medicare a much greater and more complex fiscal challenge than even Social Security.

When viewed from the perspective of the federal budget and the economy, the growth in health care spending will become increasingly unsustainable over the longer term. Figure 5 shows the sum of the future expected HI cash deficit and the expected general fund contribution to SMI as a share of federal income taxes under the Trustees’ 2001 intermediate estimates. SMI has received contributions from the general fund since the inception of the program. This general revenue contribution is projected to grow from about 5 percent of federal personal and corporate income taxes in 2000 to 13 percent by 2030. Beginning in 2016, use of general fund revenues will be required to pay benefits as the HI trust fund redeems its Treasury securities. 
Assuming general fund revenues are used to pay benefits after the trust fund is exhausted, by 2030 the HI program alone would consume more than 6 percent of income tax revenue. On a combined basis, Medicare’s draw on general revenues would grow from 5.4 percent of income taxes today to nearly 20 percent in 2030 and 45 percent by 2070.

Figure 6 reinforces the need to look beyond the HI program. HI is only the first layer in this figure. The middle layer adds the SMI program, which is expected to grow faster than HI in the near future. By the end of the 75-year projection period, SMI will represent almost half of total estimated Medicare costs. To get a more complete picture of the future federal health care entitlement burden, Medicaid is added. Medicare and the federal portion of Medicaid together will grow to 14.5 percent of GDP from today’s 3.5 percent. Taken together, the two major government health programs—Medicare and Medicaid—represent an unsustainable burden on future generations. In addition, this figure does not reflect the taxpayer burden of state and local Medicaid expenditures. A recent statement by the National Governors Association argues that increased Medicaid spending has already made it difficult for states to increase funding for other priorities.

Our long-term simulations show that to move into the future with no changes in federal health and retirement programs is to envision a very different role for the federal government. Assuming, for example, that Congress and the President adhere to the often-stated goal of saving the Social Security surpluses, our long-term simulations show a world by 2030 in which Social Security, Medicare, and Medicaid absorb most of the available revenues within the federal budget. Under this scenario, these programs would require more than three-quarters of total federal revenue even without adding a Medicare prescription drug benefit. (See fig. 7.) 
Revenue as a share of GDP declines from its 2000 level of 20.6 percent due to unspecified permanent policy actions. In this display, policy changes are allocated equally between revenue reductions and spending increases. The “Save the Social Security Surpluses” simulation can only be run through 2056 due to the elimination of the capital stock. This scenario contemplates saving surpluses for 20 years—an unprecedented period of surpluses in our history—and retiring publicly held debt. Alone, however, even saving all Social Security surpluses would not be enough to avoid encumbering the budget with unsustainable costs from these entitlement programs. Little room would be left for other federal spending priorities such as national defense, education, and law enforcement. Absent changes in the structure of Medicare and Social Security, sometime during the 2040s the government would do nothing but mail checks to the elderly and their health care providers. Accordingly, substantive reform of the Medicare and Social Security programs remains critical to recapturing our future fiscal flexibility.

Demographics argue for early action to address Medicare’s fiscal imbalances. Ample time is required to phase in the reforms needed to put this program on a more sustainable footing before the baby boomers retire. In addition, timely action to bring costs down pays large fiscal dividends for the program and the budget. The high projected growth of Medicare in the coming years means that the earlier reform begins, the greater the savings will be as a result of the effects of compounding. Beyond reforming the Medicare program itself, maintaining an overall sustainable fiscal policy and strong economy is vital to enhancing our nation’s future capacity to afford paying benefits in the face of an aging society. Today’s decisions can have wide-ranging effects on our ability to afford tomorrow’s commitments. 
As I have testified before, you can think of the budget choices you face as a portfolio of fiscal options balancing today’s unmet needs with tomorrow’s fiscal challenges. At one end—with the lowest risk to the long-range fiscal position—is reducing publicly held debt. At the other end—offering the greatest risk—is increasing entitlement spending without fundamental program reform. Reducing publicly held debt helps lift future fiscal burdens by freeing up budgetary resources encumbered for interest payments, which currently represent about 12 cents of every federal dollar spent, and by enhancing the pool of economic resources available for private investment and long-term economic growth. This is particularly crucial in view of the known fiscal pressures that will begin bearing down on future budgets in about 10 years as the baby boomers start to retire. However, as noted above, debt reduction is not enough. Our long-term simulations illustrate that, absent entitlement reform, large and persistent deficits will return.

Despite common agreement that, without reform, future program costs will consume growing shares of the federal budget, there is also a mounting consensus that Medicare’s benefit package should be expanded to cover prescription drugs, which will add billions to the program’s cost. This places added pressure on policymakers to consider proposals that could fundamentally reform Medicare. Our previous work provides, I believe, some considerations that are relevant to deliberations regarding the potential addition of a prescription drug benefit and Medicare reform options that would inject competitive mechanisms to help control costs. In addition, our reviews of HCFA offer lessons for improving Medicare’s management. Implementing necessary reforms that address Medicare’s financial imbalance and meet the needs of beneficiaries will not be easy. We must have a Medicare agency that is ready and able to meet these 21st century challenges. 
Among the major policy challenges facing the Congress today is how to reconcile Medicare’s unsustainable long-range financial condition with the growing demand for an expensive new benefit—namely, coverage for prescription drugs. It is a given that prescription drugs play a far greater role in health care now than when Medicare was created. Today, Medicare beneficiaries tend to need and use more drugs than other Americans. However, because adding a benefit of such potential magnitude could further erode the program’s already unsustainable financial condition, you face difficult choices about design and implementation options that will have a significant impact on beneficiaries, the program, and the marketplace. Let’s examine the current status regarding Medicare beneficiaries and drug coverage. About a third of Medicare beneficiaries have no coverage for prescription drugs. Some beneficiaries with the lowest incomes receive coverage through Medicaid. Some beneficiaries receive drug coverage through former employers, some can join Medicare+Choice plans that offer drug benefits, and some have supplemental Medigap coverage that pays for drugs. However, significant gaps remain. For example, Medicare+Choice plans offering drug benefits are not available everywhere and generally do not provide catastrophic coverage. Medigap plans are expensive and have caps that significantly constrain the protection they offer. Thus, beneficiaries with modest incomes and high drug expenditures are most vulnerable to these coverage gaps. Overall, the nation’s spending on prescription drugs has been increasing about twice as fast as spending on other health care services, and it is expected to keep growing. Recent estimates show that national per-person spending for prescription drugs will increase at an average annual rate exceeding 10 percent until at least 2010. 
As the cost of drug coverage has been increasing, employers and Medicare+Choice plans have been cutting back on prescription drug benefits by raising enrollees’ cost-sharing, charging higher copayments for more expensive drugs, or eliminating the benefit altogether. It is not news that adding a prescription drug benefit to Medicare will be costly. However, the cost consequences of a Medicare drug benefit will depend on choices made about its design—including the benefit’s scope and financing mechanism. For instance, a Medicare prescription drug benefit could be designed to provide coverage for all beneficiaries, coverage only for beneficiaries with extraordinary drug expenses, or coverage only for low-income beneficiaries. Policymakers would need to determine how costs would be shared between taxpayers and beneficiaries through premiums, deductibles, and copayments and whether subsidies would be available to low-income, non-Medicaid-eligible individuals. Design decisions would also affect the extent to which a new pharmaceutical benefit might shift to Medicare portions of the out-of-pocket costs now borne by beneficiaries as well as those costs now paid by Medicaid, Medigap, or employer plans covering prescription drugs for retirees. Clearly, the details of a prescription drug benefit’s implementation would have a significant impact on both beneficiaries and program spending. Experience suggests that some combination of enhanced access to discounted prices, targeted subsidies, and measures to make beneficiaries more aware of costs may be needed. Any option would need to balance concerns about Medicare sustainability with the need to address what will likely be a growing hardship for some beneficiaries in obtaining prescription drugs. The financial prognosis for Medicare clearly calls for meaningful spending reforms to help ensure that the program is sustainable over the long haul. 
The importance of such reforms will be heightened if financial pressures on Medicare are increased by the addition of new benefits, such as coverage for prescription drugs. Some leading reform proposals envision that Medicare could achieve savings by adapting some of the competitive elements embodied in the Federal Employees Health Benefits Program. Specifically, these proposals would move Medicare towards a model in which health plans compete on the basis of benefits offered and costs to the government and beneficiaries, making the price of health care more transparent. Currently, Medicare follows a complex formula to set payment rates for Medicare+Choice plans, and plans compete primarily on the richness of their benefit packages. Medicare permits plans to earn a reasonable profit, equal to the amount they can earn from a commercial contract. Efficient plans that keep costs below the fixed payment amount can use the “savings” to enhance their benefit packages, thus attracting additional members and gaining market share. Under this arrangement, competition among Medicare plans may produce advantages for beneficiaries, but the government reaps no savings.

In contrast, a competitive premium approach offers certain advantages. Instead of having the government administratively set a payment amount and letting plans decide—subject to some minimum requirements—the benefits they will offer, plans would set their own premiums and offer at least a required minimum Medicare benefit package. Under these proposals, Medicare costs would be more transparent: beneficiaries could better see what they and the government were paying for in connection with health care expenditures. Beneficiaries would generally pay a portion of the premium and Medicare would pay the rest. Plans operating at lower cost could reduce premiums, attract beneficiaries, and increase market share. Beneficiaries who joined these plans would enjoy lower out-of-pocket expenses. 
Unlike today’s Medicare+Choice program, the competitive premium approach offers the potential for taxpayers to benefit from competitive forces. As beneficiaries migrated to lower-cost plans, the average government payment would fall. Experience with the Medicare+Choice program reminds us that competition in Medicare has its limits. First, not all geographic areas are able to support multiple health plans. Medicare health plans historically have had difficulty operating efficiently in rural areas because of a sparseness of both beneficiaries and providers. In 2000, 21 percent of rural beneficiaries had access to a Medicare+Choice plan, compared to 97 percent of urban beneficiaries. Second, separating winners from losers is a basic function of competition. Thus, under a competitive premium approach, not all plans would thrive, requiring that provisions be made to protect beneficiaries enrolled in less successful plans.

The extraordinary challenge of developing and implementing Medicare reforms should not be underestimated. Our look at health care spending projections shows that, with respect to Medicare reform, small implementation problems can have huge consequences. To be effective, a good program design will need to be coupled with competent program management. Consistent with that view, questions are being raised about the ability of CMS to administer the Medicare program effectively. Our reviews of Medicare program activities confirm the legitimacy of these concerns. In our companion statement today, we discuss not only the Medicare agency’s performance record but also areas where constraints have limited the agency’s achievements. We also identify challenges the agency faces in seeking to meet expectations for the future. 
As the Congress and the Administration focus on current Medicare management issues, our review of HCFA suggests several lessons: Managing for results is fundamental to an agency’s ability to set meaningful goals for performance, measure performance against those goals, and hold managers accountable for their results. Our work shows that HCFA has faltered in adopting a results-based approach to agency management, leaving the agency in a weakened position for assuming upcoming responsibilities. In some instances, the agency may not have the tools it needs because it has not been given explicit statutory authority. For example, the agency has sought explicit statutory authority to use full and open competition to select claims administration contractors. The agency believes that without such statutory authority it is at a disadvantage in selecting the best performers to carry out Medicare claims administration and customer service functions. To be effective, any agency must be equipped with the full complement of management tools it needs to get the job done. A high-performance organization demands a workforce with, among other things, up-to-date skills to enhance the agency’s value to its customers and ensure that it is equipped to achieve its mission. HCFA began workforce planning efforts, which continue today, to identify areas in which staff skills are not well matched to the agency’s evolving mission. In addition, CMS recently reorganized its structure to be more responsive to its customers. It is important that CMS continue to reevaluate its skill needs and organizational structure as new demands are placed on the agency. Data-driven information is essential to assess the budgetary impact of policy changes and distinguish between desirable and undesirable consequences. Ideally, the agency that runs Medicare should have the ability to monitor the effects of Medicare reforms, if enacted—such as adding a drug benefit or reshaping the program’s design. 
However, HCFA was unable to make timely assessments, largely because its information systems were not up to the task. The status of these systems remains the same, leaving CMS unprepared to determine, within reasonable time frames, the appropriateness of services provided and program expenditures. The need for timely, accurate, and useful information is particularly important in a program where small rate changes developed from faulty estimates can mean billions of dollars in overpayments or underpayments. An agency’s capacity should be commensurate with its responsibilities. As the Congress continues to modify Medicare, CMS’ responsibilities will grow substantially. HCFA’s tasks increased enormously with the enactment of landmark Medicare legislation in 1997 and the modifications to that legislation in 1999 and 2000. In addition to the growth in Medicare responsibilities, the agency that administers this program is also responsible for other large health insurance programs and activities. As the agency’s mission has grown, however, its administrative dollars have been stretched thinner. Adequate resources are vital to support the kind of oversight and stewardship activities that Americans have come to count on—inspection of nursing homes and laboratories, certification of Medicare providers, collection and analysis of critical health care data, to name a few. Shortchanging this agency’s administrative budget will put the agency’s ability to handle upcoming reforms at serious risk. In short, because Medicare’s future will play such a significant role in the future of the American economy, we cannot afford to settle for anything less than a world-class organization to run the program. However, achieving such a goal will require a clear recognition of the fundamental importance of efficient and effective day-to-day operations. 
In determining how to reform the Medicare program, much is at stake— not only the future of Medicare itself but also assuring the nation’s future fiscal flexibility to pursue other important national goals and programs. I feel that the greatest risk lies in doing nothing to improve the Medicare program’s long-term sustainability. It is my hope that we will think about the unprecedented challenge facing future generations in our aging society. Engaging in a comprehensive effort to reform the Medicare program and put it on a sustainable path for the future would help fulfill this generation’s stewardship responsibility to succeeding generations. It would also help to preserve some capacity for future generations to make their own choices for what role they want the federal government to play.
Although the short-term outlook of Medicare's hospital insurance trust fund improved in the last year, Medicare's long-term prospects have worsened. The Medicare Trustees' latest projections, released in March, use more realistic assumptions about health care spending in the years ahead. These latest projections call into question the program's long-term financial health. The Congressional Budget Office also increased its long-term estimates of Medicare spending. The slowdown in Medicare spending growth in recent years appears to have ended. In the first eight months of fiscal year 2001, Medicare spending was 7.5 percent higher than a year earlier. This testimony discusses several fundamental challenges to Medicare reform. Without meaningful entitlement reform, GAO's long-term budget simulations show that an aging population and rising health care spending will eventually drive the country back into deficit and debt. The addition of a prescription drug benefit would boost spending projections even further. Properly structured reform to promote competition among health plans could make Medicare beneficiaries more cost conscious. The continued importance of traditional Medicare underscores the need to base adjustments to provider payments on hard evidence rather than on anecdotal information. Similarly, reforms in the management of the Medicare program should ensure that adequate resources accompany increased expectations about performance and accountability. Ultimately, broader health care reforms will be needed to balance health care spending with other societal priorities.
The Base Closure Community Redevelopment and Homeless Assistance Act of 1994 (Redevelopment Act), which amended the BRAC statute, established the homeless assistance process for properties on military bases approved for closure after October 25, 1994. The key participants in the current process include DOD’s Office of Economic Adjustment, the military departments, HUD, the LRAs, and the homeless assistance providers. Because DOD and HUD both have significant roles under the BRAC statute, they jointly promulgated the regulations governing BRAC homeless assistance. As a result, DOD and HUD collaborate in providing guidance on the BRAC homeless assistance process. DOD Office of Economic Adjustment. Within DOD, the Office of Economic Adjustment—a field activity under the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics— assists communities by providing technical and financial assistance in planning and carrying out adjustment strategies in response to significant defense actions including base closures. The Office of Economic Adjustment has been delegated authority by the Secretary of Defense to recognize the LRA for each base closed under BRAC. It also provides planning-grant funds to those LRAs for which it determines base closure will cause direct and significant adverse consequences. An Office of Economic Adjustment project manager is to be assigned to each of these bases as a facilitator and catalyst to the community’s planning process. Military Departments. The Secretary of Defense delegated to the Secretaries of the military departments—Army, Navy, and Air Force— the disposal authority for bases closed under BRAC, including the authority to manage surplus-property disposals such as homeless assistance conveyances. Each military department assigns a project manager to its bases closed under BRAC. HUD. 
At HUD headquarters, the Office of Community Planning and Development, Office of Special Needs Assistance Programs, carries out HUD’s BRAC process responsibilities. HUD field offices provide technical assistance to LRAs and homeless assistance providers throughout the planning process. HUD’s role is to review the redevelopment plan and homeless assistance submission that the LRA submits to HUD and DOD. HUD may also negotiate and consult with the LRA before or during its preparation of its plan. LRAs. The LRA is any authority or instrumentality established by state or local government and recognized by the Secretary of Defense through the Office of Economic Adjustment as the entity responsible for developing the redevelopment plan or for directing implementation of the redevelopment plan. LRAs are required to perform certain steps to allow for community input during their deliberations. The communities in the vicinity of a base are defined by BRAC regulation as the political jurisdiction or jurisdictions, other than the state, that include the LRA for the base. Homeless assistance providers. Homeless assistance providers, also called “representatives of the homeless,” may include state or local government agencies or private nonprofit organizations that provide or propose to provide assistance to homeless persons and families. Homeless assistance providers seek buildings and properties that may provide supportive services, job and skills training, employment programs, shelter, transitional housing, permanent housing, food and clothing banks, treatment facilities, or other activities that meet an identified need of the homeless or fill a gap in the local Continuum of Care. Under the BRAC statute as revised by the Redevelopment Act and its implementing regulations, LRAs are required to accept and consider notices of interest from homeless assistance providers. 
In this process, the LRA prepares a redevelopment plan after consulting with homeless assistance providers and other community groups affected by the base closure, and HUD assesses the plan to determine whether it appropriately balances the needs of the community for economic and other development with the needs of the homeless. Subsequent to HUD approval and other procedural steps, DOD may transfer properties for homeless assistance purposes. Figure 1 shows the homeless assistance process for BRAC surplus property under the Redevelopment Act. Pursuant to the BRAC statute as amended by the Redevelopment Act, the military departments are required to determine whether other DOD components or federal agencies have a use for property at the BRAC base, then notify the LRAs of any surplus property available for reuse and publish that information in the Federal Register. The LRA then must advertise the surplus property availability in a newspaper of general circulation within the vicinity of the base. The advertisement must include the period, required to be between 90 and 180 days following the advertisement, during which it will receive notices of interest from homeless assistance providers. The LRA must also conduct outreach with homeless assistance providers, including holding a workshop and tour of the properties. When the LRA completes its outreach process, it has up to 270 days to generate a redevelopment plan and a homeless assistance submission. The LRA must consider the notices of interest and determine which, if any, to support with some combination of buildings, property, or funding. If the LRA decides to support a notice of interest, it develops legally binding agreements to implement no-cost homeless assistance, which may differ substantially from the initial notice. The LRA then submits these agreements as part of the homeless assistance submission to HUD and the military department. 
If HUD approves the base redevelopment plan, including the homeless assistance submission, it will notify the LRA and the military department. The military department—which during the redevelopment planning conducts an environmental impact analysis of the base prior to disposal— is required to give the redevelopment plan, including the homeless assistance recommendations, substantial deference in making property disposal decisions. Once the military department completes its environmental impact analysis and makes its record of decision, it transfers surplus buildings and properties in accordance with the record of decision, and may transfer properties for homeless assistance either to the LRA or directly to the homeless assistance providers. Pursuant to the BRAC statute and its implementing regulations, LRAs may convey to homeless assistance providers on-base property or buildings, off-base property or buildings, or funding in lieu of property. An on-base conveyance may include undeveloped land, buildings to be demolished in order to develop new structures, or entire buildings or space within a building to provide assistance to those experiencing homelessness. Such conveyances must be for no cost. Additionally, the legally binding agreements must include a provision that, if a homeless assistance provider ceases to provide services to the homeless, the property will revert to the LRA. If this were to occur, the LRA must take appropriate action to secure, to the maximum extent practicable, another qualified homeless assistance provider to use the property to assist the homeless. If the LRA is unable to find a qualified provider to use the property, it will own the property without any further requirement to use the property to assist the homeless. As part of the planning process, the LRA may propose alternative properties off base or financial assistance if those options would be more compatible with the LRA’s proposed redevelopment plan for the base. 
Off-base properties or buildings may include undeveloped land or excess buildings owned by the local government. Funding may originate from bonds, a percentage of sales from on-base property to private developers, or through the issuance of forgivable loans, among other options. In these cases where the LRA is providing off-base property or funding, the legally binding agreements do not need to include a provision that the conveyance revert to the LRA if a homeless assistance provider ceases to provide services to the homeless. Thirty-nine of the 125 bases closed as a result of BRAC 2005 that had surplus property provided a variety of homeless assistance in response to notices of interest submitted by homeless assistance providers. An additional 12 bases’ LRAs received notices of interest that did not result in a legally binding agreement, the reasons for which varied. However, neither HUD nor DOD require that the status of conveyances be tracked after legally binding agreements are reached, which limits the departments’ ability to assess the homeless assistance program’s effectiveness. Of the 125 bases with surplus property closed during BRAC 2005, 39 received notices of interest from homeless assistance providers that were approved for assistance with 75 homeless assistance providers, 12 bases received notices of interest that did not result in any assistance, and 74 Figure 2 shows the geographic bases received no notices of interest.distribution of the properties. The 39 bases’ LRAs that provided assistance did so in a variety of ways, as seen in figure 3, below. On-Base Property. Twenty-two bases’ LRAs provided assistance through on-base property conveyances, granting specific existing property to homeless assistance providers. This property included nearly 50 parcels of both vacated military surplus buildings and plots of undeveloped land. 
For instance, at General Mitchell Air Reserve Station in Milwaukee, Wisconsin, a homeless assistance provider requested a 56,000 square foot warehouse on the base for emergency food storage and office space. The provider was selected to receive homeless assistance and HUD approved the redevelopment plan in December 2008. The warehouse was conveyed to the provider in July 2010. Off-Base Property. Three bases’ LRAs provided assistance through off-base property conveyances, granting either a specific existing building, a new building to be constructed, or a piece of undeveloped land at a location that was not part of the former base. For example, according to the homeless assistance provider, the LRA at the Schroeder Hall U.S. Army Reserve Center in Long Beach, California, granted a 10-year no-cost lease of off-base property to the provider, with an option to purchase the property for a nominal fee at the conclusion of the lease. The site is located near the former base, and will house a psychiatric clinic for mentally ill people experiencing homelessness. Funding. Seven bases’ LRAs provided assistance through funding. The assistance provided totaled over $29 million. The amount provided ranged from $4,000 at Marshall U.S. Army Reserve Center in Marshall, Texas, for providing supportive services and temporary homeless housing assistance to $9.5 million at Fort McPherson in Atlanta, Georgia, for the construction and operation of 125 units of permanent supportive housing offsite. A combination of different types of assistance. Seven bases’ LRAs provided assistance through a combination of on-base property, off-base property, or funding. For example, the LRA at Truman Olson U.S. Army Reserve Center in Madison, Wisconsin, offered one provider off-base property and two providers funding of up to $410,000 in forgivable loans. 
Notices of interest requesting property in BRAC 2005 were submitted by 150 homeless assistance providers, 75 of whom negotiated a legally binding agreement for assistance. Those that negotiated an agreement were to receive on-base property, off-base property, funding, or a combination, as seen in figure 4. According to our analysis of the notices of interest and legally binding agreements, of the 75 providers that negotiated an agreement, 44 providers (59 percent) negotiated legally binding agreements that matched their initial notice of interest, whereas 31 providers (41 percent) negotiated an agreement that differed from their initial notice. For example, a homeless assistance provider submitted a notice of interest requesting one of three specific on-base buildings at Fort Monmouth, New Jersey. The LRA accepted the notice of interest and offered the provider alternative on-base buildings. However, according to the provider, these alternative buildings differed from the three that the provider requested in the notice of interest and were not suitable for the provider’s intended use. The provider was then offered funding in lieu of on-base property, which the provider accepted. Table 1 provides examples of three different outcomes between the assistance providers requested and the assistance they were to receive. Of the 75 providers that submitted notices of interest for a homeless assistance conveyance but did not ultimately sign a legally binding agreement, we found six common reasons why the notices were not approved. These reasons—identified by the LRAs in the homeless assistance submissions and subsequently summarized by HUD in each base’s memorandum of decision—included: Organizational Capacity. 
Under BRAC regulations, providers are required to submit, among other things, “a description of the financial plan, the organization, and the organizational capacity of the representative of the homeless to carry out the program.” Some providers lacked the organizational capacity to carry out their proposed programs. For instance, at Cambridge Memorial U.S. Army Reserve Center in Cambridge, Minnesota, a provider submitted a notice of interest seeking to construct 16 on-site housing units. However, the provider stated that it could not afford to begin construction without a large grant from the City of Cambridge. The city was unable to finance such a large grant, so the LRA rejected the provider on the basis that it lacked the organizational capacity to implement its proposal. HUD concurred with the LRA’s determination in the memorandum of decision. Withdrawn by Provider. The provider may unilaterally withdraw from the process at any time, even after it has been selected by the LRA to receive assistance. For example, at Finnell U.S. Army Reserve Center / Army Maintenance Support Activity 51 in Tuscaloosa, Alabama, a consortium of homeless assistance providers submitted a notice of interest to use the entire on-site property for social services, meeting facilities, and apartment buildings for transitional housing. However, because the LRA offered only part of the requested property and the providers were unsure of how they would fund the development and ongoing maintenance of that offered property, the consortium told us it withdrew its notice. Instead, the Army stated it is in discussion with the City of Tuscaloosa for a potential negotiated sale. Ineligible for Homeless Assistance. 
BRAC regulations also require providers to demonstrate that their proposals describe the uses to which the property will be put, which must involve specific homeless assistance activities or other activities that will “meet an identified need of the homeless.” Not all provider proposals addressed this requirement. For instance, at Schroeder Hall U.S. Army Reserve Center in Long Beach, California, a provider proposed the construction of 100 units of low-income housing. The LRA concluded that the proposal was focused on low-income families rather than people currently experiencing homelessness and rejected the proposal. HUD concurred in the memorandum of decision. Redundancy. Under BRAC regulations, an LRA must provide an explanation of why it rejected a particular notice of interest and a description of the impact of the proposed homeless assistance program on the community. The most common reason cited by the LRAs was that the group the provider hoped to assist was already being accommodated by other providers in the area, and that further redundancy of services could have a negative effect on the community. For instance, at Fort McPherson in Atlanta, Georgia, a provider submitted a notice of interest proposing the on-base development of 200 units of supportive housing and associated services. Although both the LRA and HUD determined that the proposal was viable, there were already a number of supportive housing projects being accommodated as part of the redevelopment. To avoid overconcentration of supportive housing in the area of the former base, the LRA rejected the proposal, and HUD concurred. Incomplete Plan. Providers rejected for incomplete plans either failed to include required information in their notices of interest or failed to respond to HUD or LRA requests to provide this information after the initial notices had been submitted. 
For instance, at Walter Reed Army Medical Center in Washington, D.C., a provider submitted a notice of interest requesting emergency, transitional, and permanent housing but failed to include required documents. The LRA requested more information, but the provider did not respond and thus the LRA rejected the proposal. HUD concurred in the memorandum of decision. Unsuitable Site. The LRA may determine that the site selected by the provider would not be fit for the plan proposed. For instance, at Germantown Memorial U.S. Army Reserve Center in Philadelphia, Pennsylvania, a provider proposed renovating several on-site surplus buildings to create 48 units of permanent housing for homeless seniors. The LRA cited several reasons in its rejection of the proposal, including that the provider had proposed housing seniors in the organizational maintenance shop and that the site suffered from hazardous waste contamination, sinkholes, and poor drainage, rendering the former base unfit for human habitation. HUD concurred with the LRA’s reasoning. Table 2 illustrates reasons that providers’ notices of interest did not result in legally binding agreements as well as the frequency for each reason. In five cases, providers withdrew their notices of interest but received property, funds, or other assistance from the LRA outside of the BRAC process. For instance, at Waukegan Armed Forces Reserve Center in Waukegan, Illinois, we were told by a homeless assistance provider that it had withdrawn from the process in exchange for 15 units of city property at a different site, to be conveyed by 2015. According to the provider, it signed a memorandum of understanding with the LRA. Because the exchange happened outside of the official BRAC process, HUD did not review the memorandum of understanding or comment on the arrangement. 
This arrangement allowed the provider to circumvent the HUD review, which, according to the LRA, provided greater flexibility in the makeup of the plan and in the timing of the property conveyance. However, because this arrangement is not subject to HUD review, the provider may not have legal recourse if the arrangement falls through unless a binding agreement was signed by both parties. HUD officials agreed that these five cases of provider withdrawal would not be recorded as assistance officially provided under the BRAC homeless assistance process, although they added that the existence of the BRAC homeless assistance process is what allowed the providers to negotiate with the LRA. We found that neither HUD nor DOD requires tracking of the long-term conveyance status of properties awarded through legally binding agreements, which would help determine the effectiveness of the program. Through our analysis of data collected from DOD, LRAs, and homeless assistance providers, we found that as of October 2014, 27 of the 75 providers with legally binding agreements had received their homeless assistance conveyance. According to a HUD general counsel official, HUD has no oversight over the homeless assistance program after it approves the redevelopment plans. Consequently, HUD does not know whether providers are receiving different assistance than what was approved in the redevelopment plan. For example, HUD approved a legally binding agreement at Fort Monmouth, New Jersey, in which a provider was to receive an emergency shelter on base, but instead the provider told us it is going to receive a smaller shelter off base. HUD also does not know whether providers are using the conveyed properties as stated in their plans. 
For example, a provider we spoke with stated that because the types of homeless assistance grants and funding options have changed since it submitted its notice of interest in 2006, it would have to change the services identified in its original plan to get funding. According to HUD officials, the LRAs have incentives to ensure that the homeless assistance providers adhere to the agreed-upon plans, because the property can revert to the LRA at no cost if the provider does not follow the terms of the legally binding agreement. However, once HUD approves a homeless assistance submission, it has no further contact with that LRA. HUD has no mechanism for recording whether the property was conveyed under the terms approved in the homeless assistance submission, or whether the property was conveyed at all. Moreover, we found instances where a provider that signed a legally binding agreement for a conveyance withdrew from the process before homeless assistance could be provided. For example, we spoke to a provider with an agreement for 39 units of supportive housing that chose to withdraw prior to receiving the property. While the BRAC statute requires LRAs to take actions to try to secure another provider to take over, HUD does not track whether this happens. Similarly, DOD officials told us they also do not have oversight over the properties after conveyance to homeless assistance providers, adding that, in their view, this does not fall under DOD’s responsibilities. According to the officials, the BRAC homeless assistance process was designed for DOD to dispose of the property and then be removed from the process. However, DOD officials stated that, until DOD conveys the property either directly to the homeless assistance provider or to the LRA for subsequent conveyance to the providers, DOD might be in the best position to know the status of the conveyances and share that information with HUD. 
Additionally, DOD’s Office of Economic Adjustment and the military services already assign project managers to communicate with and provide advice to LRAs. DOD officials stated that, as part of their duties, these project managers could periodically relay information on the closure back to HUD, including the conveyance status of the property. The BRAC regulations jointly issued by HUD and DOD do not include specific requirements to track the long-term status of BRAC homeless assistance conveyances. While the BRAC statute does not require that these data be tracked, it also does not prohibit it. However, tracking this information would conform with recommended and accepted government practices. Standards for Internal Control in the Federal Government states that managers are responsible for providing reliable, useful, and timely information for transparency and accountability of programs and their operations. In addition, the Title V homeless assistance program tracks long-term conveyance status information. Similar to the BRAC statute, the statute establishing the Title V homeless assistance program neither requires nor prohibits the tracking of federal buildings and property given to assist the homeless. However, unlike under the BRAC homeless assistance process, the Title V program administrator (the Department of Health and Human Services) developed policies and procedures to perform compliance oversight and to ensure that the provider uses the property according to the terms in the approved application, in part because if a provider is not implementing or is unable to implement the program consistent with the approved application, the property title may revert to the federal government. To accomplish this oversight, providers are required to submit annual utilization reports, and the Department of Health and Human Services is to conduct site visits of the properties at least once every 5 years. 
Although the Title V program differs from the BRAC program because of the federal government’s reversionary interest in homeless assistance property, officials from HUD stated that HUD and DOD could use Title V program oversight as a model, and added that it would be a good idea for HUD and DOD to know whether the property ultimately is used for homeless assistance. HUD officials also stated that the homeless assistance program could be improved if HUD were required to track data over time regarding the status of the conveyances. DOD officials added that the military services track, and could share with HUD, the status of properties not yet conveyed or directly conveyed from DOD to the homeless assistance provider. However, the DOD officials stated they do not know the status of the properties once conveyed, at which point it would be more efficient for the LRAs to report directly to HUD. Because the tracking of the status of homeless assistance conveyances is not required, neither HUD nor DOD knows the effectiveness of the program, the extent to which properties are actually being conveyed to the homeless assistance providers, the extent to which the providers are using the properties for their intended use, and, in the event of a provider dropping out, the extent to which LRAs are making sufficient efforts to find a replacement provider. Without these data, DOD and HUD lack insight into the effectiveness of the homeless assistance program. In addition, they remain unable to identify additional areas to consider in reviewing redevelopment plans or adjustments that may be needed in processes or procedures should additional BRAC rounds take place. The process for conveying BRAC surplus property increased the potential for addressing homelessness in communities. 
However, we found that insufficient and unclear information added to the length of time it took the various parties to complete the necessary documentation and reviews and jeopardized the overall success of the program by potentially limiting participation in the process or by creating unfulfilled expectations for the program participants. Furthermore, while homeless assistance providers and LRA officials stated that they appreciated the advice they received from HUD headquarters staff, the small size of the HUD review team created a backlog and delays in reviewing redevelopment plans. Homeless assistance providers told us that the BRAC homeless assistance program provided the overall benefit of a no-cost property conveyance or financial assistance to support local homeless assistance efforts, and at 11 of the 12 bases we contacted where homeless assistance providers received assistance, providers shared other perspectives on why they thought the program was beneficial. For example, some providers said that the BRAC homeless assistance process elevated awareness of homelessness issues for those making the decisions concerning the conveyance of BRAC surplus property. Other providers said they may not have had access to this type of no-cost conveyance without the BRAC process, which positioned them as lead contenders for the property. Although some communities worked with homeless assistance providers before the BRAC homeless assistance process began, providers also mentioned that the opportunity for homeless assistance providers to receive property through this process created an additional forum for the community to discuss the needs of the homeless and identify ways to address those needs with the identified surplus property. For example, at the Inspector/Instructor Facility at West Trenton Marine Corps Reserve Center, New Jersey, LRA and homeless assistance officials engaged in ongoing negotiations about the future use of the property. 
LRA officials said they initially rejected the provider’s notice of interest, but because of the BRAC homeless assistance process, HUD ultimately awarded the property to the provider. The provider told us that the property is being renovated to provide housing in the main building, as shown in figure 5. The provider also said future renovations will include room to make ancillary services available on site for those experiencing homelessness, such as a diaper bank that will provide diapers, wipes, and other infant supplies and an auto-maintenance job-training facility. We also found examples where homeless assistance providers worked together in consortiums to improve their chances of receiving homeless assistance in the BRAC process. In our analysis of HUD homeless assistance decision documents, we found examples at 17 of 51 bases where homeless assistance providers formed consortiums to pool their resources to express interest in the property as well as coordinate their service efforts to assist those experiencing homelessness. For instance, at one base we visited, six providers formed a consortium to expedite timelines by having one review schedule and a single representative to coordinate with the LRA. Further, officials told us that the consortium brought together providers with varied expertise in the homeless assistance process, including providers with homeless assistance and property-development experience. Officials from LRAs and homeless assistance providers also told us that the flexibility of LRAs to offer various types of assistance to homeless assistance providers—the conveyance of on-base property, an alternative off-base site, money in lieu of property, or some combination thereof— allowed for assistance options that might be a better fit than conveyance of the BRAC site in some circumstances, as explained in the following examples. 
Onizuka Air Force Base, California: According to an LRA official, two notices of interest were initially rejected because they requested a no-cost property conveyance for not only homeless housing but also affordable housing, which is not allowed under the BRAC statute. Those providers whose notices of interest were rejected also told us that because the base was so far from public transportation and other resources, the proposed project might not be eligible for tax credits and other financial assistance needed to complete the project. Rather than awarding on-base property, the LRA stated it sold an alternative property to the providers and awarded them $8.2 million in local housing funds to assist with land and construction costs. One provider said construction for its portion of the site is under way, as shown in figure 6, and the other provider said it planned to start construction by the end of 2014. George L. Richey U.S. Army Reserve Center, California: According to officials from the LRA, one notice of interest was initially rejected because it included a request for a no-cost property conveyance to be used for both affordable and homeless housing. Although the site is close to public transportation, LRA officials said that the site is located in an industrial area, where groceries, clinics, and other social services might not be as accessible to those receiving assistance. Given the center’s proximity to the county jail and another law-enforcement office, LRA officials said they provided a public benefit conveyance to the Santa Clara Sheriff’s office for use as an emergency-response training and readiness center, as shown in figure 7. In exchange, LRA officials told us that they offered to sell the homeless assistance provider an alternate site and provide a forgivable loan of $1,590,000, with the caveat that a certain percentage of the planned units be reserved specifically for homeless housing. 
Officials from the homeless assistance provider and the LRA told us that as of September 2014, the details of the off-base proposal continue to be negotiated. Fort Gillem, Georgia: According to an LRA official, two providers combined their proposal for on-base property. The official told us that one provider decided it lacked the capacity to support building a new shelter, and the other provider opted to support the first provider’s homeless efforts in lieu of building a new shelter itself. After both groups’ interest in the on-base location waned, the official said that the LRA offered an alternative to both providers that would include the transfer of $900,000 from the eventual sale of Fort Gillem to support building a new, larger shelter adjacent to one provider’s existing facility that would be operated jointly by both providers. Providers said they planned to start construction by the end of 2014. LRAs and homeless assistance providers we spoke with told us they did not have sufficient and clear information and guidance at several steps in the BRAC homeless assistance process. We found that a lack of sufficient and clear information added to the length of time it took the various parties to complete the necessary documentation to move the process forward and jeopardized the success of the program by limiting participation or by creating unfulfilled expectations for the program participants. According to Standards for Internal Control in the Federal Government, information should be communicated in a form and within a time frame that enables personnel to carry out their responsibilities efficiently. Specifically, we found limited information in the following four steps of the BRAC homeless assistance process: HUD and DOD guidance to the LRAs does not clearly specify what information the LRAs should provide to homeless assistance providers on the condition of the property while conducting workshops and tours to help providers develop their notices of interest. 
HUD and DOD guidance provided to homeless assistance providers is not clear on what information is necessary to include in completing their notices of interest. HUD and DOD guidance provided to the LRAs does not provide sufficient detail on what information needs to be included in developing their legally binding agreements. HUD and DOD guidance to the LRAs is not clear on the various alternatives available to homeless assistance providers instead of on-base property conveyances. First, during required workshops and property tours, providers said that LRAs gave them limited information on the condition of the property. According to homeless assistance providers and LRA officials we interviewed, some BRAC properties were in need of repairs, such as utility upgrades and hazardous-material remediation, to comply with the most recent building codes and to make them appropriate for homeless assistance reuse (see fig. 8). For instance, HUD and DOD regulations require that LRAs conduct at least one workshop where homeless assistance providers have an opportunity to, among other things, tour the buildings and properties available either on or off the base. However, we found that the level of detail and property access that LRAs granted to providers varied. As a result, some providers withdrew from the process after they obtained more information about the condition of the property and determined it was no longer a feasible project. For example, one homeless assistance provider told us she was not allowed to leave the bus during the property tour and was unable to physically inspect the premises prior to submitting a notice of interest. Another told us she was not allowed to inspect the property until after the legally binding agreement was drafted. 
After identifying the needed repairs, including utility updates and addressing Americans with Disabilities Act requirements, the provider said she eventually pulled out of the agreement due to the cost and extent of rehabilitation needed. Some homeless assistance providers we interviewed suggested that details on what and when property condition information will be provided, such as sharing it on a website, might be helpful. According to DOD officials, the properties may have been inhabited by military personnel during the time of the tours and thus could not be physically inspected. Additionally, the LRAs might not have completed a facilities survey or infrastructure inspection to provide property information by the time the tours for providers were held. However, the DOD officials stated that it was important for the providers to receive additional information about the property condition so that they could make an educated decision regarding submitting a notice of interest or signing a legally binding agreement. Additionally, LRA officials at one base we contacted said it would have been helpful to have known more about the property condition earlier in the process to better evaluate how those details could affect the overall redevelopment plan. Second, homeless assistance providers said that they did not receive clear information on the full extent of what to include in their notices of interest, which contributed to providers being removed from consideration for BRAC homeless assistance properties as well as LRAs being granted extensions to submit redevelopment plans to HUD. 
HUD and DOD regulations require that notices of interest describe (1) the proposed homeless assistance program and supportive services to be provided on the property, such as job and skills training, employment programs, shelters, transitional housing, or treatment facilities; (2) the need for the program; (3) the extent to which the program is or will be coordinated with other homeless assistance programs in the communities in the vicinity of the base; (4) information about the physical requirements necessary to carry out the program, including a description of the buildings and property at the base that are necessary to carry out the program; (5) the financial plan, the organization, and the organizational capacity of the homeless assistance provider to carry out the program; and (6) an assessment of the time required to start carrying out the program. These regulations notwithstanding, among the 75 providers whose notices of interest did not result in a legally binding agreement, we identified 17 examples where the LRA and HUD agreed that the notices of interest were incomplete, and providers said they needed more shared and specific guidance on what to include. While the regulations provide general information about what should be included, not all participants in the BRAC process were aware of the regulations. For example, a provider that submitted a notice of interest for property at Fort McPherson told us that it did not receive any additional guidance on what to include. Instead, the provider stated that the lack of guidance and familiarity with the regulations led it to look for alternative guidance, and it found an online example of a notice of interest from another base closure in Philadelphia. Some providers suggested that a template or additional examples of notices of interest would have provided clarity. 
At other times, HUD officials told us, LRAs would ask that notices include much more than the regulations required, such as a list of all homeless assistance programs conducted and audited financial statements for the previous 5 years, which made it more difficult for providers to submit complete notices of interest. For example, one notice of interest for Walter Reed Army Medical Center lacked information regarding the number of units, the supportive services to be offered, and financing for the project. HUD officials stated that a template for notices of interest might make it easier for providers to know what was required to be included and help prevent confusion concerning LRA requests for additional information that is not required. LRA officials said they often requested additional time for providers to submit supplementary information to complete the notices of interest. We found that 88 percent of LRAs (45 out of 51) requested extensions from the Office of Economic Adjustment to submit their redevelopment plans to HUD, and some requested multiple extensions (see fig. 9). According to HUD and DOD regulations, LRAs have 270 days after the deadline for receipt of notices of interest to submit their redevelopment plans to HUD. However, the extensions resulted in LRAs taking an average of 654 days to submit their redevelopment plans. HUD officials agreed that these extensions further delayed the HUD review process and the conveyance of homeless assistance. Third, we found that HUD guidance and regulations did not provide detailed information to LRAs and homeless assistance providers on the acceptable terms of legally binding agreements. In addition, although BRAC homeless assistance regulations provide a few specific requirements for legally binding agreements, they do not provide detailed guidance on what terms will constitute an acceptable agreement under the process. 
For example, the regulations require legally binding agreements to include a process for negotiating alternative arrangements for homeless assistance in the event environmental analysis deems the property unsuitable for its intended use, and also require the inclusion of a reverter clause whereby on-base property that ceases to be used for homeless assistance reverts to the LRA or other entity. However, there is little other guidance on what terms or types of arrangements are or are not acceptable. For example, there is no standard information on appropriate no-cost conveyance lease terms or time frames for conveyances of the property. While there are general criteria for HUD’s review of the redevelopment plans as a whole, a HUD general counsel official stated that, other than a few provisions required by regulation, there are no specific criteria for the review of legally binding agreements, and instead the official uses professional judgment to assess the sufficiency of the agreements. The limited information and specificity in the regulations contributed to delays in approving redevelopment plans, as a HUD general counsel official stated that the legally binding agreements typically required revisions before she could approve them, and addressing the revisions required additional time. The HUD general counsel official told us that she requested revisions to approximately 80 percent of the legally binding agreements received, and in some cases multiple revised drafts were needed prior to HUD approval. Requested revisions included, but were not limited to, requiring more specificity related to the proposed property location, such as the building numbers for on-base property; the number of individuals and type of population to be served; and the type of housing assistance to be provided—that is, permanent, supportive, or transitional. 
Some homeless assistance providers we spoke with noted the length of the HUD review of legally binding agreements as contributing to the longer duration of the overall HUD review process. For example, one homeless assistance provider told us that HUD’s review and approval of the agreement was the slowest part of the BRAC homeless assistance process. Another homeless assistance provider we interviewed stated that it took 2 years for HUD to approve the legally binding agreement. DOD officials we spoke with suggested that having more standardized information might help HUD’s review process, and they suggested a standard template could be beneficial. In some cases, the HUD official responsible for reviewing redevelopment plans told us she approved redevelopment plans without signed legally binding agreements. Instead, HUD would accept a consent letter from the homeless assistance providers stating that they reviewed and agreed to the terms of the agreement as written. However, because the legally binding agreements were not signed, HUD officials stated that LRAs could subsequently alter the terms after HUD approval, which could affect the final conveyance and ultimately affect the feasibility of the homeless assistance to be provided. Further, although HUD regulations do not require that the agreements be signed prior to HUD’s approval of LRA redevelopment plans, HUD does not have information available, such as through a website, to clarify the implications of unsigned agreements for the parties involved in the process. We found examples of situations in which the LRA changed the terms of the agreements resulting in the provider considering withdrawing from the process or the terms not meeting the provider’s expectations about the time frame for the assistance to be provided. 
For example, one homeless assistance provider told us that it may have to withdraw from accepting a property conveyance because the LRA had changed the terms of the unsigned legally binding agreement from a 49-year lease to a year-to-year lease, and this would have prevented the provider from guaranteeing continuity of homeless assistance operations. Another homeless assistance provider we spoke with stated that the LRA tried to change the terms of the legally binding agreement, which was signed by the provider but not the LRA, and told this provider that it might not receive the homeless assistance conveyance for up to 25 years. Fourth, although alternatives to conveyances of on-base properties were viewed as a benefit to the process, not all LRAs or homeless assistance providers we spoke with were aware of the permissible alternatives. According to the BRAC statute, conveyances for the assistance of the homeless may be made at no cost, but DOD is required to seek consideration for certain other types of conveyances. For this reason, HUD requires that the proposed use of the property provided under a homeless assistance conveyance be limited to authorized homeless assistance programs, and may only provide minimal or incidental benefit to other groups. Organizations serving other populations—such as persons with disabilities or low-income persons—who are not also homeless cannot receive no-cost homeless assistance conveyances. However, some homeless assistance providers we interviewed told us that the best options to provide homeless assistance often include mixed uses of the property, including options for low-income housing or other revenue-generating efforts that could be used to fund the proposed homeless assistance, in addition to the homeless assistance itself. LRAs may offer homeless assistance conveyances at no cost in conjunction with other types of conveyances that are made at reduced or market cost.
This enables homeless assistance providers to develop the property for mixed use. Some LRAs we spoke with offered alternatives to accommodate these mixed-use efforts, such as financial assistance or off-base properties, or allowing the sale of property for affordable housing alongside the no-cost homeless assistance conveyance for mixed-use development. For example, an LRA official from the Sergeant J.W. Kilmer U.S. Army Reserve Center in Edison, New Jersey (see fig. 10) stated that the legally binding agreement between the LRA and Monarch Housing Associates provides for the sale of undeveloped land on-base to the homeless assistance collaborative, with 75 percent of the land sold for affordable housing at a cost of $975,000 and 25 percent of the land provided for free as part of the no-cost homeless assistance conveyance. Additionally, in San Jose, California, the LRA for the George L. Richey U.S. Army Reserve Center offered to sell off-base property to Charities Housing Development Corporation for $6,750,000, with part of the purchase price as a forgivable loan of $1,590,000 from the county, to be used for housing for persons experiencing homelessness and low-income persons. BRAC regulations require that the LRAs assess the balance of economic redevelopment and other development needs of the communities in the vicinity of the installation with the needs of the homeless in those communities, and explain how their redevelopment plans address that balance. In an effort to accommodate this balance, LRAs may choose to offer homeless assistance providers alternatives to conveyances of on-base property, but because the BRAC homeless assistance process is primarily focused on the reuse of surplus federal facilities, LRAs are not required to offer alternatives to on-base property. As a result, not all homeless assistance providers we interviewed were offered these alternatives. 
For example, one homeless assistance provider told us it would have liked to receive financial assistance or off-base property, but the LRA did not offer either alternative, and 8 years later the provider is waiting to receive on-base property. Another homeless assistance provider stated that it did not know that financial assistance was an option until later in the process, and that knowledge of financial assistance as a permissible alternative would have assisted in shortening the time frame to receive homeless assistance. At another base, the leader of a coalition of five homeless assistance providers told us that the providers may withdraw from the process because they require a mixed-use option, which the LRA has not offered. Their withdrawal would leave the LRA needing to spend additional time trying to find other providers for property designated for homeless assistance on which, according to a DOD official, the Army has already spent approximately $1 million in caretaker costs. Furthermore, one LRA we spoke with also was not aware that homeless assistance conveyances could be offered in conjunction with other types of conveyances. Specifically, the LRA told us that it would have been easier to accommodate homeless assistance if the BRAC regulations allowed for sale of property for affordable housing in addition to homeless assistance conveyances. Based on our review of BRAC homeless assistance regulations, we found that the regulations did not provide detailed information on alternatives to on-base property. 
Specifically, the regulations do not describe which combinations of money or property or both are acceptable homeless assistance arrangements, although the regulations appear to contemplate that homeless assistance can be provided in a variety of ways, requiring, among other things, “a description of how buildings, property, funding, and/or services on or off the installation will be used to fill some of the gaps in the current continuum of care system.” Additionally, there are no sources of publicly available information, such as a website or pamphlet, to disseminate this information. Without providing clear and sufficient information on the condition of the property to be shared during workshops and tours, required elements for notices of interest, acceptable terms of legally binding agreements, and legal alternatives to on-base property, it will be difficult for LRAs and homeless assistance providers to have the knowledge necessary to make an informed decision about the BRAC homeless assistance process, which, in turn, may negatively affect the time frame and feasibility of the proposed homeless assistance. LRA officials we interviewed expressed appreciation for advice from HUD staff as they navigated through the BRAC homeless assistance process, but we found that the limited number of HUD staff dedicated to the review of redevelopment plans slowed the process. In many of our interviews, LRA officials expressed this appreciation; for example, one LRA official stated that she reached out to the HUD headquarters staff multiple times while compiling the redevelopment plan and that their assistance was very helpful. The official also noted that HUD field office staff assisted the LRA in identifying where homeless assistance providers were located in the aftermath of a natural disaster. 
Another LRA official told us that the LRA interacted directly with the HUD headquarters BRAC Coordinator throughout the process and that the coordinator was very willing to help and provided tailored service repeatedly. Another LRA official stated that HUD officials traveled to advise the LRA on the redevelopment process in person. However, HUD did not have enough resources dedicated to meet the 60-day deadline established in the BRAC statute for reviewing the surge of LRA redevelopment plans, which added to the delay in implementing the BRAC homeless assistance provision. During the 2005 BRAC homeless assistance process, it took HUD an average of 666 days, ranging from 8 to 1,777 days, to approve the 51 redevelopment plans that included notices of interest for homeless assistance (see fig. 11). The BRAC statute required that HUD complete an initial review within 60 days of receipt of the redevelopment plan. HUD and DOD regulations construed this as requiring that HUD complete that review within 60 days after receipt of a completed plan. HUD requested additional information for 45 of the 51 plans with interest from homeless assistance providers in order to consider the plans complete. HUD officials stated that this interpretation of the BRAC statute enabled HUD and the LRAs to communicate further about the requirements for the redevelopment plan submission. Fifteen of the 51 redevelopment plans were approved within the statutory deadlines, as construed by HUD and DOD. However, even working from the dates on which HUD considered the LRAs’ redevelopment plans complete, or, in the case of plans for which HUD issued a preliminary adverse determination, from the resubmission date, HUD took, on average, 151 days longer than allowed by statute to review the redevelopment plans. 
According to Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1), an agency’s organizational structure should provide a framework to achieve agency objectives, including compliance with applicable laws and regulations and the effective and efficient use of agency resources. However, HUD did not effectively dedicate resources for reviewing the surge of LRA redevelopment plans to meet its 60-day time frame for review of the plans. According to HUD officials, two HUD headquarters staff members were assigned to review 125 LRA redevelopment plans, 51 of which had notices of interest from homeless assistance providers. HUD staff suggested that additional, temporary staff at the headquarters level and increased involvement of field staff could potentially expedite the review times, although they stated additional funding would be required. However, HUD has not fully developed options to address reviewing the surge of plans. Both DOD and HUD officials told us that, for some redevelopment plans, HUD’s review time was lengthened because DOD directed HUD to prioritize the review of plans of bases ready to be conveyed. In doing so, HUD delayed the review of plans which other bases’ LRAs may have submitted earlier. HUD officials added that while DOD’s prioritization of their reviews partially contributed to the delays, the underlying cause was HUD’s insufficient number of dedicated staff resources. Without sufficient staff resources dedicated to the review of LRA redevelopment plans and homeless assistance submissions, HUD was not typically able to meet the 60-day deadline set forth in the BRAC statute, and the BRAC homeless assistance process was delayed. During interviews, homeless assistance providers, LRA officials, and military officials provided examples to us of how the length of the HUD review contributed to the longer time frame for the process, affected their ability to move forward, or required additional effort to manage. 
For example, when asked about challenges encountered in the BRAC homeless assistance process, one DOD official responded that the HUD review process took 4 years. A homeless assistance provider stated that it took HUD approximately 3 years to approve the redevelopment plan. An LRA told us that due to delays in the HUD review process, the LRA could not move forward with design guidelines or zoning regulations, which slowed the overall redevelopment process. Another LRA stated that it cost the city $55,000 in staff and incidental costs while it awaited HUD’s review. In addition, since the BRAC homeless assistance process often spanned several years, multiple parties told us they experienced staff turnover and had to reeducate existing staff and brief new staff on the process, which took additional time and effort. Without a means to ensure that sufficient staff resources are dedicated to HUD’s review process, it will be difficult for HUD to provide reasonable assurance that the delays experienced during BRAC 2005 will not be repeated in the event of future BRAC rounds, potentially hindering the effectiveness of the homeless assistance process as established and ultimately the redevelopment of the closed base. BRAC 2005 was the largest and costliest BRAC round in DOD history, and its closure of 125 military bases with surplus property affected the economies of the surrounding communities. While each community faced uncertainty regarding the loss of local business and jobs, the BRAC statute also offered an opportunity for homeless assistance providers to receive no-cost property conveyances and help address local homelessness needs. With 75 providers expected to receive nearly 50 parcels of property and over $29 million in assistance, the 2005 BRAC homeless assistance program offered benefits. 
However, HUD and DOD have no requirement to track whether those plans are executed as agreed upon, whether the actual property is conveyed to the homeless assistance providers, whether homeless assistance providers are implementing the program consistent with the approved agreements, or whether the conveyance reverts back to the LRA at no cost if a provider drops out of the agreement. Without a requirement to track the status of all BRAC homeless assistance conveyances, it will be difficult for HUD and DOD to identify the overall effectiveness of those conveyances on their homeless assistance goals and determine whether program changes are needed to improve the process in any future BRAC round. The BRAC homeless assistance process provides needed assistance to the homeless population across the nation, but the BRAC regulations state it must be balanced against the redevelopment activities of the community. Delays in redevelopment as the communities consider the homeless assistance program can waste DOD and LRA financial resources, create unfulfilled expectations for program participants, and ultimately jeopardize the success of the BRAC homeless assistance program by impeding the time frame and feasibility for homeless individuals and families to receive assistance and for the ultimate redevelopment of the closed base. Limited and unclear information from HUD and DOD to homeless assistance providers and LRAs on what should be included in tours of on-base property, notices of interest, and legally binding agreements has contributed to delays in submitting notices and even resulted in withdrawals by homeless assistance providers from the process. Similarly, limited information from HUD and DOD to LRA decision makers on alternatives to on-base property conveyances has contributed to timeline extensions and additional costs to DOD to maintain the properties to be conveyed. 
Moreover, we found that the surge in HUD’s responsibilities when a BRAC round is announced results in resource challenges for the department. Although many LRAs we spoke with agreed that HUD provided expertise and advice on the homeless assistance process, with few dedicated resources HUD did not provide a timely review of homeless assistance submissions and redevelopment plans. Homeless assistance providers benefited from obtaining no-cost property or financial assistance, and awareness of homelessness issues was elevated among local community leaders during the BRAC 2005 round, but the challenges of limited information and dedicated HUD resources emerged that hampered the timeliness and success of the program. As Congress considers whether to authorize another BRAC round, efforts by HUD and DOD to address these challenges would help to minimize delays and improve the effectiveness of the program. We recommend the following six actions: To help determine the effectiveness of BRAC homeless assistance conveyances, the Secretaries of Housing and Urban Development and Defense should update the BRAC homeless assistance regulations to require that conveyance statuses be tracked. These regulatory updates could include requiring DOD to track and share disposal actions with HUD and requiring HUD to track the status following disposal, such as type of assistance received by providers and potential withdrawals by providers. 
To assist homeless assistance providers and LRAs in completing the steps of the BRAC homeless assistance process within required time frames, to provide additional information to reduce unfulfilled expectations about the decisions made in executing the homeless assistance agreements, and to promote a greater dissemination of this information, the Secretaries of Housing and Urban Development and Defense, for each of the following four elements, should update the BRAC homeless assistance regulations; establish information-sharing mechanisms, such as a website or informational pamphlets; or develop templates to include (1) specific guidance that clearly identifies the information that should be provided to homeless assistance providers during tours of on-base property, such as the condition of the property; (2) information for homeless assistance providers to use for preparing their notices of interest; (3) guidance for legally binding agreements and clarification on the implications of unsigned agreements; and (4) specific information on legal alternatives to providing on-base property, including acceptable alternative options such as financial assistance or off-base property in lieu of on-base property, information about rules of sale for on-base property conveyed to homeless assistance providers, and the circumstances under which it is permissible to sell property for affordable housing alongside the no-cost homeless assistance conveyance. To help improve the timeliness of the HUD review process, the Secretary of Housing and Urban Development should develop options to address the use of staff resources dedicated to the reviews of bases during a BRAC round, such as assigning temporary headquarters staff or utilizing current HUD field staff. We provided a draft of this report to HUD and DOD for review and comment. 
In written comments, HUD generally concurred with all six of the recommendations, including five that would need to be jointly implemented with DOD, and identified some actions it intends to take to address them. DOD partially concurred with three of the joint recommendations and did not concur with the remaining two joint recommendations. HUD’s and DOD’s comments are summarized below and reprinted in their entirety in appendixes III and IV, respectively. HUD and DOD also provided technical comments, which we incorporated as appropriate. HUD generally concurred and DOD partially concurred with the first recommendation to update the BRAC homeless assistance regulations to require that conveyance statuses be tracked, which could include requiring DOD to track and share disposal actions with HUD and requiring HUD to track the status following disposal. HUD stated that it is willing to update the BRAC homeless assistance regulations to track the conveyances of property for homeless assistance, but noted that it will require DOD agreement to do so because the regulations are joint. In its response, DOD stated that while it concurs in the value of tracking homeless assistance and other conveyances, it can do so without any change to existing regulations. DOD did not identify any actions it will take on how to track the homeless assistance conveyances in the absence of a regulatory update, and also did not indicate that it would work with HUD to update the regulations. Moreover, DOD did not explain how program staff would know to track the conveyance status in the absence of guidance requiring them to do so. As we noted in the report, both departments need to be involved in tracking the conveyance status; DOD is in the best position to know the status of the conveyances prior to the property disposal, and HUD is in the best position to communicate with the LRAs to know the status of the conveyances following property disposal. 
Also as noted in the report, HUD and DOD officials stated that they saw value in tracking the conveyance statuses. By updating the regulations, both departments can jointly commit to tracking long-term conveyance status information and, in turn, providing timely and useful information about the BRAC homeless assistance program. We continue to believe that updating the BRAC homeless assistance regulations to require the tracking of conveyances of property for homeless assistance will provide HUD and DOD with better insight into the effectiveness of the BRAC homeless assistance program and help identify adjustments that may be needed to improve program processes or procedures to be used in any future BRAC rounds. HUD generally concurred and DOD partially concurred with the second recommendation to update the BRAC homeless assistance regulations, establish information-sharing mechanisms, or develop templates to include specific guidance that clearly identifies the information that should be provided to homeless assistance providers during tours of on-base property, such as the condition of the property. HUD stated that it will update its BRAC guidebook, website, and presentations to provide clarifying information for homeless assistance providers regarding what information should be included during tours of on-base property. HUD also noted in its response that this will require DOD and military department agreement to implement and that the provision of information about the condition of on-base property and access to that property is under the purview of the military department. DOD stated that while it already provides generic information about the property, the LRAs and interested homeless assistance providers can undertake facility assessments following the tours. However, DOD did not provide additional detail or explanation about how it would provide information about the condition of the property or access to it. 
As we stated in the report, we found that the level of detail and property access that LRAs granted to providers varied. As a result, some providers withdrew from the process after they obtained more information about the condition of the property and determined it was no longer a feasible project. These withdrawals left the LRAs needing to spend additional time trying to find other providers for property designated for homeless assistance and jeopardized the success of the BRAC homeless assistance program by impeding the ability of individuals and families experiencing homelessness to receive assistance. We also noted that, while the LRAs might not have completed a facilities survey or infrastructure inspection to provide property information by the time the tours for providers were held, some homeless assistance providers we interviewed suggested that details on when this information would be provided might be helpful. We continue to believe that specific guidance is needed to help ensure that information regarding tours of on-base property—such as property condition or, in the case that the information is not available prior to the tours, details on when information about property condition might be available—is provided to homeless assistance providers, thus helping to ensure they have the knowledge necessary to make an informed decision about the BRAC homeless assistance process, including the time frame and feasibility of the proposed homeless assistance. HUD generally concurred and DOD did not concur with the third recommendation to update the BRAC homeless assistance regulations, establish information-sharing mechanisms, or develop templates to include information for homeless assistance providers to use in preparing their notices of interest. HUD stated that it will update its BRAC guidebook, website, and presentations to provide clarifying information for homeless assistance providers to use in preparing their notices of interest. 
HUD also stated that it considered the current regulations and BRAC guidebook sufficient to inform providers as long as LRAs did not place additional requirements, which may create an undue burden for providers. In its response, DOD stated that the existing regulatory guidance is adequate for providers’ expressions of interest, given that these expressions evolve as the redevelopment planning effort proceeds and they learn more about the property—the process of which the second recommendation above is intended to expedite. As we noted in the report, while the regulations relevant to the third recommendation provide general information about what should be included in homeless assistance providers’ notices of interest, not all participants in the BRAC process were aware of the regulations, and LRAs sometimes requested additional information that was not required. Homeless assistance providers we interviewed told us that they did not receive clear information on the full extent of what to include in their notices of interest, which contributed to providers being removed from consideration for BRAC homeless assistance properties. Among the 75 providers whose notices of interest were rejected, we identified 17 examples where the LRA and HUD agreed that the notices of interest were incomplete. Providers said they needed more shared and specific information on what to include, and some providers suggested that a template or additional examples of notices of interest would have provided clarity. Additionally, LRA officials told us they often requested additional time to allow providers to complete the notices of interest, contributing to extensions in the process that resulted in LRAs taking an average of 654 days to submit their redevelopment plans—more than twice the 270-day deadline given to LRAs to submit their plans following the end of the period for receipt of notices of interest. 
Furthermore, nothing in the recommendation requires that regulations be changed if the departments do not wish to do so; rather, at the departments’ discretion, we recommended that the departments update the regulations, establish information-sharing mechanisms such as a website or informational pamphlets, or develop templates. We believe that HUD’s proposed action to update its BRAC guidebook, website, and presentations, if implemented, is responsive to the recommendation and can help ensure that homeless assistance providers are able to complete their notices of interest with the required information necessary for consideration, which could assist LRAs in submitting their redevelopment plans on time and ultimately accelerate the BRAC homeless assistance and base redevelopment process. We continue to believe that DOD should work with HUD to implement the joint recommendation. HUD concurred and DOD partially concurred with the fourth recommendation to update the BRAC homeless assistance regulations, establish information-sharing mechanisms, or develop templates to include guidance for legally binding agreements and clarification on the implications of unsigned agreements. HUD stated that it will update its BRAC guidebook, website, and presentations to provide clarifying information for homeless assistance providers to use in preparing legally binding agreements and on the implications of unsigned agreements. DOD did not commit to taking any actions to provide this information and instead noted that any action should ensure that a legally binding agreement does not bind DOD to disposal actions it is unable to carry out. Nothing in the recommendation requires DOD to sign an agreement it cannot carry out. DOD further noted that the purpose of the legally binding agreement is to provide remedies and recourse for the LRA and provider in carrying out an accommodation following property disposal. 
We agree that legally binding agreements can provide recourse, but we found that some agreements were being approved prior to being signed and that providers did not know that unsigned agreements would limit their recourse in the process. For example, we found that the LRA could subsequently alter unsigned agreements, potentially affecting the final conveyance and ultimately the feasibility of the homeless assistance to be provided. In addition, during the course of our review we found that the limited information and specificity in the regulations contributed to delays in approving base redevelopment plans, as a HUD general counsel official told us that she requested revisions to approximately 80 percent of legally binding agreements received and addressing the revisions required additional time. DOD officials told us that having more standardized information, such as a standard template, could help HUD’s review process—consistent with what we are recommending. We believe that HUD’s proposed actions to clarify information regarding legally binding agreements and the implications of unsigned agreements, if implemented, are responsive to the recommendation and could help facilitate the timeliness of HUD’s review and provide additional awareness for homeless assistance providers regarding the timeliness and feasibility of the proposed homeless assistance. We continue to believe that DOD should take similar actions to provide additional information and help ensure the expeditious redevelopment of bases closed under BRAC. 
HUD generally concurred and DOD did not concur with the fifth recommendation to update the BRAC homeless assistance regulations, establish information-sharing mechanisms, or develop templates to include specific information on legal alternatives to providing on-base property, including acceptable alternative options such as financial assistance or off-base property in lieu of on-base property, information about rules of sale for on-base property conveyed to homeless assistance providers, and under what circumstances it is permissible to sell property for affordable housing alongside the no-cost homeless assistance conveyance. HUD stated that it will update its BRAC guidebook, website, and presentations to clarify that the use of off-base property and financial assistance are acceptable alternate means of homeless assistance accommodation in base redevelopment plans and to include examples of alternatives to on-base property that have been approved to date. HUD also stated that this will require DOD and military department agreement to implement. DOD did not concur with the recommendation. In its response, DOD stated that providers may only be considered through specific expressions of interest in surplus BRAC property, and these suggested alternatives may only be considered within the context of what is legally permissible given the specific circumstances at each installation. Nothing in the recommendation suggests that DOD identify alternatives that are not legally permissible or indicates that all alternatives should be offered in every circumstance; rather, we found that when alternatives were being considered, all parties lacked information about which types of alternatives were legally permissible. While providers may only express interest in surplus BRAC property, we found that LRAs may offer providers alternatives to conveyances of on-base property to better balance economic redevelopment and the needs of the homeless in those communities. 
The BRAC regulations contemplate that homeless assistance can be provided in a variety of ways, but we found that the regulations did not provide detailed information on alternatives to on-base property, and no sources of publicly available information, such as a website or pamphlet, existed to disseminate this information. Further, DOD noted in its response that HUD may provide examples of alternatives to on-base property that have been approved to date as part of a local accommodation, to serve as examples for LRAs and providers. DOD’s suggestion is consistent with HUD’s response. We believe that this proposed action—along with HUD’s plan to update its BRAC guidance, website, and presentations—may provide LRAs and homeless assistance providers with additional feasible options for homeless assistance through the BRAC process. Finally, HUD generally concurred with the sixth recommendation to develop options to address the use of staff resources dedicated to the reviews of bases during a BRAC round, such as assigning temporary headquarters staff or utilizing current HUD field staff. HUD stated that it temporarily assigned headquarters staff and utilized field office staff during the 2005 round of BRAC. However, as we noted in our report, HUD’s efforts to provide staff resources during the 2005 round of BRAC were insufficient, resulting in HUD typically being unable to meet the 60-day deadline set forth in the BRAC statute and the BRAC homeless assistance process being delayed. HUD also stated that, in the event of another BRAC round the size of 2005, it would encourage Congress to allocate funding for appropriate temporary staff resources to assist the department in meeting important timelines. However, even without dedicated funding, we believe that HUD should consider ways to temporarily reorganize or dedicate additional staff members to meet its 60-day time frame for review of the redevelopment plans.
By providing sufficient staff resources, HUD may be able to minimize the delays it experienced during BRAC 2005, ultimately improving the timeliness and effectiveness of future BRAC rounds and expediting base redevelopment. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretary of Housing and Urban Development. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4523 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. The objectives of our review were to address (1) the types of assistance provided to homeless assistance providers as part of the 2005 base realignment and closure (BRAC) round and the extent to which the Departments of Defense (DOD) and Housing and Urban Development (HUD) track implementation of the agreements reached and (2) any benefits and challenges encountered as DOD, HUD, and the local redevelopment authorities (LRA) addressed provisions for homeless assistance as a result of BRAC 2005. To define the scope for both objectives, we first identified total bases closed with surplus property and then classified those bases as (1) receiving notices of interest and implementing homeless assistance, (2) receiving notices of interest but not implementing homeless assistance, and (3) not receiving notices of interest. Specifically, to identify the scope of our review of 125 bases closed with surplus property during the 2005 BRAC round, we obtained comprehensive lists of base closures with surplus property from DOD and HUD. We then reconciled the lists and requested follow-up information from each department, as applicable. 
We also interviewed agency officials at DOD and HUD knowledgeable about the data to help ensure that the lists were complete, and we determined they were complete and sufficient to use for our engagement. To classify the bases into these three categories, two analysts independently reviewed an internal HUD document used to track the BRAC 2005 homeless assistance process. From our review, we identified 39 bases where notices of interest from homeless assistance providers were approved for implementing homeless assistance, 12 bases that received notices of interest that were not approved for assistance, and 74 bases that did not receive notices of interest. The HUD tracking document identified bases that did and did not implement homeless assistance; however, HUD staff did not formally track the number of bases where notices of interest did not result in legally binding agreements. To classify these bases, two analysts reviewed the HUD tracking document for comments that suggested this categorization. From these comments, the two analysts independently identified a set of bases where notices of interest did not result in legally binding agreements and reconciled their lists as appropriate. Additionally, for those bases that the HUD tracking document identified as implementing homeless assistance, we further reviewed the redevelopment plans and HUD approval documents. From these reviews, we found five instances in which the homeless assistance provider withdrew its notice of interest in exchange for homeless assistance from the community outside of the BRAC process. HUD’s tracking document identified two of these instances as the base implementing homeless assistance and three of these instances as the base not implementing homeless assistance.
Because the homeless assistance providers at these five bases withdrew their notices of interest for assistance and HUD did not review a legally binding agreement between the provider and LRA, we categorized these five instances as bases that received notices of interest but did not implement homeless assistance. We also selected a nonprobability sample of three bases that did not implement homeless assistance to determine whether additional bases received notices of interest that were not identified in the comments in HUD’s tracking document. We selected bases that represented a range of size, geography, and military service. Ultimately, we did not find that any of the three bases we selected had received notices of interest. While the results of our review of HUD’s tracking document, redevelopment plans, and judgmental sample indicated that 12 bases received notices of interest but did not implement homeless assistance, it is possible that more bases received notices of interest but did not ultimately convey property or provide other types of assistance to homeless assistance providers. To determine the types of assistance provided to homeless assistance providers for the 51 LRAs that received interest from homeless assistance providers, we reviewed the LRAs’ homeless assistance submissions in their redevelopment plans to HUD. Specifically, we identified and analyzed the number of notices of interest received, the number and types of properties requested by the homeless assistance providers, the number and types of properties the LRA agreed to give to the homeless assistance providers, and the number and description of the conveyances of property or money that have occurred. We also analyzed the redevelopment plans to determine reasons why some notices of interest were not approved.
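The three-way categorization described above can be expressed as a simple tally. The sketch below is purely illustrative: the record structure and field names are hypothetical stand-ins for HUD's tracking document, and only the category counts (39, 12, and 74 of the 125 bases) come from this review.

```python
from collections import Counter

# Hypothetical per-base records standing in for HUD's tracking document;
# the counts (39 + 12 + 74 = 125) mirror the review's results.
bases = (
    [{"noi_received": True, "implemented": True}] * 39
    + [{"noi_received": True, "implemented": False}] * 12
    + [{"noi_received": False, "implemented": False}] * 74
)

def classify(base):
    """Assign a base to one of the review's three categories."""
    if not base["noi_received"]:
        return "no notices of interest"
    if base["implemented"]:
        return "implemented homeless assistance"
    return "notices of interest, no assistance implemented"

counts = Counter(classify(b) for b in bases)
print(counts)
```

In practice, the equivalent of the `implemented` flag was derived independently by two analysts from comments in the tracking document and then reconciled, and the five bases whose providers withdrew their notices of interest were assigned to the middle category.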
To evaluate the extent to which DOD and HUD tracked the implementation of the agreements, we interviewed DOD and HUD officials regarding their roles in tracking the assistance provided to homeless assistance providers and compared these roles to criteria for management control activities in Standards for Internal Control in the Federal Government. For the 39 bases with approved conveyances, we collected data as of October 2014 from DOD’s military departments, the LRAs, and homeless assistance providers about the types of properties or money conveyed and the status of that conveyance. We also reviewed the tracking requirements for the Title V homeless assistance program to determine whether any applicable comparisons could be made to the tracking requirements for the BRAC homeless assistance program. To evaluate the benefits and challenges encountered as DOD, HUD, and the LRAs addressed provisions for homeless assistance, we collected documentary and testimonial evidence through a two-part approach. First, we collected data from the redevelopment plans from all 51 LRAs that received interest from homeless assistance providers. Specifically, we identified requests for extensions and completion dates for required process steps, such as the homeless assistance submission to HUD and approval by HUD. We then analyzed these data and compared the actual timelines to the required timelines in the BRAC statute and regulations. Second, to gather in-depth information from a sample of the bases with surplus property closed as a result of BRAC 2005, we conducted semistructured interviews regarding benefits and challenges of the homeless assistance process with the LRAs and homeless assistance providers from 23 closed bases.
We attempted to contact all homeless assistance providers affiliated with those base closures; however, we were unable to reach all providers and ultimately contacted 54 providers associated with the 23 base closures. We visited 11 of these sites in four locations—Georgia, New Jersey, Pennsylvania, and California—and we contacted 12 of these sites via phone. For those bases we visited, we also interviewed the HUD field office representatives and project managers from the military departments and DOD’s Office of Economic Adjustment—DOD’s primary organization providing assistance to communities affected by BRAC. We selected bases to reflect a range of factors, including the number of homeless notices of interest received, base size, geographical representation, and types of homeless assistance conveyance provided, and to include representation from each military service. Ultimately, we contacted 12 bases that received notices of interest and implemented homeless assistance, 9 bases that received notices of interest and did not implement homeless assistance, and 2 bases that did not receive notices of interest. These interviews provide examples of benefits and challenges faced by each individual party, but the information obtained is not generalizable to all parties involved in the homeless assistance process. Further, we compared information about the challenges cited with criteria for information and communications in Standards for Internal Control in the Federal Government. See table 3 for a summary of bases and the number of associated homeless assistance providers we contacted. In order to assess the reliability of the data presented in this report, we corroborated the data in HUD’s tracking document with the data in the LRAs’ redevelopment plans and interviewed knowledgeable agency officials at HUD regarding data reliability, including any limitations of HUD’s data related to its completeness and accuracy.
After assessing the data, we determined that the data were sufficiently reliable for the purposes of determining the types of assistance provided to homeless assistance providers and the challenges encountered in addressing the homeless assistance provisions, and we discuss our findings in the report. We conducted this performance audit from April 2014 to March 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Since the Defense Base Closure and Realignment (BRAC) Act of 1990 was enacted, there have been two distinct authorities governing the homeless assistance process on military bases closed under BRAC. From the enactment of the BRAC statute in 1990 until October 1994, Title V of the McKinney-Vento Homeless Assistance Act—which allows certain excess, surplus, unutilized, and underutilized federal property to be used to provide assistance to the homeless—applied with slight modifications to BRAC closures. For properties on military bases approved for closure after October 25, 1994, amendments to the BRAC statute made in the Base Closure Community Redevelopment and Homeless Assistance Act of 1994 (Redevelopment Act) govern the homeless assistance process. The Redevelopment Act aimed to revise and improve the process for disposing of buildings and property at bases closed under BRAC. According to the Department of Housing and Urban Development (HUD), many individuals involved in military base reuse at the time had concluded that Title V did not adequately address the multiple interests related to large parcels of surplus federal properties such as military bases. 
Title V of the McKinney-Vento Homeless Assistance Act designated the General Services Administration, Department of Health and Human Services, and HUD to administer the homeless assistance program, with the General Services Administration delegating its property disposal authority to the Department of Defense (DOD) for bases closed under BRAC. Under the Title V homeless assistance program, DOD was required to submit a description of its vacant base-closure properties to HUD. HUD would then determine whether any property was suitable for use to assist the homeless. HUD would publish its determination in the Federal Register, at which time qualified homeless assistance providers could apply for and receive the requested property. Following transfer of the property to the homeless assistance provider, the Department of Health and Human Services was to perform compliance oversight and ensure that the grantee was using the property according to the terms in the approved application. In contrast to the Title V homeless assistance program, the BRAC statute as amended by the Redevelopment Act is overseen by HUD and DOD. In general, the Redevelopment Act replaced the Title V homeless assistance program with a community-based process, in which the local redevelopment authority (LRA) prepares a redevelopment plan after consulting with homeless assistance providers and other community groups affected by the base closure, and HUD ensures that the plan appropriately balances the needs of the community for economic and other development with the needs of the homeless. Subsequent to HUD approval and other procedural steps, DOD may transfer properties for homeless assistance purposes. The number of bases at which property or funding for homeless assistance were provided has varied for each round of base closures, as seen in figure 12. 
Under the earlier BRAC rounds, which were governed by the Title V homeless assistance program, properties at nine bases were transferred to homeless assistance providers, including properties at seven bases in the 1991 BRAC round and at two bases in the 1993 BRAC round. Under the Redevelopment Act homeless assistance program, properties were transferred or offers of financial assistance were made to homeless assistance providers at 92 bases, including at 53 bases in the 1995 BRAC round and at 39 bases in the 2005 BRAC round.

In addition to the contact named above, Laura Durland (Assistant Director), Emily Biskup, Grace Coleman, Chris Cronin, Lorraine Ettaro, Erica Miles, Silvia Porres, Jodie Sandel, Amie Steele, Erik Wilkins-McKee, and Michael Willems made key contributions to this report.

Federal Real Property: More Useful Information to Providers Could Improve the Homeless Assistance Program. GAO-14-739. Washington, D.C.: September 30, 2014.
DOD Joint Bases: Implementation Challenges Demonstrate Need to Reevaluate the Program. GAO-14-577. Washington, D.C.: September 19, 2014.
Defense Infrastructure: Communities Need Additional Guidance and Information to Improve Their Ability to Adjust to DOD Installation Closure or Growth. GAO-13-436. Washington, D.C.: May 14, 2013.
Military Bases: Opportunities Exist to Improve Future Base Realignment and Closure Rounds. GAO-13-149. Washington, D.C.: March 7, 2013.
DOD Joint Bases: Management Improvements Needed to Achieve Greater Efficiencies. GAO-13-134. Washington, D.C.: November 15, 2012.
Military Base Realignments and Closures: The National Geospatial-Intelligence Agency’s Technology Center Construction Project. GAO-12-770R. Washington, D.C.: June 29, 2012.
Military Base Realignments and Closures: Updated Costs and Savings Estimates from BRAC 2005. GAO-12-709R. Washington, D.C.: June 29, 2012.
Veteran Homelessness: VA and HUD Are Working to Improve Data on Supportive Housing Program. GAO-12-726. Washington, D.C.: June 26, 2012.
Homelessness: Fragmentation and Overlap in Programs Highlight the Need to Identify, Assess, and Reduce Inefficiencies. GAO-12-491. Washington, D.C.: May 10, 2012.
Military Base Realignments and Closures: Key Factors Contributing to BRAC 2005 Results. GAO-12-513T. Washington, D.C.: March 8, 2012.
Homeless Women Veterans: Actions Needed to Ensure Safe and Appropriate Housing. GAO-12-182. Washington, D.C.: December 23, 2011.
Excess Facilities: DOD Needs More Complete Information and a Strategy to Guide Its Future Disposal Efforts. GAO-11-814. Washington, D.C.: September 19, 2011.
Military Base Realignments and Closures: Review of the Iowa and Milan Army Ammunition Plants. GAO-11-488R. Washington, D.C.: April 1, 2011.
GAO’s 2011 High-Risk Series: An Update. GAO-11-394T. Washington, D.C.: February 17, 2011.
Defense Infrastructure: High-Level Federal Interagency Coordination Is Warranted to Address Transportation Needs beyond the Scope of the Defense Access Roads Program. GAO-11-165. Washington, D.C.: January 26, 2011.
Military Base Realignments and Closures: DOD Is Taking Steps to Mitigate Challenges but Is Not Fully Reporting Some Additional Costs. GAO-10-725R. Washington, D.C.: July 21, 2010.
Rural Homelessness: Better Collaboration by HHS and HUD Could Improve Delivery of Services in Rural Areas. GAO-10-724. Washington, D.C.: July 20, 2010.
Homelessness: A Common Vocabulary Could Help Agencies Collaborate and Collect More Consistent Data. GAO-10-702. Washington, D.C.: June 30, 2010.
Defense Infrastructure: Army Needs to Improve Its Facility Planning Systems to Better Support Installations Experiencing Significant Growth. GAO-10-602. Washington, D.C.: June 24, 2010.
Homelessness: Information on Administrative Costs for HUD’s Emergency Shelter Grants Program. GAO-10-491. Washington, D.C.: May 20, 2010.
Military Base Realignments and Closures: Estimated Costs Have Increased While Savings Estimates Have Decreased Since Fiscal Year 2009. GAO-10-98R. Washington, D.C.: November 13, 2009.
Military Base Realignments and Closures: Transportation Impact of Personnel Increases Will Be Significant, but Long-Term Costs Are Uncertain and Direct Federal Support Is Limited. GAO-09-750. Washington, D.C.: September 9, 2009.
Military Base Realignments and Closures: DOD Needs to Update Savings Estimates and Continue to Address Challenges in Consolidating Supply-Related Functions at Depot Maintenance Locations. GAO-09-703. Washington, D.C.: July 9, 2009.
Defense Infrastructure: DOD Needs to Periodically Review Support Standards and Costs at Joint Bases and Better Inform Congress of Facility Sustainment Funding Uses. GAO-09-336. Washington, D.C.: March 30, 2009.
Military Base Realignments and Closures: DOD Faces Challenges in Implementing Recommendations on Time and Is Not Consistently Updating Savings Estimates. GAO-09-217. Washington, D.C.: January 30, 2009.
Military Base Realignments and Closures: Army Is Developing Plans to Transfer Functions from Fort Monmouth, New Jersey, to Aberdeen Proving Ground, Maryland, but Challenges Remain. GAO-08-1010R. Washington, D.C.: August 13, 2008.
Defense Infrastructure: High-Level Leadership Needed to Help Communities Address Challenges Caused by DOD-Related Growth. GAO-08-665. Washington, D.C.: June 17, 2008.
Defense Infrastructure: DOD Funding for Infrastructure and Road Improvements Surrounding Growth Installations. GAO-08-602R. Washington, D.C.: April 1, 2008.
Military Base Realignments and Closures: Higher Costs and Lower Savings Projected for Implementing Two Key Supply-Related BRAC Recommendations. GAO-08-315. Washington, D.C.: March 5, 2008.
Defense Infrastructure: Realignment of Air Force Special Operations Command Units to Cannon Air Force Base, New Mexico. GAO-08-244R. Washington, D.C.: January 18, 2008.
Military Base Realignments and Closures: Estimated Costs Have Increased and Estimated Savings Have Decreased. GAO-08-341T. Washington, D.C.: December 12, 2007.
Military Base Realignments and Closures: Cost Estimates Have Increased and Are Likely to Continue to Evolve. GAO-08-159. Washington, D.C.: December 11, 2007.
Military Base Realignments and Closures: Impact of Terminating, Relocating, or Outsourcing the Services of the Armed Forces Institute of Pathology. GAO-08-20. Washington, D.C.: November 9, 2007.
Military Base Realignments and Closures: Transfer of Supply, Storage, and Distribution Functions from Military Services to Defense Logistics Agency. GAO-08-121R. Washington, D.C.: October 26, 2007.
Defense Infrastructure: Challenges Increase Risks for Providing Timely Infrastructure Support for Army Installations Expecting Substantial Personnel Growth. GAO-07-1007. Washington, D.C.: September 13, 2007.
Military Base Realignments and Closures: Plan Needed to Monitor Challenges for Completing More Than 100 Armed Forces Reserve Centers. GAO-07-1040. Washington, D.C.: September 13, 2007.
Military Base Realignments and Closures: Observations Related to the 2005 Round. GAO-07-1203R. Washington, D.C.: September 6, 2007.
Military Base Closures: Projected Savings from Fleet Readiness Centers Likely Overstated and Actions Needed to Track Actual Savings and Overcome Certain Challenges. GAO-07-304. Washington, D.C.: June 29, 2007.
Military Base Closures: Management Strategy Needed to Mitigate Challenges and Improve Communication to Help Ensure Timely Implementation of Air National Guard Recommendations. GAO-07-641. Washington, D.C.: May 16, 2007.
Military Base Closures: Opportunities Exist to Improve Environmental Cleanup Cost Reporting and to Expedite Transfer of Unneeded Property. GAO-07-166. Washington, D.C.: January 30, 2007.
Military Bases: Observations on DOD’s 2005 Base Realignment and Closure Selection Process and Recommendations. GAO-05-905. Washington, D.C.: July 18, 2005.
Military Bases: Analysis of DOD’s 2005 Selection Process and Recommendations for Base Closures and Realignments. GAO-05-785. Washington, D.C.: July 1, 2005.
Military Base Closures: Observations on Prior and Current BRAC Rounds. GAO-05-614. Washington, D.C.: May 3, 2005.
Military Base Closures: Assessment of DOD’s 2004 Report on the Need for a Base Realignment and Closure Round. GAO-04-760. Washington, D.C.: May 17, 2004.
The 2005 BRAC round resulted in 125 closed bases with over 73,000 acres of surplus property available. The Defense Base Closure and Realignment Act, as amended, requires DOD and HUD to assist communities in determining the best reuse of land and facilities, balancing needs of the local economy with those of homeless individuals and families. GAO was mandated to review the extent to which DOD and HUD implemented the homeless assistance provisions while disposing of BRAC surplus property. This report addresses (1) the assistance provided as a result of BRAC 2005 and the extent to which DOD and HUD track its implementation and (2) any benefits and challenges encountered as DOD, HUD, and LRAs addressed homeless assistance provisions. GAO reviewed homeless assistance plans; interviewed DOD and HUD officials; and interviewed LRAs and homeless assistance providers from a nongeneralizable sample of 23 closed bases, selected based on size, geography, and types of assistance provided. A variety of homeless assistance was provided as a result of the 2005 round of base realignments and closures (BRAC), but the Departments of Defense (DOD) and Housing and Urban Development (HUD) do not require homeless assistance conveyance data to be tracked. Of the 125 large and small bases closed with surplus property, local redevelopment authorities (LRA) at 39 bases agreed to provide homeless assistance to 75 providers. If implemented, these agreements would provide nearly 50 parcels of property and over $29 million in total assistance. As of October 2014, GAO found that 27 of the 75 providers with agreements had received their property or monetary conveyances. However, DOD and HUD do not require tracking of the status of the homeless assistance conveyances. 
In contrast, the program administrator of the Title V homeless assistance program, which oversees conveyances for non-BRAC properties, developed policies to perform oversight in part because the government retains an interest in Title V properties. Without tracking the status of the conveyances, neither DOD nor HUD knows the extent to which properties are actually being conveyed; the extent to which the providers are using the properties for their intended use; the extent to which LRAs are making sufficient efforts to find a replacement provider in the event of a provider dropping out; and ultimately the effectiveness of the homeless assistance program. BRAC surplus property benefited homeless assistance efforts, but limited information and limited dedicated HUD resources contributed to challenges in the timeliness and feasibility of assistance provided. Homeless assistance providers GAO interviewed said that, among other things, the BRAC homeless assistance program provided the overall benefit of a no-cost property conveyance or financial assistance to support local homeless assistance efforts. However, LRAs and providers GAO interviewed also stated that they did not have sufficient and clear information from DOD and HUD regarding four steps of the homeless assistance process: (1) what information LRAs should give providers during property tours and workshops, (2) what information to include in providers' notices of interest about properties, (3) what information to include in developing legally binding agreements for conveying assistance, and (4) what alternatives are available to on-base property conveyances. For example, during required property tours and workshops, LRAs were unaware of what information to give and gave providers limited property condition information, which led to some providers withdrawing after they identified the cost of needed repairs.
Without detailed information on these four steps, LRAs and providers may not have the knowledge necessary to make informed decisions. LRA officials also stated that they appreciated advice from HUD staff on the BRAC process. However, GAO found that HUD did not have enough resources dedicated to meet the 60-day deadline in the BRAC statute for reviewing LRA redevelopment plans. According to HUD, two staff were assigned to review the plans, and HUD took an average of 151 days longer than allowed to approve redevelopment plans with homeless assistance. However, HUD has not developed options to address reviewing the surge of plans in any future BRAC rounds. Without a means to ensure that needed staff resources are dedicated to HUD's review process, it will be difficult for HUD to provide reasonable assurance that the delays experienced during the BRAC 2005 round will not be repeated. GAO recommends that DOD and HUD track conveyance status and provide clear information on four steps of the homeless assistance process. HUD generally concurred, and DOD either partially concurred or did not concur with these recommendations, stating that its existing guidance is sufficient. GAO believes these recommendations are still valid, as discussed in the report. GAO also recommends that HUD address staff resources during a BRAC round, and HUD generally concurred.
The financial statements and accompanying notes present fairly, in all material respects, in conformity with U.S. generally accepted accounting principles, the Foundation’s financial position as of September 30, 2004, and 2003, and the results of its activities and its cash flows for the fiscal years then ended. Material misstatements may nevertheless occur in information reported by the Foundation on its financial status to its Board of Directors and others as a result of the material weakness in internal control over financial reporting described in this report. As discussed in a later section of this report and in Note 12 to the financial statements, the Foundation continues to experience increasing difficulties in meeting its financial obligations. The Foundation’s continuing financial difficulties and deteriorating financial condition raise substantial doubt, for the third consecutive year, about its ability to continue as a going concern. The financial statements have been prepared under the assumption that the Foundation would continue as a going concern, and do not include any adjustments that would need to be made if the Foundation were to cease operations. Because of the material weakness in internal control discussed below, the Foundation did not maintain effective internal control over financial reporting (including safeguarding assets) or compliance with laws and regulations, and thus did not provide reasonable assurance that losses, misstatements, and noncompliance with laws material in relation to the financial statements would be prevented or detected on a timely basis. Our opinion is based on criteria established in our Standards for Internal Control in the Federal Government. The deteriorating financial condition of the Foundation led to further deterioration in its control over its financial reporting process during fiscal year 2004, impeding its ability to prepare timely and accurate financial statements.
The Foundation’s lack of funds left it without an individual with accounting and financial management expertise taking responsibility for its financial operations during the period, which prevented it from fulfilling key financial reporting responsibilities and contributed to its inability to maintain current and accurate financial records. We reported on this matter during our audit of the Foundation’s fiscal year 2003 financial statements. The Foundation’s Director of Finance and Administration resigned his paid position at the Foundation and became Treasurer of the Congressional Award Board of Directors, an unpaid position, during fiscal year 2003. He continued to perform, on a limited and voluntary basis, some of the duties associated with his former position during the first three quarters of fiscal year 2004, as the continued shortage of funds precluded the Foundation from hiring a replacement. This resulted in the Foundation continuing to be unable to fulfill its financial reporting responsibilities, particularly with respect to preparing timely and accurate financial statements. For example, because the Foundation did not always record transactions in its general ledger as they occurred during the year, numerous entries had to be made to the general ledger as late as 12 months after fiscal year-end. These entries were ultimately prepared and recorded by a part-time bookkeeper hired 6 months after the end of the fiscal year. However, these entries were not adequately reviewed by Foundation management to ensure their completeness and accuracy. This resulted in the need for management to make material adjustments to correct errors we identified during our audit. Additionally, the Foundation continued to lack appropriate written procedures for making closing entries in its financial records and for preparing complete and accurate financial statements.
At the conclusion of our audit of the Foundation’s fiscal year 2003 financial statements, we stressed to the Foundation’s management the importance of documenting the Foundation’s financial reporting policies and procedures, and further stressed that the policies and procedures should detail such functions as the monthly closing procedures, preparation of the financial statements, and review of financial data by management. The continued lack of written policies and procedures contributed to the errors we identified during our audit of the Foundation’s fiscal year 2004 financial statements. The Foundation was ultimately able to produce financial statements that were fairly stated in all material respects for fiscal years 2004 and 2003. However, the process was long and laborious, due to the lack of 1) appropriate written policies and procedures and 2) routine maintenance of the Foundation’s financial books and records by personnel experienced in accounting and financial management. As a result, material corrections were required between the first draft of the financial statements and the final version. Additionally, the Foundation’s continued lack of an effective financial reporting process forced us for the second consecutive year to notify its congressional oversight committees that we would be unable to meet our May 15, 2005, statutorily mandated audit reporting date. Consequently, the Foundation’s weakness in internal control over its financial reporting process resulted in its inability to prepare reliable financial statements on time and to produce financial information to support management decision making. This is especially critical in light of the Foundation’s precarious financial condition—when accurate and timely financial information is of utmost importance to make prudent and informed operational decisions. 
Foundation management asserted that, during the period, its internal control over financial reporting and compliance with laws and regulations was not effective, based on criteria established under Standards for Internal Control in the Federal Government. In making its assertion, Foundation management stated the need to improve control over financial reporting and compliance with laws and regulations. Although the weakness did not materially affect the final fiscal year 2004 financial statements as adjusted for misstatements identified by the audit process, this deficiency in internal control may adversely affect any decision by management that is based, in whole or in part, on information that is inaccurate because of the deficiencies. Unaudited financial information reported by the Foundation may also contain misstatements resulting from these deficiencies. Our tests for compliance with relevant provisions of laws and regulations disclosed one area of material noncompliance that is reportable under U.S. generally accepted government auditing standards. This concerns the Foundation’s ability to ensure that it has appropriate procedures for fiscal control and fund accounting and that its financial operations are administered by personnel with expertise in accounting and financial management. Specifically, section 104(c)(1) of the Congressional Award Act, as amended (2 U.S.C. § 804(c)(1)), requires the Director, in consultation with the Congressional Award Board, to “ensure that appropriate procedures for fiscal control and fund accounting are established for the financial operations of the Congressional Award Program, and that such operations are administered by personnel with expertise in accounting and financial management.” The Comptroller General is required by section 104(c)(2)(A) of the Congressional Award Act, as amended (2 U.S.C. 
§ 804(c)(2)(A)), to (1) annually determine whether the Director has substantially complied with the requirement to have appropriate procedures for fiscal control and fund accounting for the financial operations of the Congressional Award Program and to have personnel with expertise in accounting and financial management to administer the financial operations, and (2) report the findings in the annual audit report. We reported a material internal control weakness in financial reporting--due, in part, to a lack of written policies and procedures--in our audit report covering fiscal year 2003. For calendar year 2004, the Foundation still did not have appropriate written fiscal procedures for its financial operations. Additionally, the Foundation recorded entries for only half of the calendar year, leaving many of the financial transactions of the Foundation unrecorded during 2004. For 2004, because the Foundation did not have appropriate fiscal procedures and did not have an individual with expertise in accounting and financial management to routinely administer the procedures and account for the financial operations of the Foundation, we determined that the Director did not substantially comply with the requirements in section 104(c)(1) of the Congressional Award Act, as amended (2 U.S.C. § 804(c)(1)). Under the requirements of section 104(c)(2)(B) of the Congressional Award Act, as amended (2 U.S.C. § 804(c)(2)(B)), if the Director fails to comply with the requirements of section 104(c)(1) of the Act, the Director is to prepare, pursuant to section 108 of the Act, for the orderly cessation of the activities of the Board. The Foundation’s Board Chairman stated that during fiscal year 2005, its Board elected several new Board Members and the Foundation hired an accountant to focus on improving financial management. 
The newly elected Treasurer and Audit Committee Chair are working with the National Office staff to improve internal control over financial reporting and develop written fiscal policies and procedures for financial operations and reporting. Additionally, the accountant is to help ensure that accurate and timely accounting and reporting of financial information occurs. Except as noted above, our tests for compliance with selected provisions of laws and regulations for fiscal year 2004 disclosed no other instances of noncompliance that would be reportable under U.S. generally accepted government auditing standards. However, the objective of our audit was not to provide an opinion on overall compliance with laws and regulations. Accordingly, we do not express such an opinion. The Foundation incurred losses (decreases in net assets) of almost $168,000 and $6,000 in fiscal years 2004 and 2003, respectively. Although the Foundation’s expenses decreased by over $166,000 between fiscal years 2003 and 2004, revenues decreased even more--by about $290,000, largely attributable to a nearly $334,000 decline in contributions. Net assets as of September 30, 2004, were approximately $42,000. During fiscal year 2002, the Foundation borrowed $100,000, the maximum amount allowable against its revolving line of credit, due to ongoing cash flow problems associated with its daily operations. This debt, partially secured by a $50,000 certificate of deposit, remained outstanding at September 30, 2004. Note 12 to the financial statements acknowledges the Foundation’s increasing difficulties in meeting its financial obligations. While the Foundation has taken steps to decrease its expenditures and liabilities, those steps may not be sufficient to allow it to continue operations. For example, accounts payable at September 30, 2004, were approximately $135,500, with 86 percent of that amount representing unpaid balances owed to vendors from expenses incurred in fiscal year 2002. 
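The 86 percent aging figure cited above can be cross-checked against the accounts payable detail reported in note 9 to the financial statements, which states that $116,635 of the $135,503 balance at September 30, 2004, was attributable to fiscal year 2002. A minimal arithmetic sketch, using only those two reported amounts:

```python
# Cross-check of the accounts payable aging cited in the report.
# Figures are taken from note 9 to the financial statements: total
# accounts payable at September 30, 2004, and the portion attributable
# to goods and services received in fiscal year 2002.
total_payable = 135_503
fy2002_portion = 116_635

share = fy2002_portion / total_payable
print(f"FY2002 share of accounts payable: {share:.0%}")  # → 86%
```

The computed share (about 86.1 percent) is consistent with the rounded 86 percent figure in the text.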
Subsequent to the end of the fiscal year, the Foundation was able to negotiate with certain of its vendors to cancel nearly $39,000 in liabilities. However, unaudited financial data compiled by the Foundation as of September 30, 2005, showed that its financial condition has not improved. This raises substantial doubt about the Foundation’s ability to continue as a going concern, absent a means of generating additional funding. As discussed earlier, during fiscal year 2003, the Director of Finance and Administration resigned his position at the Foundation and became Treasurer of the Congressional Award Board of Directors. This move was in part because of the Foundation’s deteriorating financial condition. In another effort to keep expenses to a minimum, the Foundation reduced its staff by over one-half during fiscal year 2004. Additionally, during the second half of fiscal year 2004, the Foundation’s Board directed the National Director to reduce his pay by 50 percent in order to further control Foundation expenses. The National Director retired as of September 30, 2004, and the Foundation promoted the Program Director to serve as the Acting National Director. In its plan to deal with its deteriorating financial condition and increase its revenues, the Foundation modified its approach to fundraising by holding more frequent but smaller and less expensive fundraising events than in the past. However, these smaller fundraisers did not increase contributions, which, as noted above, decreased by $334,000, or 54 percent, between fiscal years 2003 and 2004. To further improve fundraising efforts, the Foundation stated that its Board created a Congressional Liaison Committee, Development Committee, and Program Committee during fiscal year 2005. 
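The reported decline in contributions ($334,000, or 54 percent) implies approximate contribution levels for the two fiscal years. The sketch below derives them from the two reported figures; the results are estimates only, since both inputs are rounded in the report and the underlying contribution totals are not stated in the text:

```python
# Derive implied contribution levels from the reported decline.
# The report states contributions fell by about $334,000, or 54 percent,
# between fiscal years 2003 and 2004; both figures are rounded, so the
# derived levels are approximations.
decline = 334_000
pct_decline = 0.54

fy2003_contributions = decline / pct_decline           # roughly $618,500
fy2004_contributions = fy2003_contributions - decline  # roughly $284,500
print(f"Implied FY2003 contributions: ${fy2003_contributions:,.0f}")
print(f"Implied FY2004 contributions: ${fy2004_contributions:,.0f}")
```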
The newly elected Development Chairperson is leading fundraising initiatives in the corporate community, including pursuing grant opportunities, and the Foundation continues to work with professional fundraisers to more actively involve congressional members. At present, the Foundation is prohibited from receiving federal funds, but is permitted to receive certain in-kind and indirect resources, as explained in Note 5 to the financial statements. The Foundation has attempted to secure federal funding through a direct appropriation, but has been unsuccessful. On July 14, 2005, the Senate passed S. 335 to reauthorize the Congressional Award Board, which terminated on October 1, 2004, until October 1, 2009. The bill was received in the House of Representatives and was referred to the Committee on Education and the Workforce on July 19, 2005. Subsequently, on September 22, 2005, H.R. 3867, which is identical to S. 335, was introduced in the House to reauthorize the Board and was also referred to the Committee on Education and the Workforce. The ultimate outcome of the reauthorization efforts was unknown at the date of our report. The Foundation’s management is responsible for preparing the annual financial statements in conformity with U.S. generally accepted accounting principles; establishing, maintaining, and assessing the Foundation’s internal control to provide reasonable assurance that the Foundation’s control objectives are met; and complying with applicable laws and regulations. We are responsible for obtaining reasonable assurance about whether (1) the financial statements are presented fairly, in all material respects, in conformity with U.S. generally accepted accounting principles and (2) management maintained effective internal control, the objectives of which are the following. Financial reporting–transactions are properly recorded, processed, and summarized to permit the preparation of financial statements, in conformity with U.S. 
generally accepted accounting principles, and assets are safeguarded against loss from unauthorized acquisition, use, or disposition. Compliance with laws and regulations–transactions are executed in accordance with laws and regulations that could have a direct and material effect on the financial statements. We are also responsible for testing compliance with selected provisions of laws and regulations that have a direct and material effect on the financial statements. In order to fulfill these responsibilities, we examined, on a test basis, evidence supporting the amounts and disclosures in the financial statements; assessed the accounting principles used and significant estimates made by management; evaluated the overall presentation of the financial statements and notes; read unaudited financial information for the Foundation for fiscal year 2005; obtained an understanding of the internal control related to financial reporting (including safeguarding assets) and compliance with laws and regulations; tested relevant internal control over financial reporting and compliance and evaluated the design and operating effectiveness of internal control; and tested compliance with selected provisions of the Congressional Award Act, as amended. We did not evaluate internal control relevant to operating objectives, such as controls relevant to ensuring efficient operations. We limited our internal control testing to controls over financial reporting and compliance. We did not test compliance with all laws and regulations applicable to the Foundation. We limited our tests of compliance to those provisions of laws and regulations that we deemed to have a direct and material effect on the financial statements for the fiscal year ended September 30, 2004. We caution that noncompliance may occur and not be detected by our tests and that such testing may not be sufficient for other purposes. We performed our work in accordance with U.S. generally accepted government auditing standards. 
In commenting on a draft of this report, the Foundation discussed past and ongoing efforts to obtain reauthorization for the Foundation, as well as its efforts to improve its financial condition through increases in revenues and reductions in expenses to support the growth in the Congressional Award program. The Foundation also discussed its efforts to improve its financial management. The Foundation noted that in fiscal year 2004, a bill to reauthorize the Foundation and provide it authority to receive direct federal appropriations of $750,000 of matched funds annually through 2009 was passed unanimously by the Senate. However, the bill did not pass the House of Representatives. Since then, the Foundation has continued to seek reauthorization of the program, excluding the provision for federal appropriations. The Foundation noted that legislation reauthorizing the Foundation again passed the Senate in 2005, and is currently being considered by the House of Representatives. The Foundation also noted its efforts to increase revenues. With newly appointed Foundation Board Members and fundraising consultants, the Foundation stated it had developed ways to recruit new donors and keep current and former donors informed and engaged. In order to raise awareness and funding for the program, the Foundation holds events with members of Congress in Washington, D.C. Several events were held in fiscal year 2004 and similar fund-raising events continued in fiscal year 2005. In addition, the Foundation noted that it continued to hold its annual Congressional Award Golf Classic during 2005 as another fundraising event. At the same time, the Foundation noted that it continues to keep its expenses down. The Foundation noted that in fiscal year 2004, it had reduced operating expenses to less than $595,000 (down from about $760,000 in fiscal year 2003) and reduced its staff by 50 percent. 
The Foundation stated that it currently has only five full-time employees and four unpaid interns to oversee program activity in all 50 states. With the program continuing to grow, the Foundation stated that it is using new methods to operate the program at very little cost. By utilizing the Web site and online tools, the Foundation stated that it is able to communicate with new and current participants, parents, volunteers, congressional offices, and donors electronically, which minimizes printing, postal, and travel expenses. The Foundation also emphasized its efforts to improve its financial management, noting that the newly elected Treasurer and Audit Committee Chair are working with the Foundation’s National Office staff to improve internal control over financial reporting and develop written fiscal policies and procedures for financial operations and reporting. The Foundation noted that the accountant it hired in 2005 will help ensure that accurate and timely accounting and reporting of financial information occurs.

[Statements of Financial Position as of September 30, 2004, and 2003: tabular amounts are not reproduced here. Line items include the Congressional Award Fellowship Trust (note 4); equipment, furniture, and fixtures, net; accounts payable (note 9); accrued payroll, related taxes, and leave; and unrestricted, temporarily restricted (note 6), and permanently restricted (note 4) net assets. The accompanying notes are an integral part of these financial statements.]
[Statements of Activities for the fiscal years ended September 30, 2004, and 2003: tabular amounts are not reproduced here. Line items include changes in unrestricted net assets (operating revenue and other support, including contributions, in-kind contributions (note 5), interest and dividends applied to current operations, and net assets released from restrictions (note 6)); operating expenses (note 11), including salaries, benefits, and payroll taxes, and program, promotion, and travel; unrealized and realized investment gains (losses) not applied to current operations; and changes in temporarily restricted net assets. The decrease in net assets was $167,956 in fiscal year 2004 and $5,990 in fiscal year 2003. The accompanying notes are an integral part of these financial statements.]

[Statements of Cash Flows for the fiscal years ended September 30, 2004, and 2003: tabular amounts are not reproduced here. The statements present cash flows from operating, investing, and financing activities, together with a reconciliation of the change in net assets to net cash provided (used) by operating activities. The accompanying notes are an integral part of these financial statements.]

For the Fiscal Years Ended September 30, 2004, and 2003

The Congressional Award Foundation (the Foundation) was formed in 1979 under Public Law 96-114 and is a private, nonprofit, tax-exempt organization under Section 501(c)(3) of the Internal Revenue Code established to promote initiative, achievement, and excellence among young people in the areas of public service, personal development, physical fitness, and expedition. New program participants totaled over 2,700 in fiscal year 2004. During fiscal year 2004, there were over 17,000 participants registered in the Foundation’s award program. Certificates and medals were awarded to 2,205 participants during fiscal year 2004. In October 1999, the President signed Public Law 106-63, section 1(d) of which reauthorized the Congressional Award Foundation through September 30, 2004. The financial statements are prepared on the accrual basis of accounting in conformity with U.S. generally accepted accounting principles applicable to not-for-profit organizations. The Foundation considers funds held in its checking account and all highly liquid investments with an original maturity of 3 months or less to be cash equivalents. Money market funds held in the Foundation’s Congressional Award Trust (the Trust) are not considered cash equivalents for financial statement reporting purposes. The Declaration of Trust of the Congressional Award Trust was amended, with the consent of the original declarants of the Trust and the Trustees, effective December 2003. Among other changes, the Amended Trust Declaration removes the restriction on the use of endowment donations. 
The Trustees may now apply any trust funds for the benefit of the Foundation. Unconditional promises to give are recorded as revenue when the promises are made. Contributions receivable to be collected within less than one year are measured at net realizable value. D. Equipment, Furniture and Fixtures, and Related Depreciation Equipment, furniture, and fixtures are stated at cost. Depreciation of furniture and equipment is computed using the straight-line method over estimated useful lives of 5 to 10 years. Leasehold improvements are amortized over the lesser of their estimated useful lives or the remaining life of the lease. Expenditures for major additions and betterments are capitalized; expenditures for maintenance and repairs are charged to expense when incurred. Upon retirement or disposal of assets, the cost and accumulated depreciation are eliminated from the accounts and the resulting gain or loss is included in revenue or expense, as appropriate. Investments consist of equity securities, money market funds, and a $50,000 certificate of deposit and are stated at market value. F. Classification of Net Assets The net assets of the Foundation are reported as follows: Unrestricted net assets represent the portion of expendable funds that are available for the general support of the Foundation. Temporarily restricted net assets represent amounts that are specifically restricted by donors or grantors for specific programs or future periods. Permanently restricted net assets result from donor-imposed restrictions stipulating that the resources donated are maintained permanently. Contribution revenue is recognized when received or promised and recorded as temporarily restricted if the funds are received with donor or grantor stipulations that limit the use of the donated assets to a particular purpose or for specific periods. 
When a stipulated time restriction ends or purpose of the restriction is met, temporarily restricted net assets are reclassified to unrestricted net assets and reported in the statement of activities as net assets released from restrictions. H. Functional Allocation of Expenses The costs of providing the various programs and other activities have been summarized on a functional basis as described in note 11. Accordingly, certain costs have been allocated among the programs and supporting services benefited. The preparation of financial statements in conformity with U.S. generally accepted accounting principles requires management to make estimates and assumptions that affect certain reported amounts and disclosures. Accordingly, actual results could differ from those estimates. Note 3. Contributions Receivable At September 30, 2004, and 2003, promises to give totaled $60,573 and $160,021, respectively, of which $0 and $160,000, respectively, were due within 1 year. All amounts have subsequently been collected. At September 30, 2004, and 2003, $31,626 and $195,798, respectively, were temporarily restricted by donors for future periods. For fiscal year 2003, the promises to give were a result of the “Charter for Youth” fundraising initiative. Charter for Youth benefactors are requested to contribute a minimum of $100,000 per year for 3 consecutive years for the direct support of The Congressional Award and its initiatives for participant recruitment and awardees recognition. Charter for Youth members have the opportunity to participate in Congressional Award events, and receive recognition as benefactors at the national and regional events and meetings. Note 4. Unrestricted and Permanently Restricted Net Assets The Congressional Award Fellowship Trust (the Trust Fund) was established in 1990 to benefit the charitable and educational purposes of the Foundation. 
The Trust Fund has received $264,457 of contributions since 1990, which were designated as permanently restricted by the donors when the donations were originally made. In accordance with the terms of the 1990 Trust Agreement (the Agreement), the Foundation was permitted to use all Trust Fund income for the benefit of the charitable and educational purposes of the Foundation. Trust Fund income represents the value of the Trust Fund’s assets (including interest and dividends earned and realized and unrealized gains and losses on Trust Fund investments) in excess of the aggregate amount received as endowment donations. Proceeds from investments can only be used in operations with approval of the Foundation’s Board. The agreement describes endowment donations as the aggregate fair market value (as of the contribution date) of all donations to the Trust Fund. As defined by the agreement, this represents the amount of the Trust Fund’s assets that the Foundation could not use or distribute. During the fiscal year ending September 30, 2004, the trust conditions changed. The Declaration of Trust of the Congressional Award Trust was amended, with the consent of the original declarants of the Trust and the Trustees, effective December 2003. Among other changes, the Amended Trust Declaration removes the permanent restriction on the use of endowment donations. The Trustees must approve any Trust Fund amounts for unrestricted use by the Foundation. Also, during the fiscal year ended September 30, 2004, the Trustees authorized and the Foundation’s Board approved the use of $34,915 of the Trust Fund to support 2004 operations. 
At September 30, 2004, and 2003, the Trust Fund’s investments at fair value consisted of the following: [Table of Trust Fund investments not reproduced.] Activity in the Trust Fund for the fiscal years ended September 30, 2004, and 2003 was as follows: [Table of Trust Fund activity not reproduced; line items include net realized gains (losses), investments transferred to current operations, investment earnings applied to current operations, net change in Trust Fund investments, and Trust Fund investments at the beginning and end of year.] The value of the Trust Fund at September 30, 2003, dropped below the permanently restricted balance by $33,991. Note 5. In-Kind Contributions During fiscal year 2004, the Foundation received in-kind (non-cash) contributions from donors, which are accounted for as contribution revenue and as current period operating expenses. The in-kind contributions received were for professional services relating to support of activities of the Foundation. The value of the in-kind contributions was $94,596 for fiscal year 2004 and $33,367 for fiscal year 2003. In 2004, legal activities included several one-time matters, including amendment of the Trust Agreement and securing a state ruling for Trust exemption. [Table of in-kind professional services (legal and Web-hosting) not reproduced.] In addition, Section 7(c) of Public Law 101-525, the Congressional Award Amendments of 1990, provided that "the Board may benefit from in-kind and indirect resources provided by the Offices of Members of Congress or the Congress." Resources so provided include use of office space, office furniture, and certain utilities. In addition, section 102 of the Congressional Award Act, as amended, provides that the United States Mint may charge the United States Mint Public Enterprise Fund for the cost of striking Congressional Award Medals. The costs of these resources cannot be readily determined and, thus, are not included in the financial statements. Note 7. 
Employee Retirement Plan For the benefit of its employees, the Foundation participates in a voluntary 403(b) tax-deferred annuity plan, which was activated on August 27, 1993. Under the plan, the Foundation may, but is not required to, make employer contributions to the plan. There was no contribution to the plan in 2004 or 2003. Note 8. Line of Credit The Foundation has a $100,000 revolving line of credit with its bank that bears interest at 6 percent per annum. The line of credit is partially secured by the Foundation’s investment in a $50,000 certificate of deposit held by the same bank. At September 30, 2004, and 2003, the outstanding balance on the line of credit was $100,000. Note 9. Accounts Payable The accounts payable balance of $135,503 at September 30, 2004, is comprised of $116,635 attributable to goods and services received in fiscal year 2002, and the remainder attributable to goods and services received in fiscal years 2003 and 2004. The accounts payable balance at September 30, 2003, was $150,343. Subsequent to the end of fiscal year 2004, approximately $39,000 in accounts payable was cancelled and converted to an “in-kind” donation. See subsequent events note 13. Note 10. Related Party Activities During fiscal year 2004, an ex-officio director of the Board provided pro bono legal services to the Foundation. The value of legal services has been included in the in-kind contributions and professional fees line items (see note 5). In addition, a director of the Board served as portfolio manager with the brokerage firm responsible for managing the Congressional Award Fellowship Trust account during fiscal years 2004 and 2003. During March 2004, the Foundation entered into an agreement with a professional fundraiser. Also in 2004, the spouse of this professional fundraiser was elected to the Board of Directors of the Foundation. 
The professional fundraiser was retained on a 10 percent commission basis. Expenses incurred by the Foundation during fiscal year 2004 for the related party’s services totaled $9,756. Note 11. Expenses by Functional Classification The Foundation has presented its operating expenses by natural classification in the accompanying Statements of Activities for the fiscal years ended September 30, 2004, and 2003. Presented below are the Foundation's expenses by functional classification for the fiscal years ended September 30, 2004, and 2003. [Table of expenses by functional classification not reproduced.] Note 12. The Foundation’s Ability to Continue as a Going Concern The Congressional Award Foundation is dependent on contributions to fund its operations and, to a far lesser extent, other revenues, interest, and dividends. The Foundation incurred decreases in net assets of $167,956 and $5,990 in fiscal years 2004 and 2003, respectively. As a result, the Foundation continues to experience difficulty in meeting its obligations. The Foundation has taken steps to substantially decrease administrative expenses and has implemented numerous initiatives to increase fundraising revenue. The Foundation’s ability to continue as a going concern is dependent on increasing revenues. Revenues have been affected by the fact that the Foundation has not been reauthorized by the Congress. The Foundation has taken all actions necessary to seek reauthorization of the program. Legislation has passed the Senate and is being considered by the House of Representatives. While the Foundation has taken steps to decrease its expenses, those steps may not be sufficient to enable it to continue operations. Unaudited financial data compiled by the Foundation as of September 30, 2005, showed that the Foundation’s financial condition has not improved. The continuing deterioration in the Foundation’s financial condition raises substantial doubt about its ability to continue as a going concern. 
During fiscal year 2005, the Board elected several new Members and the Foundation hired an accountant to focus on improving financial management. The newly elected Treasurer and Audit Committee Chair are working with the National Office staff to improve internal control over financial reporting and develop written fiscal policies and procedures for financial operations and reporting. The accountant is expected to provide accurate and timely accounting and reporting. To improve fundraising efforts, the Board created a Congressional Liaison Committee, Development Committee, and Program Committee during fiscal year 2005. The newly elected Development Chairperson is leading fundraising initiatives in the corporate community, including pursuing grant opportunities, and the Foundation continues to work with professional fundraisers to more actively involve congressional members. These events should generate funds from new donors and provide opportunities to maintain relations with current Foundation supporters. Note 13. Subsequent Events On July 14, 2005, the Senate passed S. 335 to reauthorize the Congressional Award Board, which terminated on October 1, 2004, until October 1, 2009. The bill was received in the House of Representatives and was referred to the Committee on Education and the Workforce on July 19, 2005. Subsequently, on September 22, 2005, H.R. 3867, which is identical to S. 335, was introduced in the House to reauthorize the Board and was also referred to the Committee on Education and the Workforce. Subsequent to the end of fiscal year 2004, the Foundation negotiated the cancellation of approximately $39,000 of its liabilities with vendors. The vendors offered these balances owed as “in-kind” contributions to the Foundation. On October 1, 2005, the Foundation appointed a new Treasurer and Audit Committee Chair. The new Treasurer is currently with the Willard Group and the new Audit Committee Chair is currently the Senior Director of Corporate Finance at McDonald’s. 
Both positions are voluntary.
This report presents our opinion on the financial statements of the Congressional Award Foundation for the fiscal years ended September 30, 2004, and 2003. These financial statements are the responsibility of the Congressional Award Foundation. This report also presents (1) our opinion on the effectiveness of the Foundation's related internal control as of September 30, 2004, and (2) our conclusion on the Foundation's compliance in fiscal year 2004 with selected provisions of laws and regulations we tested. We conducted our audit pursuant to section 107 of the Congressional Award Act, as amended (2 U.S.C. 807), and in accordance with U.S. generally accepted government auditing standards. We have audited the statements of financial position of the Congressional Award Foundation (the Foundation) as of September 30, 2004, and 2003, and the related statements of activities and statements of cash flows for the fiscal years then ended. We found (1) the financial statements are presented fairly, in all material respects, in conformity with U.S. generally accepted accounting principles, although substantial doubt exists about the Foundation's ability to continue as a going concern; (2) the Foundation did not have effective internal control over financial reporting (including safeguarding assets) and compliance with laws and regulations; and (3) a reportable noncompliance with one of the laws and regulations we tested.
Energy oversees a nationwide network of 40 contractor-operated industrial sites and research laboratories that have historically employed more than 600,000 workers in the production and testing of nuclear weapons. In implementing EEOICPA, the President acknowledged that it had been Energy’s past policy to encourage and assist its contractors in opposing workers’ claims for state workers’ compensation benefits based on illnesses said to be caused by exposure to toxic substances at Energy facilities. Under the new law, workers or their survivors could apply for assistance from Energy in pursuing state workers’ compensation benefits, and if they received a positive determination from Energy, the agency would direct its contractors to not contest the workers’ compensation claims or awards. Energy’s rules to implement the new program became effective in September 2002, and the agency began to process the applications it had been accepting since July 2001, when the law took effect. Energy’s claims process has several steps. First, claimants file applications and provide all available medical evidence. Energy then develops the claims by requesting records of employment, medical treatment, and exposure to toxic substances from the Energy facilities at which the workers were employed. If Energy determines that the worker was not employed by one of its facilities or did not have an illness that could be caused by exposure to toxic substances, the agency finds the claimant ineligible. For all others, once development is complete, a panel of three physicians reviews the case and decides whether exposure to a toxic substance during employment at an Energy facility was at least as likely as not to have caused, contributed to, or aggravated the claimed medical condition. The panel physicians are appointed by the National Institute for Occupational Safety and Health (NIOSH) but paid by Energy for this work. 
Claimants receiving positive determinations are advised that they may wish to file claims for state workers’ compensation benefits. Claimants found ineligible or receiving negative determinations may appeal to Energy’s Office of Hearings and Appeals. Each of the 50 states and the District of Columbia has its own workers’ compensation program to provide benefits to workers who are injured on the job or contract a work-related illness. Benefits include medical treatment and cash payments that partially replace lost wages. Collectively, these state programs paid more than $46 billion in cash and medical benefits in 2001. In general, employers finance workers’ compensation programs. Depending on state law, employers finance these programs through one of three methods: (1) they pay insurance premiums to a private insurance carrier, (2) they contribute to a state workers’ compensation fund, or (3) they set funds aside for this purpose as self- insurance. Although state workers’ compensation laws were enacted in part as an attempt to avoid litigation over workplace accidents, the workers’ compensation process is still generally adversarial, with employers and their insurers tending to contest aspects of claims that they consider not valid. State workers’ compensation programs vary as to the level of benefits, length of payments, and time limits for filing. For example, in 1999, the maximum weekly benefit for a total disability in New Mexico was less than $400, while in Iowa it was approximately $950. In addition, in Idaho, the weekly benefit for total disability would be reduced after 52 weeks, while in Iowa benefits would continue at the original rate for the duration of the disability. Further, in Tennessee, a claim must be filed within 1 year of the beginning of incapacity or death. 
In contrast, in Kentucky, a claim must be filed within 3 years of either the last exposure to most substances or the onset of disease symptoms, but within 20 years of exposure to radiation or asbestos. EEOICPA allows Energy, to the extent permitted by law, to direct its contractors to not contest the workers’ compensation claims filed by Subtitle D claimants who received a positive determination from a physician panel. In addition, the statute prohibits the inclusion of the costs of contesting such claims as allowable costs under its contracts with the contractors; however, Energy’s regulations allow the costs incurred as the result of a workers’ compensation award to be reimbursed in the manner permitted under the contracts. The Subtitle D program does not affect the normal operation of state workers’ compensation programs other than limiting the ability of Energy or its contractors to contest certain claims; Energy does not have authority to expand or contract the scope of any of these state programs. Thus, actions taken by Energy or its contractors will not make a worker eligible for compensation under a state workers’ compensation system if the worker is not otherwise eligible. As of December 31, 2003, Energy had completely processed about 6 percent of the more than 23,000 cases that had been filed, and the majority of all cases filed were associated with facilities in 9 states. Energy had begun processing nearly 35 percent of cases, but processing had not begun on nearly 60 percent of the cases. Assessment of Energy’s achievement of case processing goals is complicated by systems limitations. Further, these limitations make it difficult to assess the achievement of goals related to program objectives, such as the quality of the assistance given to claimants in filing for state workers’ compensation. During the first 2½ years of the program, ending December 31, 2003, Energy had fully processed about 6 percent of the more than 23,000 cases it received. 
The majority of these fully processed cases had been found ineligible, either because the worker was not employed at an eligible facility or because the worker did not have an illness that could be related to toxic exposure. Of the cases that had been fully processed, 150 cases—less than 1 percent of the more than 23,000 cases filed—had received a final determination from a physician panel. More than half of these determinations (87 cases) were positive. As of the end of calendar year 2003, Energy had not yet begun processing nearly 60 percent of the cases, and an additional 35 percent of cases were in various stages of processing. As shown in figure 2, the majority of the cases being processed were in the case development stage, where Energy requests information from the facility at which the claimant was employed. About 2 percent of the cases in process were ready for physician panel review, and an additional 3 percent were undergoing panel review. A majority of all cases were filed early during program implementation, but new cases continue to be filed. More than half of all cases were filed within the first year of the program, between July 2001 and June 2002. However, between July 2002 and December 31, 2003, Energy continued to receive an average of more than 500 cases per month. Energy officials report that they continue to receive approximately 100 new cases per week. While cases filed are associated with facilities in 43 states or territories, the majority of cases are associated with Energy facilities in 9 states, as shown in figure 3. Facilities in Colorado, Idaho, Iowa, Kentucky, New Mexico, Ohio, South Carolina, Tennessee, and Washington account for more than 75 percent of cases received by December 31, 2003. The largest group of cases is associated with facilities in Tennessee. Workers filed the majority of cases, and cancer is the most frequently reported illness. Workers filed more than 60 percent of cases, and survivors of deceased workers filed about 36 percent of cases. 
In 2 percent of the cases, a worker filed a claim that was subsequently taken up by a survivor. Cancer is the illness reported in nearly 60 percent of the cases. Diseases affecting the lungs accounted for an additional 15 percent of the cases. Specifically, chronic beryllium disease and/or beryllium sensitivity were reported in 7 percent of the cases, 8 percent reported asbestosis, and less than 1 percent claimed chronic silicosis. Insufficient strategic planning regarding system design, data collection, and tracking of outcomes has made it more difficult for Energy officials to manage some aspects of the program and for those with oversight responsibilities to determine whether Energy is meeting goals for processing claims. The data system Energy uses to aid in case management was developed by contractors without detailed specifications from Energy. Furthermore, the system was developed before Energy established its processing goals and did not collect sufficient information to track Energy’s progress in meeting these goals. While recent changes to the system have improved Energy’s ability to track certain information, these changes have resulted in some recent status data not being completely comparable with older status data. In addition, Energy will be unable to completely track the timeliness of its processing for approximately one-third of the cases that were being processed as of December 2003 because key data are not complete. For example, Energy established a goal of completing case development within 120 days of case assignment to a case manager. At least 70 percent of the cases for which case development was complete were missing dates corresponding to either the beginning or the end of the case development process—data that would allow Energy officials to compute the time elapsed during case development. Energy has not been sufficiently strategic in identifying and systematically collecting certain data that are useful for program management. 
For instance, Energy does not track the reasons why particular cases were found ineligible in a format that can be easily analyzed. Systematic tracking of the reasons for ineligibility would make it possible to quickly identify cases affected by policy changes. For example, when a facility in West Virginia was determined to be only a Department of Energy facility and not also an atomic weapons employer, it was necessary for Energy to identify which cases had been ruled ineligible because of employment at the West Virginia facility. While some ineligibility information may be stored in case narratives, this information is not available in a format that would allow the agency to quickly identify cases declared ineligible for similar reasons. Ascertaining the reason for ineligibility would at best require review of individual case narratives, and indeed, Energy officials report that it is sometimes necessary to refer back to application forms to find the reasons. As a result, if additional changes are made that change eligibility criteria, Energy may have to expend considerable time and resources determining which cases are affected by the change in policy. In addition, because it did not adequately plan for the various uses of its data, Energy lacks some of the data needed to analyze how cases will fare when they enter the state workers’ compensation systems. Specifically, it is difficult for Energy to predict whether willing payers of workers’ compensation benefits will exist using case management system data because the information about the specific employer for whom the claimant worked is not collected in a format that can be systematically analyzed. In addition, basic demographic data such as the age of employees is not necessarily accurate due to insufficient edit controls—for example, error checking that would prevent employees’ dates of birth from being entered if the date was in the future or recent past. 
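An edit control of the kind described here is, in essence, a simple validation rule applied at data entry. The sketch below illustrates the idea; the function name and the 16-year working-age floor are assumptions chosen for illustration, not features of Energy's actual system.

```python
from datetime import date

def plausible_birth_date(dob, today=None):
    """Illustrative edit control: reject a date of birth that is in the
    future or in the recent past (too recent for the person to have been
    a worker). The 16-year floor is an assumed threshold."""
    today = today or date.today()
    if dob >= today:                          # a future date is impossible
        return False
    age_years = (today - dob).days / 365.25   # approximate age in years
    return age_years >= 16                    # recent past: too young to have worked

# Checked against the program's fiscal-year-end date used in this report
print(plausible_birth_date(date(2030, 1, 1), today=date(2004, 9, 30)))   # False
print(plausible_birth_date(date(1950, 6, 15), today=date(2004, 9, 30)))  # True
```

With controls like this in place at entry time, downstream analyses that depend on age, such as estimating Medicare eligibility, would not need to filter out impossible values first.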
Reliable age data would allow Energy to estimate the proportion of workers who are likely to have health insurance such as Medicare. Insufficient tracking of program outcomes hampers Energy’s ability to determine how well it is providing assistance to claimants in filing claims for state workers’ compensation benefits. To date, Energy has not systematically tracked whether claimants who receive positive physician panel determinations subsequently file workers’ compensation claims, nor whether claims that are filed are approved or paid, although agency officials recently indicated that they now plan to develop this capability. Furthermore, unless Energy’s Office of Hearings and Appeals grants an appeal of a negative determination, which is returned to Energy for further processing, Energy does not track whether a claimant files an appeal. Lack of information about the number of appeals and their outcomes may limit Energy’s ability to assess the quality and consistency of its decision making. Energy was slow in implementing its initial case processing operation, but it is now processing enough cases so that there is a backlog of cases awaiting physician panel review. With panels operating at full capacity, the small pool of physicians qualified to serve on the panels may ultimately limit the agency’s ability to produce more timely determinations. Claimants have experienced lengthy delays in receiving the determinations they need to file workers’ compensation claims and have received little information about claims status or about what they can expect from this process. Energy has taken some steps intended to reduce the backlog of cases. 
Energy’s case development process has not always produced enough cases to ensure that the physician panels were functioning at full capacity, but the agency is now processing enough cases to produce a backlog of cases waiting for panel review. Energy officials established a goal of completing the development of 100 cases per week by August 2003 to keep the panels fully engaged. However, the agency did not achieve this goal until several months later. Energy was slow to implement its case development operation. Initially, agency officials did not have a plan to hire a specific number of employees for case development, but they expected to secure additional staff as they were needed. When Energy first began developing cases, in the fall of 2002, the case development process had about 8 case managers. With modest staffing increases, the program quickly outgrew the office space used for this function. Though Energy officials acknowledged the need for more personnel by spring 2003, they delayed hiring until additional space could be secured in August. By November 2003, Energy had more than tripled the number of case managers developing cases, and since that month the agency has continued to process an average of more than 100 cases per week to have them ready for physician panel review. Energy transferred nearly $10 million in fiscal year 2003 funds into this program from other Energy accounts. Further, after completing a comprehensive review of its Subtitle D program, the agency developed a plan that identifies strategies for further accelerating its case processing. This plan sets a goal of eliminating the entire case backlog by the end of calendar year 2006 and depends in part on shifting an additional $33 million into the program in fiscal year 2004, to quadruple the case-processing operation. With additional resources, Energy plans to complete the development of all pending cases as quickly as possible and have them ready for the physician panels. 
However, this could create a larger backlog of cases awaiting review by physician panels. Because a majority of the claims filed so far are from workers whose medical conditions are likely to change over time, building this backlog could further slow the decision process by making it necessary to update medical records before panel review. Even though additional resources have allowed Energy to speed initial case development, the limited pool of qualified physicians for panels may limit Energy’s capacity to decide cases more quickly. Under the rules Energy originally established for this program, which required that each case be reviewed by a panel of 3 physicians, and given the 130 physicians currently available, it could have taken more than 13 years to process all cases pending as of December 31, 2003, without consideration of the hundreds of new cases the agency is receiving each month. However, in an effort to make the panel process more efficient, Energy published new rules on March 24, 2004, that redefined a physician panel as one or more physicians appointed to evaluate these cases and changed the timeframes for completing their review. Under the new rule, a panel composed of a single physician will initially review each case, and if a positive determination is issued, no further review is necessary. Negative determinations made by a single-physician panel will require review by one or more additional single-physician panels. In addition to revising its rules, the agency began holding a full-time physician panel in Washington, D.C., in January 2004, staffed by physicians who are willing to serve full-time for a 2- or 3-week period. Energy and NIOSH officials have taken steps to expand the number of physicians who would qualify to serve on the panels and to recruit more physicians, including some willing to work full-time. 
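The 13-year estimate can be sanity-checked with a rough back-of-the-envelope calculation. The report supplies only the inputs (about 130 physicians, 3-physician panels, roughly 21,600 unresolved cases) and the result; the per-panel throughput below is an assumed figure chosen to show how those inputs combine, not a number from the report.

```python
# Rough capacity check under the program's original 3-physician-panel rule.
pending_cases = 21_600         # roughly 94 percent of the >23,000 cases filed
physicians = 130
panel_size = 3                 # original rule: each case reviewed by 3 physicians
concurrent_panels = physicians // panel_size      # 43 panels at a time
cases_per_panel_per_year = 38  # assumed throughput for part-time panels
years = pending_cases / (concurrent_panels * cases_per_panel_per_year)
print(round(years, 1))         # about 13.2 years

# The March 2004 rule change (single-physician initial review) roughly
# triples panel capacity under the same assumptions.
print(round(years / panel_size, 1))   # about 4.4 years
```

Any such estimate is sensitive to the assumed per-panel throughput, which the report does not state, and it ignores the hundreds of new cases arriving each month.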
While Energy has made several requests that NIOSH appoint additional physicians to staff the panels, such as requesting 500 physicians in June 2003, NIOSH officials have indicated that the pool of physicians with the appropriate credentials and experience is limited. The criteria NIOSH originally used to evaluate qualifications for appointing physicians to these panels included: (1) board certification in a primary discipline; (2) knowledge of occupational medicine; (3) minimum of 5 years of relevant clinical practice following residency; and (4) reputation for good medical judgment, impartiality, and efficiency. NIOSH recently modified these qualifications, primarily to reduce the amount of required clinical experience so that physicians with experience in relevant clinical or public health practice or research, academic, consulting, or private sector work can now qualify to serve on the panels. NIOSH has revised its recruiting materials to reflect this change and to point out that Energy is also interested in physicians willing to serve on panels full-time. However, a NIOSH official said that he was uncertain about the effect of the change in qualifications on the number of available physicians. In addition, the official indicated that only a handful of physicians would likely be interested in serving full-time on the panels. Energy officials have also explored additional sources from which NIOSH might recruit qualified physicians, but they have expressed concerns that the current statutory cap on the rate of pay for panel physicians may limit the willingness of physicians from these sources to serve on the panels. For example, Energy officials have suggested that physicians in the military services might be used on a part-time basis, but the rate of pay for their military work exceeds the current cap. Similarly, physicians from the Public Health Service could serve on temporary full-time details as panel physicians. 
To elevate the rate of pay for panel physicians to a level that is consistent with the rate physicians from these sources normally receive, Energy officials recently submitted to the Congress a legislative proposal to eliminate the current cap on the rate of pay and also expand Energy’s hiring authority. Panel physicians have also suggested methods to Energy for improving the efficiency of the panels. For example, some physicians have said that more complete profiles of the types and locations of specific toxic substances at each facility would speed their ability to decide cases. While Energy officials reported that they have completed facility overviews for most of the major sites, specific site reference data are available for only a few sites. Energy officials told us that, in their view, the available information is sufficient for decision making by the panels. However, based on feedback from the physicians, Energy officials are exploring whether developing additional site information would be cost beneficial. Energy has not always provided claimants with complete and timely information about what they could expect from filing under this program. Energy officials concede that claimants who filed in the early days of the program may not have been provided enough information to understand the benefits they were filing for. As a consequence, some claimants who filed under both Subtitle B and Subtitle D early in the program later withdrew their claims under Subtitle D because they had intended to file only for Subtitle B benefits or because they had not understood that they would still have to file for state workers’ compensation benefits after receiving a positive determination from a physician panel. After the final regulations were published in August 2002, Energy officials said that claimants had a better understanding of the benefits for which they were applying. 
Energy has not kept claimants sufficiently informed about the status of their claims under Subtitle D. Until recently, Energy’s policy was to provide no written communication about claims status between the acknowledgment letters it sent shortly after receiving applications and the point at which it began to process claims. Since nearly half of the claims filed in the first year of the program remained unprocessed as of December 31, 2003, these claimants would have received no information about the status of their claims for more than 1 year. Energy recently decided to change this policy and provide letters at 6-month intervals to all claimants with pending claims. Although the first of these standardized letters sent to claimants in October 2003 did not provide information about individual claims status, it did inform claimants about a new service on the program’s redesigned Web site through which claimants can check on the status of their claim. However, this new capability does not provide claimants with information about the timeframes during which their claims are likely to be processed, and claimants would need to check back periodically to determine whether their claim’s status has changed. In addition, claimants may not receive sufficient information about what they are likely to encounter when they file for state workers’ compensation benefits. For example, Energy’s letter to claimants transmitting a positive determination from a physician panel does not always provide enough information about how they would go about filing for state workers’ compensation benefits. A contractor in Tennessee reported that a worker was directed by Energy’s letter received in September 2003 to file a claim with the state office in Nashville when Tennessee’s rules require that the claim be filed with the employer. 
The contractor reported the problem to Energy in the same month, but Energy letters sent to Tennessee claimants in October and December 2003 continued to direct claimants to the state office. Finally, claimants are not informed as to whether there is likely to be a willing payer of workers’ compensation benefits and what this means for the processing of that claim. Specifically, advocates for claimants have indicated that claimants may be unprepared for the adversarial nature of the workers’ compensation process when an insurer or state fund contests the claim. Energy officials recently indicated that they plan to test initiatives to improve communication with claimants. Specifically, they plan to conduct a test at one Resource Center that would provide claimants with additional information about the workers’ compensation process and advice on how to proceed after receiving a positive physician panel determination. In addition, they plan to begin contacting individuals with pending claims this summer to provide information on the status of their claims. Our analysis shows that a majority of the cases associated with Energy facilities in the 9 states that account for more than three-quarters of all Subtitle D cases filed are not likely to be contested. However, the remaining 20 percent of cases lack willing payers and are likely to be contested. These percentages provide an order-of-magnitude estimate of the extent to which claimants will have willing payers and are not a prediction of actual benefit outcomes for claimants. The workers’ compensation claims for the majority of cases associated with major Energy facilities in 9 states are likely to have no challenges to their claims for state workers’ compensation benefits. Specifically, based on analysis of workers’ compensation programs and the different types of workers’ compensation coverage used by the major contractors, it appears that slightly more than half of the cases will potentially have a willing payer. 
In these cases, self-insured contractors will not contest the claims for benefits as ordered by Energy. Another 25 percent of the cases, while not technically having a willing payer, have workers’ compensation coverage provided by an insurer that has stated that it will not contest these claims and is currently processing several workers’ compensation claims without contesting them. The remaining 20 percent of cases in the 9 states we analyzed are likely to be contested. Because of data limitations, these percentages provide an order-of-magnitude estimate of the extent to which claimants will have willing payers. The estimates are not a prediction of actual benefit outcomes for claimants. As shown in table 1, the contractors for four major facilities in these states are self-insured and will adhere to Energy’s direction not to contest claims that receive a positive physician panel determination. In such situations where there is a willing payer, the contractor’s payment of compensation, consistent with Energy’s order not to contest a claim, could result in payment of a claim that might otherwise have been denied, for reasons such as failure to file a claim within a specified period of time. Similarly, the commercial insurer’s informal agreement with the contractors at the two facilities that account for 25 percent of the cases to pay workers’ compensation claims makes payment more likely, despite potential grounds to contest under state law. However, since this insurer is not bound by Energy’s orders and it does not have a formal agreement with either Energy or the contractors to not contest these claims, there is nothing to guarantee that the insurer will continue to process claims in this manner. About 20 percent of cases in the 9 states we analyzed are likely to be contested. 
Therefore, in some instances, these cases may be less likely to receive compensation than a comparable case for which there is a willing payer, unless the claimant is able to overcome challenges to the claim. In addition, contested cases can take longer to be resolved. For example, one claimant whose claim is being contested by an insurer was told by her attorney that because of pretrial motions filed by the opposing attorney, it would be 2 years before her case was heard on its merits. Specifically, the cases that lack willing payers involve contractors that (1) have a commercial insurance policy, (2) use a state fund to pay workers’ compensation claims, or (3) do not have a current contract with Energy. In each of these situations, Energy maintains that its orders to contractors would have a limited effect. For instance, an Ohio Bureau of Workers’ Compensation official said that the state would not automatically approve a case with a positive physician panel determination, but would evaluate each workers’ compensation case carefully to ensure that it was valid and thereby protect its state fund. Furthermore, although the contractor in Colorado with a commercial policy attempted to enter into agreements with prior contractors and their insurers to not contest claims, the parties have not yet agreed and several workers’ compensation claims filed with the state program are currently being contested. These estimates could change as better data become available or as circumstances change, such as new contractors taking over at individual facilities. For example, the contractor currently performing environmental cleanup at the Paducah Gaseous Diffusion Plant will not re-compete for this work when its contract ends on September 30, 2004. 
Energy is soliciting proposals for a new contract to continue the cleanup work and has indicated that the new contractors will not be required to take on the responsibility for the workers’ compensation claims filed by employees of former contractors. While Energy has proposed that the current cleanup contractor continue to handle the claims of its employees and those of prior contractors under another of its contracts with the agency, it is unclear at this point whether the current contractor will be able to arrange for continuing coverage of these claims without securing workers’ compensation coverage through commercial insurance. Unless the current contractor can continue to self-insure its workers’ compensation coverage for these claims, the Paducah cases shown in table 1 would have to be moved to the category in which contests are likely. As a result of this single change in contractors, the proportion of cases for which contests are likely could increase from 20 to 33 percent. In contrast to Subtitle B provisions that provide for a uniform federal benefit that is not affected by the degree of disability, various factors may affect whether a Subtitle D claimant is paid under the state workers’ compensation program or how much compensation will be paid. Beyond the differences in the state programs that may result in varying amounts and length of payments, these factors include the demonstration of a loss resulting from the illness and contractors’ uncertainty on how to compute compensation. Even with a positive determination from a physician panel and a willing payer, claimants who cannot demonstrate a loss, such as loss of wages or unreimbursed medical expenses, may not qualify for compensation. On the other hand, claimants with positive determinations but not a willing payer may still qualify for compensation under the state program if they show a loss and can overcome all challenges to the claim raised by the employer or the insurer. 
Contractors’ uncertainty about how to compute compensation may also cause variation in whether or how much a claimant will receive in compensation. While contractors with self-insurance told us that they plan to comply with Energy’s directives to not contest cases with positive determinations, some contractors were unclear about how to actually determine the amount of compensation that a claimant will receive. For example, one contractor raised a concern that no guidance exists to inform contractors about whether they can negotiate the degree of disability, a factor that could affect the amount of the workers’ compensation benefit. Other contractors will likely experience similar situations, as Energy has not issued written guidance on how to consistently compute compensation amounts. While not directly affecting compensation amounts, a related issue involves how contractors will be reimbursed for claims they pay. Energy uses several different types of contracts to carry out its mission, such as operations or cleanup, and these different types of contracts affect how workers’ compensation claims will be paid. For example, a contractor responsible for managing and operating an Energy facility was told to pay the workers’ compensation claims from its current operating budget. The contractor said that this procedure may compromise its ability to conduct its primary responsibilities. On the other hand, a contractor cleaning up an Energy facility under a cost reimbursement contract was told by Energy officials that its workers’ compensation claims would be reimbursed and, therefore, paying claims would not affect its ability to perform cleanup of the site. Various options are available to improve payment outcomes for the cases that receive a positive determination from Energy, but lack willing payers under the current program. 
If it chooses to change the current program, Congress would need to examine these options in terms of several issues, including the source, method, and amount of the federal funding required to pay benefits; the length of time needed to implement changes; the criteria for determining who is eligible; and the equitable treatment of claimants. In particular, the cost implications of these options for the federal government should be carefully considered in the context of the current and projected federal fiscal environment. We identified four possible options for improving the likelihood of willing payers, some of which have been offered in proposed legislation. While not exhaustive, the options range from adding a federal benefit to the existing program for cases that lack a willing payer to addressing the willing payer issue as part of designing a new program that would allow policymakers to decide issues such as the eligibility criteria and the type and amount of benefits without being encumbered by existing program structures. A key difference among the options is the type of benefit that would be provided.

Option 1—State workers’ compensation with federal backup. This option would retain the state workers’ compensation structure as under the current Subtitle D program but add a federal benefit for cases that receive a positive physician panel determination but lack a willing payer of state workers’ compensation benefits. For example, claims involving employees of current contractors that self-insure for workers’ compensation coverage would continue to be processed through the state programs. However, claims without willing payers, such as those involving contractors that use commercial insurers or state funds likely to contest workers’ compensation claims, could be paid a federal benefit that approximates the amount that would have been received under the relevant state program.

Option 2—Federal workers’ compensation model. 
This option would move the administration of the Subtitle D benefit from the state programs entirely to the federal arena, but would retain the workers’ compensation concept for providing partial replacement of lost wages as well as medical benefits. For example, claims with positive physician panel determinations could be evaluated under the eligibility criteria of the Federal Employees’ Compensation Act and, if found eligible, could be paid benefits consistent with the criteria of that program.

Option 3—Expanded Subtitle B program that does not use a workers’ compensation model. Under this option, the current Subtitle B program would be expanded to include the other illnesses resulting from radiation and toxic exposures that are currently considered under the Subtitle D program. The Subtitle D program would be eliminated as a separate program and, if found eligible, claimants would receive a lump-sum payment and coverage of future medical expenses related to the workers’ illnesses, assuming they had not already received benefits under Subtitle B. The Department of Labor would need to expand its regulations to specify which illnesses would be covered and the criteria for establishing eligibility for each of these illnesses. In addition, since the current programs have differing standards for determining whether the worker’s illness was related to his employment, it would have to be decided which standard would be used for the new category of illnesses.

Option 4—New federal program that uses a different type of benefit structure. This option would address the willing payer issue as part of developing a new program that involves moving away from the workers’ compensation and Subtitle B structures and establishing a new federal benefit administered by a structure that conforms to the type of benefit and its eligibility criteria. This option would provide an opportunity to consider anew the purpose of the Subtitle D provisions. 
As a starting point, policymakers could consider different existing models such as the Radiation Exposure Compensation Act, designed to provide partial restitution to individuals whose health was put at risk because of their exposure even when their illnesses do not result in ongoing disability. But they could also choose to build an entirely new program that is not based on any existing model. In deciding whether and how to change the Subtitle D program to ensure a source of benefit payments for claims that would be found eligible if they had a willing payer, policymakers will need to consider the trade-offs involved. Table 2 arrays the relevant issues to provide a framework for evaluating the range of options in a logical sequence. We constructed the sequence of issues in this framework with the purpose and type of benefit as the focal point for the evaluation, and with consideration of the other issues flowing from that first decision. For example, decisions about eligibility criteria would need to consider issues relating to within-state and across-state equity for Subtitle D claimants. The framework would also provide for decisions on issues such as the method of federal funding—trust fund or increased appropriations—and the appropriate federal agency to administer the benefit. For each of the options, the type of benefit would suggest which agency should be chosen to administer it; the choice would also depend, in part, on an agency’s capacity to administer a benefit program. In examining these issues, the effects on federal costs would have to be carefully considered. Ultimately, policymakers will need to weigh the relative importance of these issues in deciding whether and how to proceed. In evaluating how the purpose and type of benefit now available under Subtitle D could be changed, policymakers would first need to focus on the goals they wish to achieve in providing compensation to this group of individuals. 
If the goal is to compensate only those individuals who can demonstrate lost wages because of their illnesses, a recurring cash benefit in an amount that relates to former earnings might be in order, and a workers’ compensation option, either state benefits with a federal backup or a federal workers’ compensation benefit, would promote this purpose. If, on the other hand, the goal is to compensate claimants for all cases in which workers were disabled because of their employment—even when workers continue to work and have not lost wages—the option to expand Subtitle B would allow a benefit such as a flat payment amount not tied to former earnings. For consideration of a new federal program option, it might also be useful to consider other federal programs dealing with the consequences of exposure to radiation as a starting point. For example, the Radiation Exposure Compensation Act was designed to provide partial restitution to individuals whose health was put at risk because of their exposure. Similar to Subtitle B, the act created a federal trust fund, which provides for payments to individuals who can establish that they have certain diseases and that they were exposed to radiation at certain locations and at specified times. However, this payment is not dependent on demonstrating ongoing disability or actual losses resulting from the disease. The options could also have different effects with respect to eligibility criteria and the equity of benefit outcomes for current Subtitle D claimants based on these criteria. By equity of outcomes, we mean that claimants with similar illnesses and circumstances receive similar benefit outcomes. The current program may not provide equity for all Subtitle D claimants within a state because a claim that has a willing payer could receive a different outcome than a similar claim that does not have a willing payer, but at least three of the options could provide within-state equity. 
With respect to across-state equity, the current program and the option to provide a federal backup to the state workers’ compensation programs would not achieve equity for Subtitle D claimants in different states. In contrast, the option based on a federal workers’ compensation model as well as the expanded Subtitle B option would be more successful in achieving across-state equity. Regardless of the option, changes made to Subtitle D could also potentially result in differing treatment of claims decided before and after the implementation of the change. In addition, changing the program to remove the assistance in filing workers’ compensation claims may be seen as depriving a claimant of an existing right. Further, any changes could also have implications beyond EEOICPA, to the extent that the changes to Subtitle D could establish precedents for federal compensation to private sector employees in other industries who were made ill by their employment. Effects on federal costs would depend on the generosity of the benefit in the option chosen and the procedures established for processing claims for benefits. Under the current program, workers’ compensation benefits that are paid without contest will come from contract dollars that ultimately come from federal sources—there is no specific federal appropriation for this purpose. Because all of the options are designed to improve the likelihood of payment for claimants who meet all other criteria, it is likely that federal costs would be higher for all options than under the current program. Specifically, federal costs would increase for the option to provide a federal backup to the state workers’ compensation program because it would ensure payment at rates similar to the state programs for the significant minority of claimants whose claims are likely to be contested and possibly denied under the state programs. 
Further, the federal costs of adopting a federal workers’ compensation option would be higher than under the first option because all claimants—those who would have been paid under the state programs as well as those whose claims would have been contested under the state programs—would be eligible for a federal benefit similar to the benefit for federal employees. In general, federal workers’ compensation benefits are more generous than state benefits because they replace a higher proportion of the worker’s salary than many states do and the federal maximum rate of wage replacement is higher than all the state maximum rates. For either of the two options mentioned earlier, a decision to offset the Subtitle D benefits against the Subtitle B benefit could lessen the effect of the increased costs, given reports by Energy officials that more than 90 percent of Subtitle D claimants have also filed for Subtitle B benefits. However, the degree of this effect is difficult to determine because many of the claimants who have filed under both programs may be denied Subtitle B benefits. The key distinction would be whether workers who sustained certain types of illnesses based on their Energy employment should be compensated under both programs as opposed to having recourse under only one or the other. If they were able to seek compensation from only one program, the claimant’s ability to elect one or the other based on individual needs should be considered. The effects on federal cost of an expanded Subtitle B option or a new federal program option are more difficult to assess. In many cases, the Subtitle B benefit of up to $150,000 could exceed the cost of the lifetime benefit for some claimants under either of the workers’ compensation options, resulting in higher federal costs. 
However, the extent of these higher costs could be mitigated by the fact that many of the claimants who would have filed for both benefits in the current system would be eligible for only one cash benefit regardless of the number or type of illnesses. The degree of cost or savings would be difficult to assess without additional information on the specific claims outcomes in the current Subtitle B program. The effects on federal costs for the new federal program option would depend on the type and generosity of the benefit selected. More than 3 years after the passage of EEOICPA, few claimants have received state workers’ compensation benefits as a result of assistance provided by Energy. While Energy has eliminated the bottleneck in its claims process that it encountered early in program implementation—the initial development of cases—in doing so it has created a growing backlog of cases awaiting review by a physician panel. In the absence of changes that would expedite this review, many claimants will likely wait years to receive the determination they need to pursue a state workers’ compensation claim. In the interim, their medical conditions may worsen, and claimants may even die before they receive consideration by a state program. While Energy has taken some steps designed to reduce the backlog of cases for the physician panels, it is too early to assess whether these initiatives will be sufficient to resolve this growing backlog. Whether they ultimately receive positive or negative determinations, claimants deserve complete and timely information about what they could achieve in filing under this program, what the claims process entails, the status of their claims, and what they are likely to encounter when they file for state workers’ compensation benefits. Without complete information, claimants are unable to weigh the benefits and risks of pursuing the process to its conclusion. 
Indeed, given that the majority of claimants have also filed for benefits under Subtitle B and many may have already received decisions on those claims, some claimants may not be aware that they still have a Subtitle D claim pending. Further, given the limited communication from Energy since their claims were filed, some claimants may be unaware that resources are being expended developing their claims. Finally, because Energy does not currently communicate to claimants what they are likely to encounter when they file for state benefits, claimants may be unprepared for what may be a difficult and protracted pursuit of state benefits. Energy may be hindered in its ability to improve its claims process and evaluate the quality of the assistance it is providing to claimants in this program using the data it currently collects. Energy may also be unprepared to provide the analysis needed to inform policymakers as they consider whether changes to the program are needed because it does not systematically track the outcomes of cases that are appealed or the outcomes of claims that are filed with state workers’ compensation programs. Finally, Energy will be limited in its ability to provide complete and accurate information to claimants regarding the status and outcomes of their claims without good data. Even if all claimants were to receive timely physician panel determinations stating that the workers’ illnesses had likely been caused by their employment with Energy, some may never receive state workers’ compensation benefits. The lack of a willing payer may delay the receipt of benefits for some claimants as insurers and state fund officials challenge various aspects of the claim. For other claimants, the challenges raised in the absence of willing payers may ultimately result in denial of benefits based on issues such as not filing the claim within the time limits set by the state program—issues that would not be contested by willing payers. 
This disparity in potential outcomes for Subtitle D claimants may warrant the consideration of changes to the current program to ensure that eligible claims are paid without undue delay and that there is a willing payer for all claimants who would otherwise be eligible. To improve Energy’s effectiveness in assisting Subtitle D claimants in obtaining compensation for occupational illnesses, we recommend that the Secretary of Energy:

In order to reduce the backlog of cases waiting for review by a physician panel, take additional steps to expedite the processing of claims through its physician panels and focus its efforts on initiatives designed to allow the panels to function more efficiently. For example, Energy should pursue the completion of site reference data to provide physicians with more complete information about the type and degree of toxic exposures that may have occurred at each Energy facility.

In order to provide claimants with more complete information, expand and expedite its plans to enhance communications with claimants. These plans should focus on providing more complete information describing the assistance Energy will provide to claimants, the timeframes for claims processing, the status of claims, and the process that claimants will encounter when they file claims for state workers’ compensation benefits.

In order to facilitate program management and oversight, develop cost-effective methods for improving the quality of the data in its case management system and increasing its capabilities to aggregate these data to address program issues. In addition, Energy should develop and implement plans to track the outcomes of cases that progress through the state workers’ compensation systems and use this information to evaluate the quality of the assistance it provides to claimants in the Subtitle D program. Such data could also be used by policymakers to assess the extent to which this program is achieving its goals and purposes. 
In order to reduce disparities in potential outcomes between claimants with and without willing payers, consider developing a legislative proposal for modifying the EEOICPA statute to address the willing payer issue. When assessing different options, several issues such as those discussed in this report should be considered, including the purpose and type of benefit, eligibility criteria and equity of benefit outcomes, and effects on federal costs.

We provided a draft of this report to Energy for comment. In commenting on the draft report, Energy indicated that the agency had already incorporated several of our recommendations and would aggressively tackle the remainder. However, Energy did not specifically comment on each recommendation. In addition, the comments highlighted several initiatives either planned or underway that are designed to improve the Subtitle D program. Several of these initiatives address issues raised in our report for which we recommended changes. In particular, Energy agreed with our findings regarding problems with communications with Subtitle D claimants and outlined the steps the agency has planned to correct these problems. Further, Energy agreed with our finding that there was not a system in place to track the outcomes of workers’ compensation claims filed with the state programs and indicated that the agency had recently initiated such a system, as we recommended. Finally, the comments provided more recent information about the agency’s progress in processing Subtitle D claims and reiterated the agency’s plan for eliminating the backlog of claims by 2006. Energy’s comments are provided in appendix II. Energy also provided technical comments, which we have incorporated as appropriate. Copies of this report are being sent to the Secretary of Energy, appropriate congressional committees, and other interested parties. The report will also be made available at no charge on GAO’s Web site at http://www.gao.gov. 
If you have any questions about this report, please contact me at (202) 512-7215. Other contacts and staff acknowledgments are listed in appendix III. To determine the number of cases filed under Subtitle D, the status of these cases, and the characteristics of claimants, we used administrative data from Energy’s Case Management System (CMS). Energy does not publish standardized data extracts from this system, so we requested that Energy query the system to provide customized extracts for our analysis. The first extract contained data on the status and characteristics of cases filed between July 2001 and June 30, 2003. The second extract was obtained as an update and contained data related to cases filed between July 2001 and December 31, 2003. Because multiple claims can be associated with a single case, Energy’s system contains data at two levels—the case level and the claim level. For example, if both the widow and child of a deceased Energy employee file claims, both claims will be associated with a single case, which is linked to the Energy employee. At the case level, the system contains information about the Energy employee, such as date of birth and date of death (if applicable), the facilities at which the employee worked and the dates of employment, as well as the status of the case as it moves through the development process in preparation for physician panel review. At the claim level, CMS contains information related to the individual claimants, such as the date the claim was signed and the claimant’s relationship to the Energy employee. The extracts provided by Energy contain case-level data, for the most part. Data elements that are collected at the claim level were reported at the case level in our files. For example, the system includes a claim signature date for each claim. In our case-level file, Energy provided the earliest signature date, so that we would know when the first claim was signed. Illness data are also collected at the claim level. 
In our case-level file, Energy provided all the illnesses claimed by all claimants. We then aggregated the illness data to determine which illnesses were claimed on each case. We interviewed key Energy officials and contractors and reviewed available system documentation, such as design specifications and system update documents. Once the first data extract was received from Energy, we tested the data set to determine that it was sufficiently reliable for our purposes. Specifically, we performed electronic testing to identify missing data or logical inconsistencies and reviewed determination letters for cases that had physician panel determinations. We then computed descriptive statistics, including frequencies and cross-tabulations, to determine the number and status of cases received as of June 30, 2003. When we received the second data extract, containing data through the end of calendar year 2003, we matched it to the first one to determine how many additional cases had been received between July 1, 2003, and December 31, 2003, and to determine if any cases were missing. We determined that some cases (less than 2 percent) that had been in the first extract were missing from the second file. We consulted with Energy contractors and determined that one case had been accidentally omitted from the query results and that the remaining cases had been dropped from CMS because they were duplicate cases or had been determined to be non-Subtitle D cases. This is possible because the Resource Centers use the CMS system to document incoming cases for both Subtitle B and Subtitle D. Energy contractors provided a replacement file that included the case that had been inadvertently dropped. They also reported that there were still a small number of duplicate cases identified in CMS, and hence in our data extract, but that Energy had not yet decided which cases to retain. 
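The extract-matching and illness-aggregation steps described above can be sketched as follows. The case identifiers, field names, and sample records are purely illustrative, not Energy's actual CMS schema.

```python
# Illustrative sketch: matching two data extracts and rolling claim-level
# illness data up to the case level. All identifiers are hypothetical.

def compare_extracts(first_ids, second_ids):
    """Return (added, missing): cases new to the second extract, and cases
    present in the first extract but absent from the second (candidate
    duplicates or non-Subtitle D cases)."""
    return second_ids - first_ids, first_ids - second_ids

def aggregate_illnesses(claim_records):
    """Aggregate claim-level illness data to determine which illnesses were
    claimed on each case."""
    by_case = {}
    for rec in claim_records:
        by_case.setdefault(rec["case_id"], set()).add(rec["illness"])
    return by_case

first_extract = {"C001", "C002", "C003"}
second_extract = {"C001", "C002", "C004"}
added, missing = compare_extracts(first_extract, second_extract)
# added -> {"C004"}; missing -> {"C003"}

claims = [
    {"case_id": "C001", "illness": "chronic beryllium disease"},
    {"case_id": "C001", "illness": "asbestosis"},
    {"case_id": "C002", "illness": "asbestosis"},
]
illnesses_by_case = aggregate_illnesses(claims)
# illnesses_by_case["C001"] -> {"chronic beryllium disease", "asbestosis"}
```

The set-difference step is what surfaced the dropped and duplicate cases described above; the aggregation step mirrors how claim-level illnesses were reported at the case level in the analysis file.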
Since Energy officials had not yet decided which case records to retain and which to delete at the time of our extract, we decided to leave the cases identified as duplicates in our analysis file. We reviewed available system documentation, performed electronic testing and reviewed determination letters for cases that had physician panel determinations to determine that the data contained in the second extract was sufficiently reliable for our purposes. During our electronic testing, we discovered a discrepancy between the December 31, 2003, status information included in our file and the December 31, 2003, status information reported by Energy on its Web site. On further discussion with Energy officials and contractors, we determined that when running the query, Energy contractors had calculated the December 31, 2003, status information using the wrong field in the database. Energy contractors gave us a third data file containing the correct status information that we then appended to the analysis file. We then computed additional descriptive statistics, including frequencies and cross-tabulations to determine the number and status of cases received as of December 31, 2003. To determine the extent to which Energy policies and procedures help employees file timely claims for state workers’ compensation benefits, we reviewed Energy’s regulations, policies, procedures, and communications with claimants. In addition, we interviewed key Energy officials and contractors at Energy facilities. We also interviewed panel physicians and contractors responsible for case development. In addition, we interviewed advocates, claimants, and officials at the National Institute for Occupational Safety and Health. Finally, we conducted site visits to three Energy facilities in Oak Ridge, Tennessee—the state accounting for the largest number of Subtitle D cases. 
To estimate the number of claims for which there will not be willing payers of workers’ compensation benefits, we reviewed the provisions of workers’ compensation programs in the 9 states that account for more than three-quarters of the cases filed. The 9 states are: Colorado, Idaho, Iowa, Kentucky, New Mexico, Ohio, South Carolina, Tennessee, and Washington. The results of our analysis cannot necessarily be applied to the remaining cases filed nationwide. Because of data limitations, we assumed that: (1) all cases filed would receive a positive determination by a physician panel; (2) all workers lost wages because of the illness and were not previously compensated for this loss; and (3) in all cases, the primary contractor rather than a subcontractor at the Energy facility employed the worker. While we believe that the first two assumptions would not affect the proportions shown in each category, the third assumption could result in an underestimate of the proportion of cases lacking willing payers to the extent that some workers may have been employed by subcontractors that used commercial insurers or state funds for workers’ compensation coverage. Some subcontractors use these methods of workers’ compensation coverage because they may not employ enough workers to qualify for self-insurance under some state workers’ compensation programs. We also interviewed Energy officials, key state workers’ compensation program officials, workers’ compensation experts, private insurers, and the contractors operating the major facilities in each of the states to determine the method of workers’ compensation coverage these facilities used. Finally, we took several steps to identify possible options for changing the program in the event that there may not be willing payers of benefits. We reviewed existing laws, regulations, and programs; analyzed pending legislation; and considered characteristics of existing federal and state workers’ compensation programs. 
We also identified the issues that would be relevant for policymakers to consider in implementing these options. In addition to the above contacts, Melinda L. Cordero, Mary Nugent, and Rosemary Torres Lerma made significant contributions to this report. Also, Luann Moy and Elsie Picyk assisted in the study design and analysis; Margaret Armen provided legal support; and Amy E. Buck assisted with the message and report development.
Subtitle D of the Energy Employees Occupational Illness Compensation Program Act of 2000 allows the Department of Energy (Energy) to help its contractors' employees file state workers' compensation claims for illnesses determined by a panel of physicians to be caused by exposure to toxic substances while employed at an Energy facility. This report examines the effectiveness of the benefit program under Subtitle D and focuses on four key areas: (1) the number, status, and characteristics of claims filed with Energy; (2) the extent to which Energy policies and procedures help employees file timely claims for these state benefits; (3) the extent to which there will be a "willing payer" of workers' compensation benefits, that is, an insurer that--by order from or agreement with Energy--will not contest these claims; and (4) a framework that could be used for evaluating possible options for changing the program. During the first 2 1/2 years of the program, ending December 31, 2003, Energy had completely processed about 6 percent of the more than 23,000 cases that had been filed. Energy had begun processing nearly 35 percent of the cases, but processing had not yet begun on nearly 60 percent of the cases. Further, insufficient strategic planning and systems limitations complicate the assessment of Energy's achievement of goals related to case processing, as well as goals related to program objectives, such as the quality of the assistance provided to claimants in filing for state workers' compensation. While Energy got off to a slow start in processing cases, it is now processing enough cases that there is a backlog of cases waiting for review by a physician panel. Energy has taken some steps intended to reduce this backlog, such as reducing the number of physicians needed for some panels. Nonetheless, a shortage of qualified physicians continues to constrain the agency's capacity to decide cases more quickly. 
Consequently, claimants will likely continue to experience lengthy delays in receiving the determinations they need to file workers' compensation claims. In the meantime, Energy has not kept claimants sufficiently informed about the delays in the processing of their claims as well as what claimants can expect as they proceed with state workers' compensation claims. GAO estimates that more than half of the cases associated with Energy facilities in 9 states that account for more than three-quarters of all Subtitle D cases filed are likely to have a willing payer of benefits. Another quarter of the cases in these 9 states, while not technically having a willing payer, have workers' compensation coverage provided by an insurer that has stated that it will not contest these claims. However, the remaining 20 percent of the cases in these 9 states lack willing payers and are likely to be contested. This has created concerns about program equity in that many of these cases may be less likely to receive compensation. Because of data limitations, these percentages provide an order of magnitude estimate of the extent to which claimants will have willing payers. These estimates could change as better data become available or as circumstances change, such as new contractors taking over at individual facilities. The estimates are not a prediction of actual benefit outcomes for claimants. Various options are available to improve payment outcomes for the cases that receive a positive physician panel determination, but lack willing payers. While not recommending any particular option, GAO provides a framework that includes a range of issues to help the Congress assess options if it chooses to change the current program. One of these issues in particular--the federal cost implications--should be carefully considered in the context of the current and projected federal fiscal environment. 
Please note that the recommendations for this report are identical to the recommendations in GAO-04-515.
Enacted in 1965 as title XIX of the Social Security Act, Medicaid is a federally aided, state-administered medical assistance program. At the federal level, the program is administered by the Health Care Financing Administration (HCFA), an agency within the Department of Health and Human Services. Within broad federal guidelines, each state designs and administers its own Medicaid program, which HCFA must approve for compliance with current law and regulations. HCFA is also responsible for providing program guidance and oversight to the state programs. Nationwide, Medicaid served approximately 34 million low-income people in fiscal year 1994, with combined federal and state expenditures of $143 billion. California established its Medicaid program, named Medi-Cal, in 1965. The cost of the Medi-Cal program was estimated to be about $15 billion in federal and state funds in fiscal year 1994, serving about 5.4 million people. The California Department of Health Services (DHS) is the agency responsible for administering the Medi-Cal program. It determines policy, establishes fiscal and management controls, contracts with managed care health plans, and reviews program activities. California has over 20 years of experience with Medi-Cal managed care programs. DHS began contracting with Prepaid Health Plan (PHP) pilot projects in 1968. Abuses and scandals plagued the early years of PHP contracting, resulting in beneficiaries being denied access to care. This led the California legislature to pass the Waxman-Duffy Prepaid Health Plan Act in 1972, which established standards for California Medicaid PHP contracts and for program administration. Controls have been continually strengthened over the years through amendments to the Waxman-Duffy Act. The Knox-Keene Health Care Service Plan Act of 1975 gave the California Department of Corporations authority to license and regulate fully capitated PHPs in the state. 
One Waxman-Duffy amendment made Knox-Keene licensure a prerequisite to obtaining a Medi-Cal PHP contract. With the advent of the Waxman-Duffy and Knox-Keene acts, the majority of then-contracting PHPs had to leave the Medi-Cal program because they failed to meet the new standards. Beginning in the 1980s, the state enacted several pieces of legislation authorizing the development and testing of alternative ways to deliver managed health care services to Medi-Cal beneficiaries. The first legislation, in 1981, authorized the development of pilot Primary Care Case Management (PCCM) programs. Subsequent legislation, in 1982, authorized County Organized Health Systems (COHS) and a Geographic Managed Care (GMC) program, and also permitted routine PCCM contracting. Medi-Cal managed care is currently built on a foundation of PHPs and PCCMs. Contractors are all paid on a capitated basis for the services they provide; that is, the state pays the managed care plan a monthly fee for each enrollee, and the plan assumes responsibility for the full cost of the services it has contracted to provide. PHPs are capitated to provide all basic benefits covered by Medi-Cal, excluding a few selected services such as organ transplants, chronic renal dialysis, long-term care, and dental care. The capitation fee is intended to equal DHS’ cost of providing the same services on a fee-for-service basis to an actuarially equivalent population. PCCMs are operated by physicians and other primary care providers who are capitated to provide all outpatient physician services and to manage all of the services provided to their enrollees. They may elect to provide certain additional services for an increased capitation fee. The capitation fee for PCCMs is set at 95 percent of the fee-for-service equivalent. All services not capitated are available to the PCCM enrollee on a fee-for-service basis. 
DHS rewards PCCMs for effective case management by paying them a percentage of the amount by which the state’s costs for the noncapitated services fall below the projected costs for an equivalent non-case-managed population. 

California also uses other managed care delivery systems. COHSs deliver health care to Medicaid beneficiaries in three counties—San Mateo, Santa Barbara, and Solano. A COHS is a local agency that contracts with the state Medicaid program to administer a capitated, comprehensive, case-managed health care delivery system. The COHS is responsible for administering claims, controlling utilization, and providing services to all Medicaid beneficiaries residing in the county. Beneficiaries in the COHS area must enroll in the COHS. They have a wide choice of managed care providers but cannot obtain services under the traditional fee-for-service system unless authorized by the COHS. All Medi-Cal services are arranged for by COHSs through subcontracts with providers. The state plans to have COHSs in two more counties—Orange and Santa Cruz—in 1995. 

California began a GMC pilot in Sacramento County in 1994. Under this project, the state contracts with several managed care plans to serve that county’s Aid to Families With Dependent Children (AFDC) population on a mandatory basis and other Medicaid beneficiaries on an optional basis. The state is planning an additional GMC project in San Diego County. 

Presently, approximately 890,000 Medicaid beneficiaries are enrolled in managed care plans in 20 of the state’s 58 counties. Table 1 shows enrollment by type of plan. Most of the managed care enrollments are voluntary; that is, each beneficiary may choose to receive care through either a managed care plan or the traditional fee-for-service system. In addition, in some counties, Medicaid beneficiaries have a choice of managed care plans. 
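The capitation arithmetic described above can be sketched as follows. This is an illustrative sketch only; the per-member rate, enrollment, projected and actual noncapitated costs, and the savings-share percentage are hypothetical figures, not program data (the actual share paid to PCCMs is set by DHS).

```python
# Illustrative sketch of the two capitation arrangements described above:
# a PHP paid a monthly fee per enrollee set at the fee-for-service (FFS)
# equivalent, and a PCCM paid 95 percent of the FFS equivalent plus a
# share of any savings on noncapitated (fee-for-service) services.

def php_monthly_payment(ffs_equivalent_pmpm: float, enrollees: int) -> float:
    """PHP capitation: FFS-equivalent rate per member per month."""
    return ffs_equivalent_pmpm * enrollees

def pccm_monthly_payment(ffs_equivalent_pmpm: float, enrollees: int,
                         projected_noncap_cost: float,
                         actual_noncap_cost: float,
                         savings_share: float) -> float:
    """PCCM capitation: 95% of the FFS equivalent, plus a share
    (hypothetical here) of savings on noncapitated services."""
    capitation = 0.95 * ffs_equivalent_pmpm * enrollees
    savings = max(0.0, projected_noncap_cost - actual_noncap_cost)
    return capitation + savings_share * savings

# Hypothetical figures: $100 per member per month FFS equivalent,
# 1,000 enrollees, $50,000 projected vs. $40,000 actual noncapitated
# costs, and a 25 percent savings share.
print(php_monthly_payment(100.0, 1000))                              # 100000.0
print(pccm_monthly_payment(100.0, 1000, 50_000.0, 40_000.0, 0.25))   # 97500.0
```

Under these assumed numbers, the PCCM receives less in base capitation than a PHP would but earns back part of the difference through effective case management of the noncapitated services.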
DHS monitoring and evaluation of Medi-Cal contractors consist of annual medical and periodic financial audits of all Medi-Cal managed care plans. DHS also performs periodic monitoring and oversight, including quarterly site visits to managed care plans by DHS contract management staff. Managed care plans are reviewed for compliance with federal and state regulations, with plan contract requirements, and with plans’ own procedures. The DHS Audits and Investigations Division conducts the annual medical quality assurance audits and the periodic financial audits (approximately every 2 years) of all Medi-Cal managed care plans. The medical audit focuses on plan performance in areas such as accessibility, continuity of care, quality assurance, personnel licensure, preventive services, grievances, and facilities and equipment. DHS contract management staff’s monitoring of managed care plan performance in terms of access and quality of care includes (1) investigation of complaints from beneficiaries, county welfare departments, beneficiary advocate groups, and providers; (2) review of disenrollments from plans; (3) review of emergency room visits by plan members; (4) follow-up of contractor corrective action plans for deficiencies identified in medical and financial audits; and (5) reviews of plan capacity and access. At the time of our review, DHS employed 17 contract managers. Each contract manager was responsible for one to three health plans. In addition to DHS’ periodic financial and medical audits, health plans involved in Medi-Cal managed care must undergo several other types of audits on a regular basis. These include the state Department of Corporations’ medical and financial audits of all health maintenance organizations (HMO); annual certified public accountant audits of health plans; and HCFA-sponsored biennial independent cost, access, and quality assessments of PCCMs and COHSs. 
Effective monitoring and oversight activities are critical to the success of any state Medicaid managed care program to ensure that beneficiaries have access to quality health care. However, a 1992 HCFA review found that DHS was not always able to manage and monitor its managed care program well enough to ensure that the health plans it contracted with were meeting all their responsibilities or that beneficiaries were receiving needed services in a timely fashion. Our review found three areas for continued concern. First, although DHS monitors managed care plans for compliance with their Medicaid contracts, it provides little guidance and training to those responsible for this important task. Second, the state does minimal review of plans’ provider financial incentives to ensure that they do not encourage inappropriately withholding health care services. Third, the state does little monitoring to ensure that services are actually provided through the Early and Periodic Screening, Diagnostic, and Treatment (EPSDT) program, which focuses on preventive services for children. After its 1992 review, HCFA criticized DHS’ monitoring of managed care plans. Representatives of advocacy groups we interviewed also criticized DHS’ contract management capabilities and lack of enforcement activity. Although DHS often cites managed care plans for noncompliance with contractual obligations, advocacy groups said effective enforcement is not undertaken. A December 1994 study by the Center for Health Care Rights in Los Angeles concluded that Medi-Cal health plans with a history of poor care had not been penalized or forced to make improvements. The Center reviewed a sample of health plans’ medical audits over time and DHS activities after deficiencies were discovered. In its report, the Center expressed concern with the inconsistent application of deficiency ratings and the lack of DHS and health plan follow-up for serious quality of care problems found during an audit. 
Contract managers we interviewed expressed concerns of their own, including their inability to monitor managed care health plans on a proactive basis. They said that they dealt with problems brought to their attention but did not have time to anticipate, identify, and resolve emerging problems. Heavy workloads are one reason for this problem. After its 1992 review of DHS, HCFA cited the agency’s lack of staff as one reason for DHS’ inadequate monitoring, and an advocacy group we spoke with also said contract managers’ heavy workloads had an adverse effect on Medicaid beneficiaries’ access to quality health care. Although the state has added staff, a HCFA official told us that DHS has just enough people to keep current programs running and that heavy workloads remain a HCFA concern. Another factor involved in DHS’ inadequate monitoring that HCFA found in its 1992 review was a generally low level of contract manager experience and technical expertise. Our review also indicated that DHS could improve its monitoring and management of health plans by giving contract managers more training and guidance. In our discussions with DHS officials and contract managers, we found that only on-the-job training and ad hoc workshops were provided. Only one of the five contract managers we interviewed said enough training was provided. Other contract managers wanted additional training on topics such as how to interpret a contract, how to review plans’ cost and utilization reports, and what to look for during site visits. We also found that DHS had not provided contract managers with written policies and procedures on how to perform their duties. For instance, contract managers said they did not have adequate guidance regarding what they should do when contractors fail to comply with state requirements. They believed that DHS policy on the use of sanctions for noncompliance was vague and, therefore, not easily implemented. 
DHS agreed contract managers could use more guidance and training. Managed care plans that are paid on a capitated basis, such as PHPs and PCCMs, often give incentives to their providers to encourage them to control costs. If the costs of the services these plans provide are higher than the capitation payments they receive, the plans must make up the difference. As a result, primary care physicians in managed care plans typically serve as “gatekeepers” who must preapprove certain services for their patients. Financial incentive arrangements adjust the compensation paid to primary care physicians to discourage them from providing health services such as inpatient hospital care, referrals to specialists, and certain diagnostic tests when the services are unneeded. While there are no reliable current data on the extent to which managed care plans use financial incentives, evidence suggests that most HMOs use some incentives. All three Medi-Cal PHPs we spoke to regarding their financial arrangements do use incentives. One plan, which has salaried primary care physicians, pays them a bonus of up to 20 percent of salary partly on the basis of the cost of individual physician referrals for specialty and hospital care. A second plan pays a medical group on a capitated basis to provide virtually all medical services in-house except inpatient care. Any surplus or deficit up to a specified limit in the inpatient hospital budget is shared equally with physicians. The third plan, which passes along almost all the financial risk, compensates each of its medical groups on a capitated basis for nearly all medical services, including services for which patients must be referred outside the group. It also uses bonus incentives for low hospital utilization. Appendix II contains a more detailed discussion on the use of financial incentives. 
Ideally, financial incentives operate to reduce unnecessary medical procedures, but they also have the potential to deny patients beneficial and necessary services. Although there are few data on whether financial incentives actually reduce the quality of medical care, the American Medical Association, the Department of Health and Human Services, advocacy groups, academic experts, and our past reports have commented on the potential of incentives to impair quality. Among the factors cited as influencing how much of a hazard incentives may pose are (1) the extent to which a physician’s compensation is placed at risk for services approved by, but not directly provided by, the physician; (2) whether the incentives are based on the service utilization patterns of individual physicians or of a group of physicians in the aggregate; (3) whether a physician’s risk is spread over a large patient pool and the duration of the period used for computing a bonus or deficit; (4) whether the managed care plan provides stop-loss insurance to limit a physician’s risk; and (5) whether the plan has an effective quality assurance program that attempts to counteract any adverse effect the incentive may have on patient care. Although DHS reviews financial incentive arrangements, officials told us they have no criteria or guidelines regarding the types of financial incentives that are acceptable. Their auditors, who review the Medicaid managed care plans’ financial statements, use their individual professional judgment when analyzing financial arrangements between the plans and their subcontractors. The auditors focus primarily on whether the compensation a plan pays to a subcontractor is adequate to protect the financial viability of the subcontractor, not on whether the compensation arrangements threaten quality of care. DHS relies primarily on the Department of Corporations’ licensing process to uncover unsatisfactory financial incentive arrangements. 
A Department of Corporations official told us that the only general rule applied when reviewing financial incentive arrangements is that a provider or provider group that does not provide hospital or other institutional care may not be capitated for such care. Apart from applying this rule, the Department of Corporations reviews financial incentive arrangements on a case-by-case basis. In general, though, it does not examine the arrangements closely because the managed care plan is responsible for ensuring that its subcontracting arrangements meet the state’s quality assurance requirements. Pursuant to the Omnibus Budget Reconciliation Act of 1990 (P.L. 101-508), HCFA is currently developing standards for imposing restrictions on the financial incentive arrangements that managed care plans contracting under Medicare and Medicaid can enter into with their physicians. The final regulation is undergoing internal review and is scheduled to be published in 1995. 

EPSDT strives to improve low-income children’s health by providing a framework for the timely detection and treatment of health problems. However, research shows that the percentage of eligible children participating is low, indicating that the program has not been entirely successful. Under managed care, the success of an EPSDT program depends largely on the capitated providers who are generally responsible for furnishing most primary and preventive health services. Though DHS contracts with managed care plans to provide EPSDT services to Medi-Cal enrollees, DHS does not know to what extent the services are provided because of inadequate data and monitoring. In addition, in violation of federal and state requirements, the state does not require that all eligible children be periodically notified of available services. HCFA requires states to ensure that Medicaid-eligible children, from birth through age 20, are provided preventive health services under the EPSDT program. 
EPSDT services consist of screening services; vision, dental, and hearing tests; diagnostic services; and other medical services needed to correct conditions discovered during screenings. Problems with California’s EPSDT data and monitoring were noted by HCFA in its 1992 review. HCFA reported that plans kept poor medical records that made it impossible to determine whether children received appropriate services. HCFA recommended that the state adopt procedures to track Medicaid providers’ activities and to validate that children received necessary diagnostic and treatment services in accordance with EPSDT requirements. We found continuing problems in the current program. Specifically, DHS (1) allowed plans to submit aggregate data regarding their provision of EPSDT services that allow for little or no analysis or verification; (2) did not comply with federal and state requirements by periodically notifying, or ensuring PHPs periodically notify, all eligible children of available services; and (3) could not ensure that all children referred for diagnosis and treatment actually received it. PHPs may report EPSDT services to the state on an encounter level (Form PM-160) or monthly aggregate basis. According to DHS officials, most report aggregate data. From these aggregate data, it cannot be determined how many children actually received at least one screening, whether eligible children were receiving all required screenings, or whether children referred for diagnosis and treatment because of screening results actually received treatment. Aggregate data are also difficult to verify, making validation of reported services impossible. HCFA’s policy had been that PHPs can be “deemed” to have met EPSDT requirements because they are assumed to emphasize preventive care. According to a DHS official, this policy gives DHS no incentive, for HCFA reporting purposes, to track or verify whether plans actually provided EPSDT services. 
HCFA recently revised its reporting requirements and no longer allows “deeming.” As a result, in September 1994, California changed its reporting requirements and now requires PHPs to submit encounter-level reports (Form PM-160). The state, however, admits that implementation will be slow and incremental. 

Federal and state requirements say DHS must notify EPSDT-eligible children through age 20 of upcoming screenings and of the availability of assistance with transportation and scheduling appointments, and must record the response to this notification. DHS issues notices for children under age 3 except for those enrolled in PHPs that submit aggregate data to the state. However, the state does not determine whether PHPs issue notices except through annual audits, in which issuance of screening notices may not be reviewed. Furthermore, DHS neither issues notices nor requires PHPs to issue notices to children aged 3 and older. 

Federal law requires the state to provide diagnostic and treatment services for children with conditions discovered during screenings. To determine whether children referred for diagnosis and treatment actually received it, follow-up is needed. DHS conducts follow-up for children enrolled in PCCMs through county Child Health Disability Prevention offices. However, it does not do follow-up for children in PHPs because the task is considered to be case management, a role PHPs are required to perform. The state does not verify whether PHPs perform follow-up as required. 

DHS’ Medical Review Branch does annual medical quality assurance audits of managed care plans to determine if the plans comply with federal and state regulations, contract requirements, and the plans’ own procedures. When problems or deficiencies related to EPSDT are noted in these annual audits, contract managers are responsible for determining whether managed care plans corrected them. 
However, the DHS audits are designed to assess the quality of services rendered to beneficiaries in general. They are not designed to assess or estimate the number of children who received screenings or to determine the rate at which health plans provided diagnosis and treatment to children who needed it based on screening. In addition, unless children had at least three visits to a provider during the year, their records are not audited. 

Movement toward the expansion of managed care in the Medi-Cal program began in 1991 with the passage of state legislation that emphasized managed care as a means for delivering health care services. After considering different models of managed care, including using a single plan organized by a local agency with broad representation in each county, the state decided to use two plans per county. A draft plan was released in January 1993. Public hearings followed and, based on the testimony of interested parties, the state revised its plan for expansion. It was published in final form in March 1993. By December 1996, California intends to have approximately 3.4 million Medicaid beneficiaries enrolled in managed care—a majority of the estimated 6 million Medi-Cal beneficiaries. Most of these beneficiaries will be enrolled as part of the state’s 12-county expansion plan. Figure 1 shows the counties with Medi-Cal managed care programs by type of enrollment after the expansion has occurred. 

California sees managed care as the solution to many of the access and quality problems of its current, largely fee-for-service program. These problems include difficulties in finding physicians who accept Medicaid patients and the lack of a quality assurance system for Medicaid beneficiaries in the fee-for-service program. State officials believe the expanded managed care program will bring a greater number of providers into the program and give beneficiaries greater continuity of care. 
In addition, they believe that managed care offers better opportunities for controlling costs than a fee-for-service environment. The state intends to contract with two health plans in each of 12 counties—one would be a “local initiative” and the other would be a “mainstream” plan. The local initiative could take different forms. County governments were given the first opportunity to establish local initiatives. If a county government chose not to, a local initiative could have been formed by a consortium of local stakeholders. However, all 12 counties submitted a formal letter of intent to establish local initiatives. The mainstream plan will be a single private plan selected through a bidding process; joint ventures will be considered. All health plans within the new program will have to meet federal and state requirements for access and quality for managed care plans. Under the state’s proposal, only AFDC and AFDC-related Medicaid beneficiaries (who together make up approximately 67 percent of the Medicaid beneficiaries statewide) will be required to enroll. Other categories of eligibles such as Supplemental Security Income (SSI) and SSI-related Medicaid beneficiaries will not be required to enroll in managed care plans, but they may do so if the plans in their areas have the capacity to serve these groups and their coverage is provided for in the plans’ Medicaid contracts. DHS set the minimum enrollment level at 22,500 for each plan. DHS believes this number will ensure the viability of the mainstream plan and the safety-net providers participating in the local initiative. Safety-net providers include community health centers and hospitals that provide charity care and serve relatively high numbers of Medicaid beneficiaries. 
Maximum enrollment levels, still to be determined, will be established to moderate the effect the enrollment of beneficiaries in the mainstream plan will have on disproportionate-share payments to safety-net hospitals; these payments have partially compensated the hospitals for their volume of charity care and Medicaid services. The state expects to set a maximum enrollment for the mainstream plan at approximately 30 to 40 percent of the total Medicaid managed care enrollees in most counties. 

To implement several provisions of the program, the state will have to seek a waiver of certain federal requirements that set minimum standards for state Medicaid programs. For example, to require a beneficiary to join a managed care plan, the state must obtain a waiver of federal Medicaid requirements that allow beneficiaries a choice of providers. 

In an effort to improve program operations and oversight, DHS has proposed several enhancements to the Medi-Cal managed care program. These changes are described in the state’s September 1994 “Request for Application” that solicited health plan applications for the mainstream plan contracts in the 12 counties. The request for application also forms the basis for agreements with the local initiatives. The proposed changes are based on input received from interested parties throughout the state and include the following: 

- Increasing access to, and coordination of, care through promoting the integration of public health and specialty services within managed care. The expansion will include traditional and safety-net providers, including community clinics and family planning providers, in managed care networks. Health plans will be expected to enter into memorandums of understanding with local health departments for the provision of specified public health services, for example, immunization, family planning, and detection and treatment of sexually transmitted diseases. 

- Improving the health status of the Medi-Cal population through strengthening and standardizing the definition of preventive health care services. Contracting plans will notify members of the availability of an initial health assessment and will be required to complete the assessment within 120 days of enrollment. The plans will be required to meet other requirements consistent with the EPSDT periodicity schedule. 

- Strengthening quality assurance efforts through expanding monitoring and data reporting capabilities. Contracting plans will report encounter-level data to DHS that will allow the state to identify all of the diagnoses and procedures performed by the health care provider during an interaction with a patient. 

- Removing barriers to accessing care through the development of standards for cultural and linguistic services. 

In addition, once the request for application procurement process is completed, the request for application requirements will be developed into a standardized program manual for training and use by DHS staff. 

On the basis of our review of the current program operations, we believe the state must improve its monitoring and oversight activities to avoid problems in the expanded program. Although the proposed changes in contract requirements with health plans may improve the program, these provisions must be implemented, monitored, and enforced. As described earlier, under the current program, the state’s monitoring of managed care plans has not been sufficient to determine whether beneficiaries received EPSDT services that plans had contracted to provide. Problems identified to date in a primarily voluntary enrollment program could be significantly magnified in a much larger program with mandatory enrollment. In addition, the state needs to provide better guidance and training to its contract managers and to expand oversight to include reviews of financial arrangements and EPSDT programs. 
In recognition of these needs, DHS has requested an additional 102 positions for the managed care program. 

While the state seeks to benefit from competition, we believe that by allowing only two plans to serve an area, California is limiting beneficiaries’ choices and may be reducing its ability to deal with plans that do not fulfill their contractual requirements. Limiting mandatory enrollment to AFDC and AFDC-related beneficiaries seems reasonable as the program is first implemented, but the state should reconsider including the SSI population once the expanded program is running smoothly. How the traditional and safety-net providers fare under the expansion plan remains to be seen, but the state’s plan attempts to strike a reasonable balance between protecting traditional and safety-net providers and moving them toward a competitive managed care system. 

Representatives of private managed care health plans that currently have Medi-Cal contracts have voiced concerns over the two-plan model. They believe that the limit on private contractors is unnecessary and that it eliminates meaningful choice for beneficiaries. In addition, they have challenged the state’s contention that the two-plan model will create a competitive environment. For example, private plans that are normally competitors may need to work together in the larger counties under the two-plan model to form large enough entities to handle the enrollment requirements. This would put them in the awkward position of sharing confidential information with their competitors. The California Association of Health Maintenance Organizations believes that without competition, Medi-Cal contractors will not be responsive to market demands for increased quality of care. According to an analysis done by the California Legislative Analyst’s Office, having two plans in a county—a local initiative plan that does not have to compete and a mainstream plan—does not represent a competitive market. 
The analysis said that once the mainstream plan has been selected, the state will feel compelled to continue existing contracts regardless of how poorly plans may perform, because of the major disruption that would occur if enrollees were forced to change plans. 

DHS officials stated that the primary reason for the two-plan model is that the state lacks the resources to administer and oversee more plans. They said another major reason is that having multiple plans would force the state to lower the minimum enrollment levels set for each plan (now set at 22,500 enrollees). DHS officials believe that the initial bidding process for the mainstream plan and subsequent rebidding will provide adequate competition among private sector plans. With regard to choice, DHS officials pointed out that each of the contracting plans will contain large networks of providers, giving beneficiaries a choice of providers within each plan. Finally, state officials said that they will act aggressively against plans when problems with access and quality are identified—even if that means terminating a contract. 

We believe that California’s two-plan model will restrict competition and beneficiary choice. For example, competition will be restricted in Los Angeles County, where several health plans appeared ready to compete for the mainstream contract. While beneficiaries may have a choice of many providers, the plans’ policies and practices are a controlling factor in the care that providers give. Selecting a different provider will not solve a problem that arises because of plan decisions, such as a decision to restrict the availability of high-cost specialized services. Relying on only two plans could also reduce the state’s willingness to cancel a contract and thus weaken its ability to make plans comply with contract provisions. Each plan will have a large number of enrollees, making canceling a plan extremely disruptive to many people. This will be especially true in large metropolitan areas. 
In addition, because DHS has already had difficulty ensuring access, quickly finding alternative sources of care for large numbers of enrollees could be very difficult. While the state's ability to administer and oversee the program is important, administrative limits are not a persuasive argument for restricting the number of plans to two. The number of plans may not have as much effect on DHS' administrative workload as the total number of providers or number of enrollees in the managed care program. DHS officials were also concerned that having multiple plans would force the state to lower enrollment levels. However, DHS could keep the minimum and maximum enrollment levels that it established for both the local and mainstream plans, but use the maximum enrollment number for the mainstream plan in each county as a "global cap" for multiple participating plans. As in other states, California's decision not to require the aged and disabled of the SSI and SSI-related population to enroll resulted from weighing the potential benefits of including the entire Medicaid population in the managed care program against the uncertainties of how to implement and manage such a program. Several parties have expressed diverse views on California's decision. In our March 1993 report on state managed care efforts, we noted that although some states had included other population groups in their managed care programs, most states targeted their Medicaid managed care programs to AFDC and AFDC-related beneficiaries. They did this because AFDC and AFDC-related populations (1) most closely resemble patients in existing primary care practices and generally do not require the same specialized health care services as the SSI population, (2) are more likely to benefit from the types of preventive services that are the hallmark of a managed care delivery strategy, and (3) have the greatest problems getting access to care. 
Officials in two California counties operating COHSs have stated that the way to achieve the maximum benefits from managed care is to have the entire Medicaid population, including the SSI recipients, enrolled. They attributed some of the financial savings of their programs to the mandatory enrollment of the SSI population in their counties. They said that including SSI beneficiaries increases the COHSs' ability to spread risk and to achieve savings. In contrast, after its 1992 review, HCFA officials recommended that instead of including more beneficiaries as the COHS officials suggest, California limit the size of its program by targeting its managed care effort to specific high-risk/high-cost beneficiary groups, rather than enroll the entire AFDC and AFDC-related population in so many counties. HCFA officials believe that, given the state's limited administrative resources, the expansion effort could be improved by targeting managed care on the groups that would benefit most from case management while, at the same time, controlling costs. They said that administering a managed care program of over 3 million AFDC and AFDC-related beneficiaries will create an even greater administrative strain on the state than it is now experiencing. However, DHS officials point out that the primary goal of the new program is to improve Medicaid beneficiaries' access to health care services while controlling costs. They believe that enrolling the entire AFDC and AFDC-related population in the designated regions is the best way to increase access for the largest number of people at this time. DHS officials also point out that the state is dealing with problems associated with high-risk/high-cost beneficiaries through the design and implementation of special projects. The state is expanding its Programs of All-Inclusive Care for the Elderly that provide a continuum of care from primary and acute care to long-term care. 
In addition, the state has begun a medical case management program for high-risk/high-cost beneficiaries. We believe that excluding the SSI population from the expanded program may limit the potential for cost savings. However, at the same time, leaving out the SSI population during the implementation of the program may be a prudent decision with such a large expansion. Administering this program is going to be a major challenge for the state. The state can reconsider the desirability of enrolling the SSI population once the expanded program is running smoothly. Counties in California are financially responsible for providing health care to those who are medically indigent but do not qualify for Medi-Cal. To do so, some counties administer and partially fund their own health care systems that include hospitals and clinics. These county systems, which also provide care to Medi-Cal beneficiaries, are recognized as traditional and safety-net providers. Critics, including Los Angeles County officials and some advocacy groups, believe that the two-plan model may harm the financial viability of safety-net and traditional providers, diminishing their ability to provide care for the medically indigent and for undocumented aliens. With few exceptions, counties have little or no experience in running managed care systems. County hospitals and clinics receive their revenues from a variety of sources, including county appropriations and third party payers such as private insurance, Medicare, and Medicaid. These third party payers primarily reimburse on a fee-for-service basis. Although Medicaid does not reimburse for all the health care expenses a county incurs, it has been a reliable source of substantial revenue. 
Medicaid disproportionate-share payments—supplemental payments to hospitals that serve large numbers of Medicaid and other low-income patients—also have become an important revenue source for California’s county hospitals, particularly to subsidize the care of the uninsured. If county hospitals lose Medicaid patients to other managed care providers, county revenues to fund health care will be affected in two ways: (1) direct Medicaid reimbursement for services provided will be lost and (2) disproportionate-share payments will decline as the number of Medicaid beneficiaries they serve drops. The state’s managed care expansion model extends participation to safety-net and traditional providers to ensure that the hard-to-serve populations will have access to health care. The local initiative must include all safety-net providers that agree to the terms and conditions required of similar providers affiliating with the initiative. The local initiative will also be required to submit standards for including traditional providers. Furthermore, DHS will encourage mainstream plans to include safety-net and traditional providers in their networks by assigning favorable weighting to mainstream plan proposals that provide for the inclusion of traditional and safety-net providers. Most of the counties involved in the expansion program have asked to be allowed to set up COHSs with no competition from mainstream plans. County officials believe that these arrangements are the best way to provide for the Medicaid population as well as the indigent and the undocumented alien populations. COHSs would eliminate the competition of the private plan and, therefore, minimize the potential financial losses and risk safety-net providers face. 
However, Los Angeles County health care officials believe that even if plans contract with safety-net providers, the state's plan for expanding managed care will lead to a loss of essential revenues for their health care system because many Medicaid beneficiaries who now obtain care in county facilities will enroll in mainstream plans. They fear this will destroy the viability of some safety-net providers, resulting in reduced access to care for the remaining Medicaid beneficiaries and the indigent uninsured. County officials are also concerned that the mainstream plans will enroll the healthier beneficiaries, leaving the sicker and more costly beneficiaries in the county system. In addition, according to the California Association of Public Hospitals, because traditional and safety-net providers lack experience running managed care plans, they may be at a disadvantage competing with experienced private plans. State officials told us that there are no plans to establish COHSs beyond the ones already operating or scheduled to start in 1995. The officials believe that assurances have been built into the strategic plan so that counties will receive adequate revenues to ensure the financial viability of safety-net providers. Specifically, the state has put in place safeguards to reduce biased selection between health plans. For example, beneficiaries who do not choose plans will be equitably distributed between plans. Furthermore, the expansion plan protects disproportionate-share payments in three ways:
- The enrollment floor for the local initiative is based on total disproportionate-share hospital days in the county and is designed to protect disproportionate-share hospital payments flowing to that county.
- A 2-year implementation period and a 2-year data lag will allow disproportionate-share hospitals time to make the transition to the new managed care environment.
- Safety-net providers will play a significant role in the local initiative and therefore will be able to arrange admissions to hospitals in a way that protects their disproportionate-share supplemental payments.
Despite these efforts to protect some of the revenue sources for counties, however, state officials believe that county facilities could operate more efficiently. They believe more efficient operations by the county, and by safety-net providers in general, will lead to better care for Medicaid beneficiaries because care will be provided in more appropriate settings, such as physicians' offices rather than hospital emergency rooms. We believe the state's plan attempts to strike a reasonable balance between protecting traditional and safety-net providers and moving them toward a competitive system. To insulate them completely from competition would preclude gaining any of the benefits of competition. California has devoted considerable effort to its proposed expansion of Medi-Cal managed care. It has involved the public in its planning process, modified its plan based on the comments of interested parties, and adjusted its schedule for implementation when problems have been encountered. The expanded program will attempt to improve Medicaid beneficiaries' access to care and control costs. However, given incentives to control costs by limiting services, the state will need to provide effective oversight to ensure that managed care plans provide beneficiaries with high-quality comprehensive care. This is especially important for EPSDT and other preventive services where outreach efforts are often required. The state's decision to exclude the SSI population from mandatory enrollment is consistent with the practices of other states and seems reasonable at this time. However, we believe that California may want to consider mandatory enrollment of the high-cost SSI population once the expanded program is running smoothly. 
Insulating county health systems and other traditional and safety-net providers from competition to avoid any risk would eliminate the benefits of competition. The state’s plan to protect these providers attempts to strike a reasonable balance between protecting their viability and fostering competition. The success of California’s planned expansion is largely dependent on its ability to increase access by creating competition and choice. We believe that the two-plan model will unnecessarily restrict both competition and beneficiary choice. In addition, it may limit the state’s ability to take action against plans that do not comply with contract provisions. We obtained official comments on this report from HCFA and DHS. We have incorporated their views where appropriate. Specifically, HCFA agreed with our conclusion that more competition is desirable in the proposed expansion. In its comments, DHS (1) emphasized the improvements that the new Medi-Cal managed care contracting program will bring under the expanded program; (2) provided additional information and top management’s perspective on DHS monitoring and oversight activities; and (3) acknowledged that while competition will be limited under the expansion plan, DHS believes that the expanded access and choice for beneficiaries are major improvements over the current fee-for-service environment. As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 14 days from its issue date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of HCFA, the Director of the Office of Management and Budget, the Director of DHS, and other interested parties. We also will make copies available to others upon request. Please call me on (202) 512-4561 if you or your staff have any questions about this report. 
To obtain general information on California’s Medi-Cal managed care program and plans for expansion, we interviewed state officials from DHS; the Departments of Finance, Corporations, and Personnel; and the California Medical Assistance Commission. We also interviewed federal officials from HCFA’s headquarters and Region IX office, contractors, advocacy groups, and associations. We reviewed documents related to the Medi-Cal managed care program and plans for expansion, including the following: federal and state program laws, regulations, policies, and procedures; HCFA Region IX’s Review of California’s Administration of Its Managed Care Program for fiscal year 1993; and California’s strategic plan for expanding the Medi-Cal managed care program. To obtain specific information on contract administration, we interviewed DHS Medi-Cal Managed Care Division officials, including contract managers and supervisors, as well as contractors and advocates. We also reviewed documents related to contract management, including contract manager duty statements, state contract administration job series requirements, and managed care contracts. To better understand the financial incentive arrangements between managed care plans and their physicians, we interviewed officials from HCFA, the state Department of Corporations, the American Medical Association, Group Health Association of America, experts, advocates, and contractors. We also reviewed documents regarding physician incentive arrangements, including a 1990 Department of Health and Human Services report, legislative provisions, proposed HCFA regulations, our previous reports, and journal articles. To assess how adequately managed care plans provide EPSDT services to children, we interviewed officials from DHS’ Children’s Medical Services Branch, its Medi-Cal Managed Care Division, its Audits and Investigations Division, policy consultants, and contractors. 
We also reviewed documents related to the federal EPSDT program and the state Child Health Disability Prevention Program. Our work was performed between May 1993 and December 1994 in accordance with generally accepted government auditing standards. Financial incentive arrangements may be loosely defined as compensation arrangements between a health maintenance organization (HMO) and its physicians that are intended to encourage physicians to control the services provided to plan enrollees. HMOs often assume responsibility for providing services to their enrollees for a fixed, predetermined capitation fee. Thus, they are at risk for the difference between the capitation fee and the cost of the care they provide. Although precise figures are unavailable, it is believed that most HMOs use financial incentives in their compensation arrangements with physicians. While financial incentives may operate to reduce unnecessary or inappropriate services, many analysts believe they also have the potential to reduce the quality of medical care by denying patients beneficial treatments. As a result, in our prior reports, we, like others, have called for increased oversight and quality assurance monitoring. Pursuant to a congressional mandate in the Omnibus Budget Reconciliation Act of 1990 (P.L. 101-508), HCFA is developing regulations to restrict the financial incentive arrangements that HMOs serving as Medicare or Medicaid contractors can use. Financial incentives are only one of several means an HMO may use to control the amount of care its physicians provide. Other methods include requiring physicians to obtain the HMO's preapproval when ordering expensive procedures, educating physicians regarding cost control, reprimanding and possibly terminating the contracts of physicians who exceed utilization guidelines, and screening out physician applicants who do not seem to share the plan's cost-control goals. 
A number of factors counteract the tendency of financial incentives to impair the quality of care. These include the professional ethics of the medical profession, concern about malpractice liability, the desire of HMOs to attract and retain patients, HMOs' quality assurance programs, and quality assurance reviews by external entities, such as government regulatory agencies. In addition, the structure of a managed care plan may affect the financial incentive arrangement it uses. In two-tier HMOs, the plan contracts directly with individual physicians. Examples of two-tier plans are (1) staff model plans, which provide care through physicians they employ, and (2) independent practice association plans, which contract with physicians to provide care through their independent practices. In three-tier HMOs, the plan contracts with one or more medical groups or associations, which in turn contract with individual physicians. Thus, in a three-tier HMO, the contract between the plan and the group and the contract between the group and its physicians may each contain financial incentive arrangements, and the arrangements may differ significantly. The manner in which HMOs compensate primary care physicians and physician groups for the services they provide is the starting point for an analysis of financial incentive arrangements. Financial incentives commonly take the form of adjustments to primary care physicians' compensation. In addition, the method of compensation employed can itself have incentive effects. HMOs generally pay primary care physicians in one of the following three ways:
- Fee-for-service: It has been estimated that about 40 percent of HMOs pay primary care physicians by the traditional fee-for-service method, compensating them for each unit of service they provide. The fee-for-service rate is usually lower than the fees the physicians would charge nonplan patients.
- Capitation: About half of individual primary care physicians are paid in this manner. Most three-tier HMOs pay the middle-tier medical group or association on a capitated basis, whereby a fixed monthly fee per enrollee is deemed payment in full for all services provided to that enrollee.
- Salary: About 15 percent of HMO primary care physicians are paid a salary for their primary care services. But an estimated 80 percent of staff model HMOs pay their primary care physicians in this manner. Salaried physicians cannot increase their income either by providing additional services, as physicians paid on a fee-for-service basis can, or by providing fewer services in order to increase the number of patients assigned to them, as capitated physicians can. As a result, salary is widely regarded as having the most "neutral" incentive effects of any of the three modes of compensation.
Financial incentive arrangements take many forms. Typically, however, they consist of mechanisms for adjusting the compensation of primary care physicians or groups to encourage them to limit service utilization. Although the incentives may be designed to limit the services provided by the primary care physicians themselves ("direct services"), it is believed that they more commonly target services preapproved by the primary care physicians but provided by others ("referral services"). Primary care physicians in HMOs usually serve as "gatekeepers" who must authorize all or most nonprimary care, including inpatient hospital care, visits to specialists, and diagnostic tests and other forms of ancillary care. The following are four commonly used financial incentive arrangements. In a shared deficit arrangement, the HMO may establish separate budgets for primary care, inpatient hospital, specialty, and ancillary services. If there is a deficit in any of the referral funds, primary care physicians are required to absorb a portion of it. Those physicians with the highest referral costs are sometimes required to contribute the most. 
Primary care physicians who are compensated on a fee-for-service basis may also be required to absorb a portion of any deficit in the primary care fund to discourage them from providing too many services themselves. Often a portion of the physician’s compensation, usually not exceeding 20 percent, is withheld to be applied against a possible deficit. Some HMOs limit the physician’s liability to the amount withheld. Others require the physician to make up deficits beyond that amount through direct repayment, deductions from future compensation, or increased withholding rates. The following describes a typical shared deficit arrangement: A primary care physician is paid $20 per patient per month for providing direct services (and for providing administrative services, such as serving as a gatekeeper for nonprimary care). Of the $20, $4 is withheld to cover a possible deficit in the inpatient hospital care fund. If there is a deficit in that fund, some or all of the $4 will not be returned to the physician, depending on the physician’s hospital referral rate. In addition, the physician may be liable for additional amounts beyond the $4. Shared surplus arrangements operate much like shared deficit arrangements, except instead of being penalized if there is a fund deficit, physicians are given a bonus if there is a surplus. Bonuses are widely used in staff model HMOs, to reward salaried primary care physicians for holding down referral costs. The following is an example of a shared surplus arrangement: A primary care physician is paid $16 per patient per month for direct services. If there is a surplus in the specialty care fund, the physician can receive an additional amount based on the physician’s specialist referral rate. Often, the additional amount is limited to a percentage of the physician’s compensation. Thus, in this case, the maximum bonus might be $4 per patient. 
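The withhold arithmetic in the shared deficit and shared surplus examples above can be sketched as follows. This is an illustrative sketch only: the function names and the proportional settlement rule are assumptions for illustration, not terms from any actual plan contract, and it models the common case in which the physician's liability is capped at the amount withheld.

```python
# Illustrative sketch of the shared deficit/surplus withhold arithmetic
# described above. The proportional settlement rule and liability cap
# are assumptions; actual plan contracts vary.

def monthly_payment(capitation: float, withhold: float, patients: int) -> float:
    """Amount paid to the physician each month, net of the withhold."""
    return (capitation - withhold) * patients

def settle_withhold(withheld_total: float, fund_budget: float,
                    fund_spending: float) -> float:
    """Amount of the withhold returned to the physician at settlement.

    If the referral fund ran a deficit, some or all of the withhold is
    kept by the plan (here, dollar for dollar against the deficit); if
    the fund broke even or ran a surplus, the full withhold is returned.
    """
    if fund_spending <= fund_budget:          # surplus or break-even
        return withheld_total
    deficit = fund_spending - fund_budget
    forfeited = min(withheld_total, deficit)  # liability capped at the withhold
    return withheld_total - forfeited

# The $20 capitation / $4 withhold example from the text, for one
# physician with 100 patients over one month:
paid = monthly_payment(20.0, 4.0, 100)        # $1,600 paid up front
withheld = 4.0 * 100                          # $400 held against a deficit
returned = settle_withhold(withheld, fund_budget=5_000.0, fund_spending=5_250.0)
# A $250 fund deficit: the physician forfeits $250 of the $400 withheld.
```

A shared surplus arrangement would work symmetrically, adding a bonus when `fund_spending` falls below `fund_budget` instead of forfeiting the withhold.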
In a shared surplus and deficit arrangement, if there is a deficit in a fund, primary care physicians are required to contribute toward it. Conversely, if there is a surplus, they receive a bonus. If money was withheld from the physicians and there is a surplus, they receive the amount withheld plus a bonus. The following illustrates how a shared surplus and deficit arrangement might work: A physician is paid $20 per patient per month for direct services. Of the $20, $4 is withheld to cover a possible deficit in the ancillary services fund. If there is a deficit in that fund, the physician may forfeit the $4 and possibly be liable for additional contributions as well. If there is a surplus in the fund, the physician may receive the $4 withheld plus a bonus. When physicians are capitated to provide not only primary care but also referral services, the physicians’ compensation for the services they render directly can be reduced by 100 percent of the cost of the referral services. This is a more potent incentive to deny patients referral services than the deficit/surplus arrangements under which the physicians bear only a portion of the cost. It is also an arrangement that can impose a considerable financial risk on the physicians, depending on the scope of the referral services for which they are capitated. The risk can range from being responsible only for the costs associated with processing in-office tests performed by the primary care physician to being responsible for the cost of all patient care—including inpatient hospitalization. The following example illustrates how a capitation-for-referral-services arrangement might work: A physician group is paid $100 per patient per month to provide all primary care, specialty, and ancillary services. The group consists exclusively of primary care physicians and must contract with others for specialty services. 
Since the group must absorb the entire cost of specialty care, it could potentially pay out a significant share of, or even more than, the entire compensation it receives for treating its patients. There are no reliable current data regarding the extent to which HMOs use financial incentive arrangements or the prevalence of the different types of arrangements. The best available data are derived from surveys conducted in the late 1980s by the Group Health Association of America, an HMO trade association; by a consulting firm under contract to HCFA; and by Alan Hillman, a leading academic expert on financial incentives. According to the Group Health Association of America's 1987 survey, 85 percent of HMOs used financial incentive arrangements. The study by HCFA's consulting firm, conducted in 1988, found that incentives were used in 95 percent of HMOs. In a 1990 journal article, Hillman stated that the great majority of HMOs use incentives. Although the data are not conclusive, there is evidence that HMOs are increasingly using financial incentive arrangements that shift more risk to providers. There is also evidence that it has become increasingly common for HMOs to capitate physicians, or (more typically) physician groups, for all medical services, including inpatient hospital care. This type of arrangement obviously places the physician or group at unlimited financial risk, unless, as is often the case, the plan provides stop-loss insurance. It is not a new development for primary care physicians to be capitated for some referral services. A 1987 study by Alan Hillman found that primary care physicians were capitated for the cost of outpatient lab tests 40 percent of the time. The Group Health Association of America's 1987 study found that capitation fees paid to primary care physicians usually covered not only primary care services, but also referrals to specialists and ancillary services. However, they rarely covered inpatient hospital care. 
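The capitation-for-referral-services example above ($100 per patient per month covering all primary, specialty, and ancillary services) can be sketched as a simple net-position calculation. Only the $100 capitation comes from the text; the enrollment and cost figures below are hypothetical, chosen to show how the group can pay out more than it takes in.

```python
# Illustrative sketch of a group's monthly net position under
# capitation for referral services. All figures except the $100
# capitation rate are hypothetical.

def group_net_position(capitation: float, patients: int,
                       specialty_costs: float, own_costs: float) -> float:
    """Capitation revenue minus contracted specialty costs and the
    group's own cost of providing direct care."""
    revenue = capitation * patients
    return revenue - specialty_costs - own_costs

# 1,000 enrollees at $100/month yields $100,000 in revenue. If the
# specialty care the group must contract for costs $45,000 and its own
# practice costs are $70,000, the group runs a $15,000 loss.
net = group_net_position(100.0, 1_000, specialty_costs=45_000.0, own_costs=70_000.0)
```

The sketch shows why this arrangement is a stronger incentive than a shared deficit: the group bears 100 percent of each referral dollar, not just a portion.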
Officials at both HCFA and California’s Department of Corporations told us they believe this is changing and that the capitation of medical groups for all medical services, including inpatient care, is becoming widespread. Many factors can affect the likelihood that a financial incentive will influence a physician’s practice decisions. The extent of a physician’s risk is the physician’s maximum possible financial gain or liability under an incentive arrangement. In the case of a deficit or a capitation-for-referral-services arrangement, the physician’s liability is potentially unlimited. In the case of a surplus arrangement, the physician’s potential gain is limited because the amount by which costs can be constrained below the HMO’s budget is finite. HMOs may limit the extent of a physician’s potential liability or gain to a percentage of the compensation the physician is paid for direct services.For this reason, extent of risk is sometimes defined as the maximum possible percentage increase or decrease in a physician’s compensation for direct services that an incentive can produce. Obviously, the more a physician is placed at risk for a type of service, the greater the incentive for the physician to limit the use of that service. In a deficit or a capitation-for-referral-services arrangement, the HMO may sometimes provide its physicians or physician groups with stop-loss insurance to limit their potential risk. Stop-loss insurance is typically provided on a per patient basis and is designed to protect physicians whose patients suffer catastrophic illnesses. Coverage usually begins at between $1,000 and $9,000 per patient per year for outpatient referral services and between $10,000 and $100,000 per patient per year for inpatient hospital services. 
If stop-loss insurance is not combined with a limitation on the physician's overall risk to a specified percentage of his or her direct compensation, and if the physician happens to have an unusually sick group of patients, then the insurance may not prevent the physician from having to pay out more than his or her entire direct compensation. Deficits and surpluses can be distributed on the basis of the cost performance of either an individual physician or a group of physicians. Numerous observers, including the American Medical Association, independent analysts, advocacy groups, and HMO industry representatives, have stated that incentives based on group performance are less likely to influence a physician's behavior. When a physician's cost performance is aggregated with that of other physicians, the effect on the physician's income of each decision he or she makes regarding patient services is reduced. In addition, a physician with unusually sick patients would be less likely to reduce care provided to patients needing expensive treatments because the physician's performance would be aggregated with that of physicians with healthier patients. Some researchers have suggested, however, that peer pressure might actually make group performance incentives more potent than individual performance incentives. In instances where the distribution of a surplus or deficit is based on the performance of a group of physicians, it is generally believed that as the size of the group increases, the effect of the incentive on physician behavior may diminish. The greater the number of physicians, the smaller the impact of each physician's decisions on the physician's incentive payment. Numerous analysts have maintained that the more patients assigned to a physician, the less effect financial incentives are likely to have on the physician's behavior. 
Having more patients increases the probability that a physician can recoup the cost of treating the sickest patients from the savings generated by healthier ones. As a result, the physician would be less inclined to withhold an expensive treatment from a sick patient. According to many analysts, the shorter the period over which a physician’s cost performance is assessed, the greater the impact the incentive is likely to have on the physician’s behavior. A shorter period allows a physician less opportunity to recoup the cost of treating a very sick patient from healthy patients. Risk assessment periods generally range from 1 month to 1 year. According to the Group Health Association of America, at least one-third of HMOs assess financial incentive payments more frequently than annually. The less the physician is compensated for primary care services, the more sensitive the physician will be to an incentive that would increase or decrease the physician’s income on the basis of his or her referral service utilization rate. The greater the proportion of a physician’s total income that comes from an HMO, the greater the likely effect of that HMO’s financial incentives on the physician’s practice pattern. In general, the lower the cost target a physician must achieve to obtain a bonus or avoid contributing towards a deficit, the more powerful the incentive to withhold services. An approach used by some HMOs that puts physicians under particular pressure is one that requires individual physicians to meet or beat a group average. This places the physicians in competition with each other. Three Southern California PHPs that are Medi-Cal contractors use the following financial incentive arrangements: One plan, which is a staff model, uses a bonus arrangement. The plan compensates primary care physicians at its 30 medical centers on a salaried basis. 
It seeks to control referral costs by providing a bonus to the physicians that is based in part on the extent to which they hold down specialty and inpatient hospital referrals. The bonus is linked to the referral rates of individual physicians and cannot exceed 20 percent of a physician's salary.

Another plan, which is a group model, uses a combination deficit and surplus arrangement. The plan pays its medical group a capitation fee as compensation for providing all medical services except inpatient hospital care. Since the group is staffed to provide virtually all of the services it is capitated for, the arrangement does not fall into the capitation-for-referral-services incentive category. To control hospital referrals, the plan establishes a budget for hospitalization and shares any surplus or deficit with the group. The amount that can be shared with the group is subject to a relatively modest cap. The group in turn divides its share of any surplus or deficit among its physicians equally, rather than on the basis of their individual referral rates.

A third plan, which is a network/independent practice association model, uses capitation for some referral services plus a bonus arrangement for others. The plan pays the medical groups and independent practice associations it contracts with a capitation fee that covers virtually all medical services except inpatient hospital care. Because the groups and associations contract out some specialty and surgical care, this arrangement amounts to a capitation-for-referral-services type of incentive. In addition, the plan uses a bonus incentive to reward groups and associations that keep hospital referral rates low. The plan believes that most of its groups and associations adjust the compensation of their primary care physicians to encourage them to control specialty and hospital referral costs.

Few data exist on the extent to which financial incentives affect physicians' service utilization patterns.
A number of studies have shown that HMOs in general hospitalize patients at a significantly lower rate than traditional fee-for-service practices. However, these studies did not assess whether the HMOs' lower rate was attributable to their use of financial incentives or to other differences between HMOs and fee-for-service providers. There have been at least two studies of the utilization effects of different forms of compensation within the HMO setting. But in the course of our work, we were unable to find any systematic analyses of the effects of specific types of incentives on the utilization of the services the incentives are intended to reduce, such as the effect of a bonus for controlling specialty referrals on such referrals. Even if the effects of financial incentives on service utilization were known, their impact on quality of care would not be readily ascertainable because there is currently no consensus about how quality of care should be defined and measured. If a financial incentive induced a physician to withhold a service that a patient did not need, then quality of care would not be impaired. The difficulty lies in determining which services are "needed." Although the effect on patient outcomes is one measure of quality, efforts to measure the impact of care or outcomes are still in their infancy.

Robert Hughes, Assistant Director
Aleta Lee Hancock, Evaluator-in-Charge, (213) 346-8069
Richard N. Jensen, (202) 512-7146
Carla D. Brown
Jay Goldberg

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by using fax number (301) 258-4066, or by TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed California's Medicaid managed care program, focusing on: (1) state oversight of managed care contractors; (2) state plans for expansion; and (3) key issues in implementing the expanded program. GAO found that: (1) California plans a major expansion of its Medi-Cal managed care program in selected counties; (2) by the end of 1996, the number of enrollees in California managed care plans will total over 3.4 million, almost four times the number currently enrolled; (3) enrollment will be mandatory for women and children, who will choose from one of two plans, unlike the current voluntary system with several choices; (4) mandatory enrollment could magnify the problems already associated with California's Medi-Cal program, such as availability and quality of services, capabilities of management staff, and providers' financial incentives to limit care; and (5) any benefits of competitive managed care could be lessened by California's decision to limit beneficiaries to two health plans.
PCASG fund recipients that responded to our October 2008 survey reported that they used PCASG funds to hire or retain health care providers and other staff, add primary care services, and open new sites. (See table 1.) Recipients also said that the PCASG funds helped them improve service delivery and access to care for the patients they served. As of September 20, 2009, PCASG recipients reported to LPHI that they had used PCASG funds—in conjunction with other funds, such as other federal grants and Medicaid reimbursement—to support services provided to almost 252,000 patients. These patients had over 1 million encounters with a health care provider, two-thirds of which were for medical and dental care and one-third of which were for mental health care. A small number of encounters were for specialty care. The patients served by the PCASG fund recipients were typically uninsured or enrolled in Medicaid. We reported in July 2009 that for the first several months during which PCASG funds were available, at more than half of the PCASG fund recipients, at least half—and at times over 70 percent—of the patient population was uninsured. Of the 20 recipients that reported in our October 2008 survey that they used PCASG funds to hire health care providers, half hired both medical and mental health providers. (See fig. 1.) One recipient reported that by hiring one psychiatrist, it could significantly increase clients’ access to services by cutting down a clinic’s waiting list and by providing clients with a “same-day” psychiatric consultation or evaluation. Another recipient reported that it hired 23 medical care providers, some of whom were staffed at its new sites. Some recipients reported that hiring additional providers enabled them to expand the hours some of their sites were open. 
Of the 23 recipients that responded to our survey, 17 reported they used PCASG funds to retain health care providers, and 15 of these reported that they also used grant funds to retain other staff. For example, one recipient reported that PCASG funds were used to stabilize positions that were previously supported by disaster relief funds and donated services. Nineteen of the 23 PCASG fund recipients that responded to our survey reported using PCASG funds to add or expand medical, mental health, or dental care services, and more than half of these added or expanded more than one type of service. Specifically, 11 added or expanded medical care, 15 added or expanded mental health care, and 4 added or expanded dental care services. In addition, PCASG fund recipients also reported using grant funds to add or expand specialty care or ancillary services. One recipient reported that it used PCASG funds to create a television commercial announcing that a clinic was open and that psychiatric services were available there, including free care for those who qualified financially. Almost all of the PCASG fund recipients that responded to our survey reported they used PCASG funds for their physical space. Ten recipients that responded to our survey reported using grant funds to renovate existing sites, such as expanding a waiting room, adding a registration window, and adding patient restrooms, to accommodate more patients. Officials from one PCASG fund recipient reported that relocating to a larger site allowed providers to have additional examination rooms. PCASG fund recipients that responded to our survey reported that certain program requirements—such as developing a network of local specialists and hospitals for patient referrals and establishing a quality assurance and improvement program that includes clinical guidelines or evidence-based standards of care—have had a positive effect on their delivery of primary care services. 
In addition, they reported that the PCASG funds helped them improve access to health care services for residents of the greater New Orleans area. For example, one PCASG fund recipient reported that the PCASG funds have helped it to expand services beyond residents in shelter and housing programs to include community residents who were not homeless but previously lacked access to health care services. Representatives of other PCASG fund recipients have reported that their organizations improved access to care by expanding services in medically underserved neighborhoods or to people who were uninsured or underinsured. Representatives of local organizations also told us the PCASG provided an opportunity to rebuild the health care system and shift the provision of primary care from hospitals to community-based primary care clinics. PCASG fund recipients also used other federal hurricane relief funds to help support the restoration of primary care services. According to LDHH data, as of August 2008, 11 PCASG fund recipients expended $12.9 million of the SSBG supplemental funds that were awarded to Louisiana and that the state designated for primary care. They used these funds to pay for staff salaries, purchase medical equipment, and support operations. For example, one recipient used SSBG supplemental funds to hire new medical and support staff and, as a result, expanded its services for mammography, cardiology, and mental health. The two PCASG fund recipients that received a total of almost $12 million in SSBG supplemental funds designated for mental health care used those funds to provide crisis intervention, substance abuse, and other mental health services, mostly through contracts with other organizations and providers.
The majority of funds were expended on the categories LDHH identified as “substance abuse treatment and prevention services,” “immediate intervention and crisis response services,” and “behavioral health services for children and adolescents.” As of August 2008, most of the 25 PCASG fund recipients had retained or hired a health care provider who had received a Professional Workforce Supply Grant incentive payment to continue or begin working in the greater New Orleans area. Among the health care providers working for PCASG fund recipients, 69 received incentives that totaled $4.5 million. The number of those health care providers who were employed by individual PCASG fund recipients ranged from 1 or 2 at 7 recipient organizations to 10 at 2 recipient organizations. Three-quarters of recipients of incentive payments were existing employees who were retained, while one-quarter were newly hired. PCASG fund recipients face significant challenges in hiring and retaining staff, as well as in referring patients outside of their organizations, and these challenges have grown since Hurricane Katrina. Recipients are taking actions to address the challenge of sustainability, but are concerned about what will happen when PCASG funds are no longer available. Although most of the 23 PCASG fund recipients that responded to our October 2008 survey hired or retained staff with grant funds, most have continued to face significant challenges in hiring and retaining staff. Twenty of the 23 recipients reported the hiring of health care providers to be either a great or moderate challenge. Among those, over three-quarters responded that this challenge had grown since Hurricane Katrina. For example, in discussing challenges, officials from one recipient organization told us that after Hurricane Katrina they had greater difficulty hiring licensed nurses than before the hurricane and that most nurses were being recruited by hospitals, where the pay was higher. 
Moreover, officials we interviewed from several recipient organizations said that the problems with housing, schools, and overall community infrastructure that developed after Hurricane Katrina made it difficult to attract health care providers and other staff. In addition, 16 of the 23 recipients reported that retaining health care providers was a great or moderate challenge. Among those, about three-quarters also reported that this challenge had grown since Hurricane Katrina. An additional indication of the limited availability of primary care providers in the area is HRSA’s designation of much of the greater New Orleans area as health professional shortage areas (HPSA) for primary care, mental health care, and dental care. Specifically, HRSA designated all of Orleans, Plaquemines, and St. Bernard parishes, and much of Jefferson Parish, as HPSAs for primary care. While some portions of the greater New Orleans area had this HPSA designation before Hurricane Katrina, additional portions of the area received that designation after the hurricane. Similarly, HRSA designated all four parishes of the greater New Orleans area as HPSAs for mental health in late 2005 and early 2006; before Hurricane Katrina, none of the four parishes had this designation for mental health. In addition, HRSA has designated all of Orleans, St. Bernard, and Plaquemines parishes and part of Jefferson Parish as HPSAs for dental care; before Katrina, only parts of Orleans and Jefferson parishes had this designation. The PCASG fund recipients that primarily provide mental health services in particular faced challenges both in hiring and in retaining providers. Six of the seven that responded to our October 2008 survey reported that both hiring and retaining providers were either a great or moderate challenge. 
Officials we interviewed from one recipient told us that while the Greater New Orleans Service Corps, which was funded through the Professional Workforce Supply Grant, had been helpful for recruiting and retaining physicians, it had not helped fill the need for social workers. Furthermore, officials we interviewed from two recipients told us that some staff had experienced depression and trauma themselves and found it difficult to work in mental health settings. Beyond challenges in hiring and retaining their own providers and other staff, PCASG fund recipients that responded to our survey reported significant challenges in referring their patients to other organizations for mental health, dental, and specialty care services. We also reported on a lack of mental health providers in our July 2009 report that examined barriers to mental health services for children in the greater New Orleans area. Specifically, 15 of the 18 organizations we interviewed for that work identified a lack of mental health providers—including challenges recruiting and retaining child psychiatrists, psychologists, and nurses—as a barrier to providing mental health services for children. In addition, we reported that HRSA's Area Resource File (ARF)—a county-based health resources database that contains data from many sources including the U.S. Census Bureau and the American Medical Association—indicated that the greater New Orleans area has experienced more of a decrease in mental health providers than some other parts of the country. For example, we found that ARF data documented a 21 percent decrease in the number of psychiatrists in the greater New Orleans area from 2004 to 2006, during which time there was a 1 percent decrease in Wayne County, Michigan (which includes Detroit and which had pre-Katrina poverty and demographic characteristics similar to those of the greater New Orleans area) and a 3 percent increase in counties nationwide.
In our July 2009 report on the PCASG, we found that an additional challenge that the PCASG fund recipients face is to be sustainable after PCASG funds are no longer available in September 2010. All 23 recipients that responded to our October 2008 survey reported that they had taken or planned to take at least one type of action to increase their ability to be sustainable—that is, to be able to serve patients regardless of the patients’ ability to pay after PCASG funds are no longer available. For example, all responding recipients reported that they had taken action—such as screening patients for eligibility—to facilitate their ability to receive reimbursement for services they provided to Medicaid or LaCHIP beneficiaries. Furthermore, 16 recipients that responded to our October 2008 survey reported that they were billing private insurance, with an additional 5 recipients reporting they planned to do so. However, obtaining reimbursement for all patients who are insured may not be sufficient to ensure a recipient’s sustainability, because at about half of the PCASG fund recipients, over 50 percent of the patients were uninsured. Many PCASG fund recipients reported that they intended to use Health Center Program funding or FQHC Look-Alike designation—which allows for enhanced Medicare and Medicaid payment rates—as one of their sustainability strategies. Four recipients were participating in the Health Center Program at the time they received the initial disbursement of PCASG funds. One of these recipients had received a Health Center New Access Point grant to open an additional site after Hurricane Katrina and had also received an Expanded Medical Capacity grant to increase service capacity, which it used in part to hire additional staff and buy equipment. Another of these recipients received a New Access Point grant to open an additional site after receiving PCASG funds. 
Beyond these four recipients, one additional recipient received an FQHC Look-Alike designation in July 2008. HRSA made additional grants from appropriations made available by the American Recovery and Reinvestment Act of 2009, awarding five PCASG fund recipients additional Health Center Program grants totaling $7.4 million as of October 19, 2009. Specifically, three PCASG fund recipients were awarded New Access Point grants totaling $3.9 million, five received Capital Improvement Program grants totaling more than $2.4 million, and five received Increased Demand for Services grants totaling nearly $1.1 million. Of the remaining 18 recipients that responded to our survey, 6 said they planned to apply for both a Health Center Program grant and an FQHC Look-Alike designation. In addition, one planned to apply for a grant only and another planned to apply for an FQHC Look-Alike designation only. Although many recipients indicated that they intended to use Health Center Program funding as a sustainability strategy, it is unlikely that they would all be successful in obtaining a grant. For example, in fiscal year 2008 only about 16 percent of all applications for New Access Point grants resulted in grant awards. About three-quarters of PCASG fund recipients reported that as one of their sustainability strategies they had applied or planned to apply for additional federal funding, such as Ryan White HIV/AIDS Program grants, or for state funding. In addition, a few reported that they had applied or planned to apply for private grants, such as grants from foundations. In our fall 2009 interviews, LPHI and PCASG recipient officials told us that there is uncertainty and concern among the PCASG fund recipients as the time approaches when PCASG funding will no longer be available. LPHI officials told us that they expect that some PCASG fund recipients might have to close, and others could be forced to scale back their current capacity by as much as 30 or 40 percent.
For example, one PCASG fund recipient official we spoke with in November 2009 told us that the organization's mobile medical units may not be sustainable without PCASG funding; services provided by mobile units are not eligible for Medicaid funding without a referral, and collecting cash from patients could make the units targets for crime. LPHI officials said they expect that the loss of PCASG funds would most affect PCASG fund recipients that serve the largest number of uninsured patients. To help PCASG fund recipients achieve sustainability, LPHI developed a sustainability strategy guide in April 2009. This guide suggests actions that the recipients could take to become sustainable entities, such as maximizing revenues by improving their ability to screen patients for eligibility for Medicaid and other third party payers, enroll eligible patients, electronically bill the insurers, and collect payment from insurers. LPHI and a PCASG fund recipient have identified additional potential approaches for securing revenues to decrease what LPHI estimated would be a $30 million gap in the PCASG fund recipients' annual revenues when PCASG funds are no longer available. The LPHI sustainability strategy guide proposed that expanding Medicaid eligibility through a proposed Medicaid demonstration project that HHS is reviewing could result in a decrease in the number of uninsured people; these are the patients for whom PCASG fund recipients are most dependent on federal subsidies. The LPHI guide also suggested that it could be helpful if Louisiana received greater flexibility to use Medicaid disproportionate share dollars for outpatient primary care not provided by hospitals. In addition, a PCASG fund recipient official told us in November 2009 that a no-cost extension for PCASG funds might help some PCASG fund recipients if they are able to stretch their PCASG dollars beyond September 30, 2010.
Although PCASG fund recipients have completed or planned actions to increase their ability to be sustainable and have received guidance from LPHI, it is unclear which recipients’ sustainability strategies will be successful and how many patients recipients will be able to continue to serve. With the availability of PCASG funds scheduled to end in less than 10 months, preventing disruption in the delivery of primary care services could depend on quickly identifying and implementing workable sustainability strategies. Mr. Chairman, this completes my prepared remarks. I would be happy to respond to any questions you or other members of the committee may have at this time. For further information about this statement, please contact Cynthia A. Bascetta at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this statement were Helene F. Toiv, Assistant Director; Carolyn Feis Korman; Deitra Lee; Coy J. Nesbitt; Roseanne Price; and Jennifer Whitworth. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The greater New Orleans area--Jefferson, Orleans, Plaquemines, and St. Bernard parishes--continues to face challenges in restoring health care services disrupted by Hurricane Katrina, which made landfall in August 2005. In 2007, the Department of Health and Human Services (HHS) awarded the $100 million Primary Care Access and Stabilization Grant (PCASG) to Louisiana to help restore primary care services to the low-income population. Louisiana gave PCASG funds to 25 outpatient provider organizations in the greater New Orleans area. GAO was asked to testify on (1) how PCASG fund recipients used the PCASG funds, (2) how recipients used and benefited from other federal hurricane relief funds, and (3) challenges recipients faced and recipients' plans for sustaining services after PCASG funds are no longer available. This statement is based on a recent GAO report, Hurricane Katrina: Federal Grants Have Helped Health Care Organizations Provide Primary Care, but Challenges Remain (GAO-09-588), other GAO work, and updated information on services, funding, and sustainability plans, which we shared with HHS officials. For the report, GAO analyzed responses to an October 2008 survey sent to all 25 PCASG fund recipients, to which 23 responded, and analyzed information related to other federal funds received by PCASG fund recipients. GAO also interviewed HHS and Louisiana Department of Health and Hospitals officials and other experts. PCASG fund recipients reported in 2008 that they used PCASG funds to hire or retain health care providers and other staff, add primary care services, and open new sites. For example, 20 of the 23 recipients that responded to the GAO survey reported using PCASG funds to hire health care providers, and 17 reported using PCASG funds to retain health care providers. In addition, most of the recipients reported that they used PCASG funds to add primary care services and to add or renovate sites.
Recipients also reported that the grant requirements and funding helped them improve service delivery and expand access to care in underserved neighborhoods. As of September 2009, recipients used PCASG funds to support services for almost 252,000 patients, who had over 1 million interactions with a health care provider. Other federal hurricane relief funds helped PCASG fund recipients pay staff, purchase equipment, and expand mental health services to help restore primary care. According to data from the Louisiana Department of Health and Hospitals, 11 recipients received HHS Social Services Block Grant (SSBG) supplemental funds designated by Louisiana for primary care, and 2 received SSBG supplemental funds designated by Louisiana specifically for mental health care. The funds designated for primary care were used to pay staff and purchase equipment, and the funds designated for mental health care were used to provide a range of services including crisis intervention and substance abuse prevention and treatment. Most of the PCASG fund recipients benefited from the Professional Workforce Supply Grant incentives. These recipients hired or retained 69 health care providers who received incentives totaling over $4 million to work in the greater New Orleans area. PCASG fund recipients face multiple challenges and have various plans for sustainability. Recipients face significant challenges in hiring and retaining staff, as well as in referring patients outside of their organizations, and these challenges have grown since Hurricane Katrina. For example, 20 of 23 recipients that responded to the 2008 GAO survey reported hiring health care providers was a great or moderate challenge, and over three-quarters of these 20 recipients reported that this challenge had grown since Hurricane Katrina. PCASG fund recipients also reported challenges in referring patients outside their organization for mental health, dental, and specialty care services. 
Although all PCASG fund recipients have completed or planned actions to increase their ability to be sustainable, recipients are concerned about what will happen when PCASG funds are no longer available. Officials of the Louisiana Public Health Institute, which administers the PCASG locally, expect that some recipients might have to close and others could be forced to scale back capacity by as much as 30 or 40 percent. They have suggested strategies to decrease what they estimate would be a $30 million gap in annual revenues when PCASG funds are no longer available. With the availability of PCASG funds scheduled to end in less than 10 months, preventing disruptions in the delivery of primary care services could depend on quickly identifying and implementing workable sustainability strategies.
IRCA provided for sanctions against employers who do not follow the employment verification (Form I-9) process. Employers who fail to properly complete, retain, or present for inspection a Form I-9 may face civil or administrative fines ranging from $110 to $1,100 for each employee for whom the form was not properly completed, retained, or presented. The Illegal Immigration Reform and Immigrant Responsibility Act (IIRIRA) of 1996 limited employer liability for certain technical violations of Form I-9 paperwork requirements. According to the act, a person or entity is considered to have complied with the employment verification process if the person or entity made a good faith attempt to properly complete the Form I-9. Employers who knowingly hire or continue to employ unauthorized aliens may be fined from $275 to $11,000 for each employee, depending on whether the violation is a first or subsequent offense. Employers who engage in a pattern or practice of knowingly hiring or continuing to employ unauthorized aliens are subject to criminal penalties consisting of fines up to $3,000 per unauthorized employee and up to 6 months imprisonment for the entire pattern or practice. ICE is primarily responsible for enforcing the employer sanction provisions of IRCA as well as many other immigration-related laws. ICE has approximately 5,000 investigative agents in 26 Office of Investigations field offices that are headed by special agents in charge. ICE’s Worksite Enforcement/Critical Infrastructure Unit oversees programs to protect U.S. critical infrastructure, including military, economic, industrial, and transportation infrastructure, and manages the agency’s worksite enforcement efforts. Prior to the creation of ICE in March 2003, INS enforced IRCA and other immigration-related laws. 
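As a rough sense of scale for the civil fine exposure described above, the sketch below multiplies the per-employee fine range through a hypothetical workforce. The 50-employee figure is invented for illustration; the $110 to $1,100 per-form range is the one cited in this statement:

```python
# Hypothetical illustration of civil fine exposure for Form I-9
# paperwork violations, using the $110-$1,100 per-employee range
# described above. The 50-employee count is an assumption.

def paperwork_fine_range(num_employees, min_fine=110, max_fine=1_100):
    """Total minimum and maximum civil fines for improperly completed,
    retained, or presented Forms I-9."""
    return num_employees * min_fine, num_employees * max_fine

low, high = paperwork_fine_range(50)
print(f"50 faulty forms: ${low:,} to ${high:,}")  # $5,500 to $55,000
```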
IIRIRA required INS to operate three voluntary pilot programs to test electronic means for employers to verify an employee’s eligibility to work: the Basic Pilot Program, the Citizen Attestation Verification Pilot Program, and the Machine-Readable Document Pilot Program. The three pilot programs were to test whether pilot verification procedures could improve the existing Form I-9 process by reducing (1) document fraud and false claims of U.S. citizenship, (2) discrimination against employees, (3) violations of civil liberties and privacy, and (4) the burden on employers to verify employees’ work eligibility. IIRIRA established the three pilot programs to be in effect for 4 years, but Congress extended authorization for the pilots for an additional 2 years in 2002 and for another 5 years in 2003. Congress also mandated DHS to expand the Basic Pilot Program to employers in all 50 states by December 2004, which DHS did. DHS terminated the Citizen Attestation Verification Pilot Program and the Machine-Readable Document Pilot Program in 2003 because of technical difficulties and unintended consequences, such as increased fraud and discrimination, identified in evaluations of the programs. The Basic Pilot Program is a part of USCIS’s Systematic Alien Verification for Entitlements Program, which provides a variety of verification services for federal, state, and local government agencies. USCIS estimates that there are more than 150,000 federal, state, and local agency users that verify immigration status through the Systematic Alien Verification for Entitlements Program. In fiscal year 2004, about 2,300 employers actively used the Basic Pilot Program within the Systematic Alien Verification for Entitlements Program. The Basic Pilot Program provides participating employers with an electronic method to verify their employees’ work eligibility. 
Employers may participate voluntarily in the Basic Pilot Program but are still required to complete Forms I-9 for all newly hired employees in accordance with IRCA. After completing the forms, these employers query the pilot program’s automated system by entering employee information provided on the forms, such as name and Social Security number, into the pilot Web site within 3 days of the employees’ hire date. The pilot program then electronically matches that information against information in SSA and, if necessary, DHS databases to determine whether the employee is eligible to work, as shown in figure 1. The Basic Pilot Program electronically notifies employers whether their employees’ work authorization was confirmed. Queries that the DHS automated check cannot confirm are referred to USCIS staff, called immigration status verifiers, who check employee information against information in other DHS databases. When the pilot system cannot confirm an employee’s work authorization status either through the automated check or the check by an immigration status verifier, the system issues the employer a tentative nonconfirmation of the employee’s work authorization status. In such cases, employers must notify the affected employees of the finding, and the employees have the right to contest their tentative nonconfirmations within 8 working days by contacting SSA or USCIS to resolve any inaccuracies in their records. During this time, employers may not take any adverse actions against those employees, such as limiting their work assignments or pay. When employees do not contest their tentative nonconfirmations within the allotted time, the Basic Pilot Program issues a final nonconfirmation for the employees. For workers who do not successfully contest a tentative nonconfirmation, and for those whom the pilot program finds are not work-authorized, employers are required either to immediately terminate their employment or to notify DHS of their continued employment. 
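The verification flow described above can be sketched as a short Python example. This is a hypothetical illustration of the decision logic only, not the actual system: the SSA and DHS lookups are stand-in dictionaries, and all function and variable names (`verify`, `automated_check`, `resolve_tentative`, `status_verifier`) are assumptions introduced for this sketch.

```python
# Hypothetical sketch of the Basic Pilot Program decision flow.
# Record stores and lookup logic are simplified stand-ins.

CONFIRMED = "employment authorized"
TENTATIVE_NONCONFIRMATION = "tentative nonconfirmation"
FINAL_NONCONFIRMATION = "final nonconfirmation"

CONTEST_WINDOW_DAYS = 8  # working days an employee has to contest


def automated_check(query, ssa_records, dhs_records):
    """Step 1: match Form I-9 data against SSA and, if needed, DHS records."""
    ssa = ssa_records.get(query["ssn"])
    if ssa and ssa["name"] == query["name"] and ssa.get("work_authorized"):
        return CONFIRMED
    # If SSA alone cannot confirm, check DHS records for noncitizen workers.
    dhs = dhs_records.get(query.get("alien_number"))
    if dhs and dhs.get("work_authorized"):
        return CONFIRMED
    return None  # cannot confirm automatically


def verify(query, ssa_records, dhs_records, status_verifier):
    """Full flow: automated checks, then manual review, then nonconfirmation."""
    if automated_check(query, ssa_records, dhs_records) == CONFIRMED:
        return CONFIRMED
    # Unconfirmed queries are referred to a USCIS immigration status
    # verifier, modeled here as a callable returning True/False.
    if status_verifier(query):
        return CONFIRMED
    # Otherwise the employer receives a tentative nonconfirmation; the
    # employee may contest within CONTEST_WINDOW_DAYS working days.
    return TENTATIVE_NONCONFIRMATION


def resolve_tentative(contested, records_corrected):
    """An uncontested or unsuccessful contest becomes a final nonconfirmation."""
    if contested and records_corrected:
        return CONFIRMED
    return FINAL_NONCONFIRMATION
```

As the report notes, a final nonconfirmation obligates the employer either to terminate the worker or to notify DHS of continued employment; that last step is an employer action outside the system and so is not modeled here.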
There is ongoing congressional consideration about employment verification and worksite enforcement efforts, and various initiatives have been proposed related to these issues, including possible new temporary worker programs. Since January 2004, the current administration has discussed the possibility of initiating a guest worker program in which foreign workers would be granted status for employment in the United States for a specified period of time. Similarly, some recent legislative proposals would provide a means for foreign workers to obtain temporary employment and possible permanent residency or citizenship at a later date. Other initiatives propose revising visa programs to increase the number of foreign workers legally admitted to the United States. In addition, legislative proposals have addressed methods for enhancing employment verification and worksite enforcement efforts. For example, one proposal would make use of the Basic Pilot Program mandatory for all employers, and another would increase the fine amounts for employers who knowingly hire unauthorized workers. These initiatives reflect differing perspectives on employment verification and worksite enforcement and touch on a variety of related issues, such as the number of foreign workers, if any, needed in the United States, the economic impact of illegal aliens residing in the country, and policy decisions on ways to address the millions of illegal aliens in the United States. The current employment verification process relies on employers’ review of work eligibility documents to determine whether employees are authorized to work, but the process has several weaknesses. Document and identity fraud have hindered employers’ efforts to reliably verify employees’ work eligibility under the Form I-9 process. In addition, the large number and variety of documents acceptable for proving work eligibility have undermined the process. 
We have previously reported on the need to reduce the number of acceptable work eligibility documents and to improve the integrity of the documents. The Basic Pilot Program, as a voluntary, automated verification program, offers a mechanism with potential to enhance the employment verification process by reducing document fraud. ICE officials said that access to Basic Pilot Program information could help the agency better target its worksite enforcement efforts by identifying employers who do not follow program requirements. However, existing weaknesses in the program, such as the inability of the program to detect identity fraud, delays in entering data into DHS databases, and some employer noncompliance with pilot program requirements, could become more significant and additional resources could be needed if employer participation in the program greatly increased or was made mandatory. In 1986, IRCA established the employment verification process based on employers’ review of documents presented by employees to prove identity and work eligibility. Under the process, employers must request that newly hired employees present a document or documents that confirm employees’ identity and work eligibility. On the Form I-9, employees must attest that they are U.S. citizens, lawfully admitted permanent residents, or aliens authorized to work in the United States. Employers must then certify that they have reviewed the documents presented by their employees to establish identity and work eligibility and that the documents appear genuine and relate to the individual presenting them. In making their certifications, employers are expected to judge whether the documents presented are obviously counterfeit. Employers are deemed in compliance with IRCA if they have followed the verification procedures, including instances when an unauthorized alien may have presented fraudulent documents that appeared genuine. 
In addition, on the Form I-9, employers are required to reverify the employment eligibility of individuals whose work authorization has expired, such as aliens with temporary work authorization, to determine whether the individuals are authorized to continue to work. Since the passage of IRCA in 1986, document fraud (use of counterfeit documents) and identity fraud (fraudulent use of valid documents or information belonging to others) have made it difficult for employers who want to comply with IRCA to ensure that they employ only authorized workers through the current verification and reverification processes. In its 1997 report to Congress, the U.S. Commission on Immigration Reform noted that the widespread availability of false documents made it easy for unauthorized aliens to obtain jobs in the United States. In 1999, we reported that large numbers of unauthorized aliens have either fraudulently used valid documents that belong to others or presented counterfeit documents as evidence of employment eligibility. Furthermore, in 2004 we reported that unauthorized workers were able to use false documents to illegally gain entry to secure areas of critical infrastructure sites, such as airports, nuclear power plants, and military bases. Representatives from some of the employers and employer associations we interviewed for this review indicated that, in cases where employees present documents that employers suspect of being counterfeit, employers may not request that these employees present other documents proving their work eligibility because the employees could claim that employers are discriminating against them. To help protect against discriminatory hiring practices, such as employers requesting specific documents from foreign-looking or sounding employees, employers are prohibited under IRCA from requesting that new employees present specific documents from among the list of acceptable documents to prove their identity and work eligibility. 
Although studies suggest that the majority of employers comply with IRCA and try to hire only authorized workers, the studies have also noted that some employers knowingly hire unauthorized workers, often to exploit the workers’ low cost labor. In 1997, the U.S. Commission on Immigration Reform reported that the minority of employers who knowingly hired illegal aliens avoided sanctions by going through the motions of compliance while accepting false documents. Likewise, in 1999 we concluded that those employers who do not want to comply with IRCA can intentionally hire unauthorized aliens under the guise of having complied with the employment verification requirements by claiming that unauthorized workers presented false documents to obtain employment. The large number and variety of documents that are acceptable for proving work eligibility have also complicated employer verification efforts under IRCA. Following passage of IRCA in 1986, employees could present any of 29 different documents to establish their identity and/or work eligibility. In a 1997 interim rule, INS reduced the number of acceptable work eligibility documents from 29 to 27. Eight of these documents establish both identity and employment eligibility (e.g., U.S. passport or permanent resident card); 12 documents establish identity only (e.g., driver’s license); and 7 documents establish employment eligibility only (e.g., Social Security card without the legend “Not Valid for Employment”). The interim rule implemented changes to the list of acceptable work eligibility documents mandated by IIRIRA and was intended to serve as a temporary measure until INS issued final rules on modifications to the Form I-9. In 1998, INS proposed a further reduction in the number of acceptable work eligibility documents to 14 but did not finalize the proposed rule. 
Since the passage of IRCA, various studies have addressed the need to reduce the number of acceptable work eligibility documents to make the employment verification process simpler and more secure. In 1990, we reported that the multiplicity of work eligibility documents contributed to (1) employer uncertainty about how to comply with the employment verification requirements and (2) discrimination against authorized workers. A 1992 report prepared by the Senate Committee on the Judiciary noted that the first step to simplifying the employment verification process was to reduce the current list of acceptable work eligibility documents and make them more counterfeit-proof. In 1998, INS noted that, when IRCA was first passed, a long, inclusive list of acceptable work eligibility documents was allowed for the Form I-9 to help ensure that all persons who were eligible to work could easily meet the requirements, but as early as 1990, there had been evidence that some employers found the list confusing. In 1999 we reported that various studies of IRCA’s employment verification process advocated that the number of documents that employees can use to demonstrate employment eligibility should be reduced to make the employment verification process more secure and easier to understand. Additionally, some of the employers, employer associations, and immigration experts we interviewed for this review told us that the large number of documents acceptable for proving work eligibility and the fact that the Form I-9 has not been updated have impeded employer efforts to verify employment eligibility. Representatives from three employer associations said that member employers have expressed concerns that the Form I-9 has not been updated to reflect changes in the list of acceptable work eligibility documents, causing confusion among some employers regarding which documents are acceptable. 
In addition, among the 23 employers we interviewed, 5 discussed the need to update the Form I-9 to reflect revisions to the list of acceptable work eligibility documents. Two of these employers told us that they manually edit the Form I-9 to reflect the changes in the list of acceptable work eligibility documents. DHS officials told us that the department is assessing possible revisions to the Form I-9 process, including revisions to the number of acceptable work eligibility documents, but has not established a target time frame for completing this assessment. They said that the Handbook for Employers, which provides guidance for completing the Form I-9, would also need to be updated. In May 2005, DHS released an updated version of the Form I-9 that changed references from INS to DHS but did not modify the list of acceptable work eligibility documents on the Form I-9 to reflect changes made to the list by the 1997 interim rule. In the absence of final regulations and an updated Form I-9 and handbook, employers, employees, and other stakeholders may not clearly understand the Form I-9 process, particularly which documents are acceptable for proving work eligibility. We have previously reported on efforts to enhance the integrity of acceptable work eligibility documents, which could help reduce document fraud and make the employment verification process more secure, especially if the number of acceptable documents was reduced. For example, in 1999 we reported that INS had taken steps to increase the integrity of immigration documents, such as by issuing new employment authorization documents with visible security features like holograms and by issuing permanent resident cards with digital photographs and fingerprint images. We noted that, although INS enhanced the integrity of its documents, unauthorized aliens could present non-INS documents, such as Social Security cards, to employers to prove work eligibility. 
In 1998, we reported on estimates of costs associated with alternative proposals for SSA issuance of enhanced Social Security cards. We are currently reviewing SSA efforts to enhance the integrity of Social Security cards and how enhanced cards might strengthen the employment verification process and plan to report on these issues next year. In addition, we have previously reported on the possible use of biometrics in verification and identification processes, such as those used at U.S. ports of entry. Biometrics covers a wide range of technologies that can be used to verify identity by measuring and analyzing human characteristics. Biometrics can theoretically be very effective personal identifiers because the characteristics they measure are thought to be distinct to each person. Because they are tightly bound to an individual, biometric identifiers can be more reliable than knowledge- or token-based identifiers, such as passwords or identification cards: they cannot be forgotten and are less easily lost, stolen, or guessed. While biometrics may show promise in enhancing verification and identification processes, we have also reported on the trade-offs of using biometric indicators, such as concerns regarding the protections under current law for biometric data, the absence of clear criteria governing data sharing, and infrastructure processes such as the binding of an identity to the biometric data. We reported that while a biometric placed on a token, such as a passport or visa, cannot necessarily link a person to his or her identity, it can reduce the potential for an individual to assume multiple identities. We also reported that although federal agencies are required by statute to provide security protections for information collected and maintained by or for the agency commensurate with the risk and magnitude of harm that would result from unauthorized disclosure, disruption, modification, or destruction of the information, poor information security is a widespread federal problem with potentially severe consequences. 
In reporting on the possible use of biometrics in verification and identification processes, we identified several examples of such risks associated with using biometric data. Recent laws and legislative proposals have addressed possible ways to enhance the integrity of documents and strengthen the employment verification process. The REAL ID Act of 2005 mandated that states must meet minimum standards in developing and issuing driver’s licenses before federal government authorities can accept state driver’s licenses as identification for official purposes. These standards include (1) adding physical security features to prevent counterfeiting and tampering, (2) including common machine-readable technology on driver’s licenses, and (3) requiring driver’s license applicants to provide evidence of their dates of birth and Social Security numbers. The Intelligence Reform and Terrorism Prevention Act of 2004 required SSA to form a task force to, among other things, establish standards for safeguarding Social Security cards from counterfeiting, tampering, alteration, and theft. In addition to these laws, various legislative proposals address possible ways to make identity and work eligibility documents more secure and to enhance the employment verification process. For example, one recent proposal would mandate that individuals can present only machine-readable, counterfeit- and tamper-resistant Social Security cards to obtain employment. According to the proposal, these machine-readable cards would allow employers to check employees’ work authorization status against information maintained in an employment eligibility database. These laws and proposals differ in the extent to which they address issues related to enhancing employment verification through electronic means, such as the availability and accessibility of machine-readable technology and the security and privacy of information maintained on documents and in databases. 
Various immigration experts have noted that the most important step that could be taken to reduce unlawful migration is the development of a more effective system for verifying work authorization. In particular, the U.S. Commission on Immigration Reform concluded that the most promising option for verifying work authorization was a computerized registry based on employers’ electronic verification of an employee’s Social Security number with records on work authorization for aliens. The Basic Pilot Program, which is currently available on a voluntary basis to all employers in the United States, operates in a similar way to the computerized registry recommended by the commission. Yet only a small portion—about 2,300 in fiscal year 2004—of the approximately 5.6 million employer firms nationwide actively used the pilot program. The Basic Pilot Program assists employers in detecting document fraud by helping to eliminate employer guesswork as to whether information contained on work eligibility documents presented by employees is authentic or counterfeit. If newly hired employees present counterfeit documents containing false information, the pilot program would not confirm the employees’ work eligibility because the employees’ Form I-9 information, such as a false name or Social Security number, would not match SSA and DHS database information when queried through the Basic Pilot Program. In the evaluation of the Basic Pilot Program, the Institute for Survey Research at Temple University and Westat found that the program appeared to reduce unauthorized employment arising from employee presentation of counterfeit or altered documents containing false information. Twenty of the 22 employers we interviewed who participated in the Basic Pilot Program indicated that the program helps them to reliably verify newly hired employees’ work authorization status. 
ICE has no direct role in monitoring employer use of the Basic Pilot Program and does not have direct access to program information, which is maintained by USCIS. ICE officials noted that, in a few cases, they have requested and received pilot program data from USCIS on specific employers who participate in the program and are under ICE investigation. ICE officials told us that program data could indicate cases in which employers do not follow program requirements and therefore would help ICE better target its worksite enforcement efforts toward those employers. For example, the Basic Pilot Program’s confirmation of numerous queries of the same Social Security number could indicate that the Social Security number is being used fraudulently or that an unscrupulous employer is knowingly hiring unauthorized workers by accepting the same Social Security number for multiple employees. However, USCIS officials stated that they have concerns about providing ICE with broader access to Basic Pilot Program information for the worksite enforcement program. USCIS officials said that, if ICE has access to pilot program information for worksite enforcement purposes, that access might create a disincentive for employers to participate in this voluntary program and could be used for purposes other than identifying potentially unscrupulous employers. These officials stated that employers may be less likely to join or participate in the program because the employers may believe that they are more likely to be targeted for a worksite enforcement investigation as a result of program participation. ICE officials suggested that there could be benefits to the agency’s worksite enforcement efforts if employers were required to participate in a mandatory automated verification program like the Basic Pilot Program. ICE officials said that a mandatory automated verification system could help ICE focus worksite enforcement efforts on employers who try to evade using the program. 
They also stated that a mandatory system like the pilot program could limit the ability of employers who knowingly hired unauthorized workers to claim that the workers presented false documents to obtain employment, assisting ICE agents in proving employer violations of IRCA. Officials from 7 of the 12 Special Agent in Charge field offices we interviewed suggested that a mandatory Basic Pilot Program could help them better target their worksite enforcement efforts. Although an automated verification program like the Basic Pilot Program has potential to enhance the employment verification process and help employers detect use of counterfeit documents, the program cannot currently help employers detect identity fraud. In 2002 we reported that, although not specifically or comprehensively quantifiable, the prevalence of identity fraud seemed to be increasing, a development that may affect employers’ ability to reliably verify employment eligibility. If an unauthorized worker presents valid documentation that belongs to another person authorized to work, the Basic Pilot Program may find the worker to be work-authorized. Similarly, if an employee presents counterfeit documentation that contains valid information and appears authentic, the Basic Pilot Program may verify the employee as work-authorized. DHS officials told us that the department is currently considering possible ways to enhance the Basic Pilot Program to help it detect cases of identity fraud, for example, by modifying the program to provide a digitized photograph associated with employment authorization information presented by an employee. Yet, DHS cannot fully assess possible ways to modify the Basic Pilot Program to address identity fraud in the absence of data on the costs and feasibility of implementing such modifications. 
In addition, the Basic Pilot Program does not assist employers in verifying the work authorization status of employees whose status requires reverification and therefore does not help employers detect document or identity fraud in the reverification process. Employers currently may not use the Basic Pilot Program to reverify the employment eligibility of individuals whose work authorization has expired, and employers agree not to use the pilot program for reverification when registering to participate in the program. Therefore, participating employers cannot fully use the Basic Pilot Program to verify the work authorization status of all employees for whom verification, including reverification, is required under the Form I-9 process. According to one USCIS official, the pilot program does not face any technological or other limitations that would prevent the program from being used for reverification purposes, if such use was required or allowed as part of the pilot program. Another current weakness in the Basic Pilot Program that could affect the program’s success if use increased or was made mandatory is delays in the entry of information on immigrants’ and nonimmigrants’ arrivals and employment authorization into DHS databases. Although the majority of pilot program queries entered by participating employers are confirmed via the automated SSA and DHS verification checks, about 15 percent of queries authorized by DHS required manual verification by immigration status verifiers in fiscal year 2004. According to USCIS, immigration status verifiers typically resolve cases referred to them for verification within 24 hours, but a small number of cases take longer. For example, nine employers we interviewed reported that a small number of verifications by immigration status verifiers took longer than 24 hours to resolve, with a few taking as long as 2 weeks. 
Immigration status verifiers reported that the primary reason queries require their review is delays in the entry of employment authorization information into DHS databases. USCIS officials told us that verifications that take longer than a few days to resolve are generally caused by delays in the entry of data on employees who received employment authorization documents generated by a computer and camera that are not directly linked to DHS databases, such as those used at ports of entry for refugees and at USCIS field offices. They said that information on the employment authorization documents generated through this process is electronically sent to USCIS headquarters for entry but is sometimes lost or not entered into databases in a timely manner. By contrast, employment authorization documents issued at USCIS service centers are produced via computers that are used to update data in USCIS databases; USCIS officials told us that these documents represent the majority of employment authorization documents currently issued by USCIS. The Temple University Institute for Survey Research and Westat found that verifications that require immigration status verifiers’ review lengthen the time needed to complete the employment verification process. In addition, among the 22 employers we interviewed, 7 reported that they may experience some losses in work time, training, or money for background checks and physicals when employees contest tentative nonconfirmations. USCIS has taken steps to increase the timeliness and accuracy of information entered into databases used as part of the Basic Pilot Program. In June 2004, USCIS reported that, among other improvements, it had started work to expedite data entry for new lawful permanent residents and arriving nonimmigrants and to improve data entry for changes in work authorization status. 
For example, USCIS said that it has worked to reduce the time in which data are available for Basic Pilot Program verifications by expediting submission of data on newly arrived immigrants and nonimmigrants from ports of entry and field offices to USCIS service centers for data entry. The agency reported that, as a result of its efforts, data on new immigrants are now typically available for verification within 10 to 12 days of an immigrant’s arrival in the United States, while previously the information was not available for up to 6 to 9 months after arrival. Moreover, USCIS reported it has worked to increase the timeliness and availability of temporary work authorization information in its databases by increasing the number of employment authorization documents issued by service centers as compared with the number of documents issued through computers not directly linked to DHS databases. USCIS reported that, while in 1999 less than half of all employment authorization documents were issued by service centers, over three-quarters of the documents are now issued through service centers. USCIS officials told us that the agency has continued these efforts to improve the timeliness and accuracy of information entered into DHS databases and noted that the agency is currently planning to fund another evaluation of the Basic Pilot Program that will include a review of the accuracy of DHS database information. Furthermore, analysis of the Basic Pilot Program database indicates that the timeliness and accuracy of the DHS automated checks against the Basic Pilot Program database have improved. In fiscal year 2004, about 10 percent of all queries were referred to DHS for verification. Among those queries authorized by DHS, the percentage of queries verified by the DHS automated check increased from about 67 percent in fiscal year 2000 to about 85 percent in fiscal year 2004, as shown in figure 2. 
Although USCIS has taken some steps to improve the timeliness and accuracy of information entered into databases used as part of the Basic Pilot Program and plans to review the accuracy of database information as part of its planned evaluation of the pilot program, USCIS cannot effectively assess future use of the pilot program, including possible increased program usage, without information on the costs and feasibility of ways to further reduce delays in the entry of information into DHS databases. Another factor that may reduce the effectiveness of the pilot program if usage is increased or made mandatory is employer noncompliance with Basic Pilot Program requirements. These requirements are intended to safeguard employees queried through the program from such harm as discrimination or reduced training and pay. The Temple University Institute for Survey Research and Westat evaluation of the Basic Pilot Program concluded that the majority of employers surveyed appeared to be in compliance with Basic Pilot Program procedures. However, the evaluation found evidence of some noncompliance with these procedures that specifically prohibit screening job applicants and taking actions that adversely affect employees while they are contesting tentative nonconfirmations, such as limiting employees’ work assignments or pay. For example, 30 percent of the employers surveyed for the evaluation reported restricting work assignments while employees contested tentative nonconfirmations, a practice that is prohibited under the Basic Pilot Program. Of the 22 employers we interviewed who participate in the pilot, 7 reported using the Basic Pilot Program in a way that did not conform with pilot program procedures, including using the pilot program to screen job applicants before offering jobs to the applicants. The Basic Pilot Program provides a variety of reports that may help USCIS determine whether employers follow program requirements. 
For example, these reports could help USCIS identify employers who do not appear to refer employees contesting tentative nonconfirmations to SSA or DHS, which is required under pilot program procedures. USCIS could then follow up to determine if such employers are following pilot procedures that require employers to refer all employees with tentative nonconfirmations to SSA or DHS. USCIS officials told us that efforts to review employers’ use of the pilot program have been limited by lack of staff available to oversee and examine employer use of the program, and they noted that there are currently 15 USCIS headquarters staff members responsible for administering USCIS verification programs, including the Basic Pilot Program. The officials said that, as part of the next evaluation of the pilot program, USCIS plans to assess the extent to which employers follow pilot program requirements and procedures, such as employer adherence to requirements to notify employees of tentative nonconfirmations. However, without information on the costs and feasibility of routinely reviewing employers’ use of the pilot program, USCIS cannot fully determine possible ways to regularly examine employer use of the program and therefore the extent to which employers comply with pilot program requirements. According to USCIS officials, due to the growth in other USCIS verification programs, current USCIS staff may not be able to complete timely verifications if the number of employers using the Basic Pilot Program were to significantly increase. In particular, these officials said that if a significant number of new employers registered for the program or if the program were mandatory for all employers or a segment of employers, additional resources would be needed to maintain timely verifications, given the growth in other verification programs. 
For example, the REAL ID Act of 2005 mandated that states meet minimum standards in issuing driver’s licenses and nondriver identification cards, including verifying the immigration status of all noncitizen applicants, before federal government authorities can accept the licenses and cards for official purposes beginning in 2008. Currently, USCIS has approximately 38 immigration status verifiers available for completing Basic Pilot Program verifications, and these verifiers reported that they are able to complete the majority of current required checks within their target time frame of 24 hours. However, USCIS officials said that because of the growth in other verification programs that would increase the number of verifications requiring checks by immigration status verifiers, the agency has serious concerns about its ability to complete timely verifications if the number of Basic Pilot Program users greatly increased. USCIS officials also stated that the agency lacks funding to further expand the Basic Pilot Program. The Basic Pilot Program and other verification programs have been funded by fees USCIS receives from applicants for adjudication of immigration and citizenship benefits. USCIS allocated about $3.5 million from its fee accounts for all of its verification programs, including the Basic Pilot Program, in fiscal year 2005. USCIS officials said that this allocation included a $500,000 increase for additional employee verifications by employers using the Basic Pilot Program. However, these officials told us that current funding levels allocated for USCIS verification programs would not be sufficient to cover costs associated with mandatory use of the Basic Pilot Program for all employers, should this be adopted. In 2004, we reported that USCIS fees were not sufficient to fully fund the agency’s operations but noted that cost data were insufficient to determine the full extent of the funding shortfall. 
The Temple University Institute for Survey Research and Westat estimated a range of costs associated with expanding the dial-up version of the pilot program, including costs for making the program mandatory for a selected group of employers, such as employers with more than 10 employees, and making the program mandatory for all employers, regardless of the number of employees. The report estimated that a mandatory dial-up version of the pilot program for all employers would cost the federal government, employers, and employees about $11.7 billion total per year, with employers bearing most of the costs. USCIS has worked with participating employers to switch them to the Web-based version of the program and discontinued the dial-up version in June 2005. The Temple University Institute for Survey Research and Westat did not estimate costs for a mandatory Web-based version, although they noted that operating costs associated with such a program would be less than for the dial-up version because employer computer maintenance and telephone costs would be lower. As part of the next evaluation of the pilot program, USCIS plans to assess the costs and potential time frames associated with making the Web-based version mandatory for all employers or specific segments of employers. Given the growth in other USCIS verification programs, USCIS cannot effectively assess potential costs for making the Web-based version of the Basic Pilot Program mandatory without information on other possible resources needed for the program, such as staff needed for conducting manual verifications. The worksite enforcement program is one of various ICE immigration enforcement programs and has been a relatively low priority. 
Since fiscal year 1999, the number of notices of intent to fine issued to employers for violations of IRCA and the number of administrative worksite arrests have declined; according to ICE, these declines are due to various factors, such as the widespread use of counterfeit documents, which makes it difficult for ICE agents to prove employer violations. INS and ICE have also faced difficulties in setting and collecting meaningful fine amounts and in detaining unauthorized workers arrested at worksites. In addition, ICE has not yet developed outcome goals and measures for the worksite enforcement program, making it difficult for ICE and Congress to assess program performance and determine resource levels for the program. Worksite enforcement is one of various immigration enforcement programs formerly managed by INS and now managed by ICE and competes for resources with other program areas, such as alien smuggling and fraud. Among INS and ICE responsibilities, worksite enforcement has been a relatively low priority. For example, in the 1999 INS Interior Enforcement Strategy, the strategy to block and remove employers’ access to undocumented workers was the fifth of five interior enforcement priorities. In that same year, we reported that, relative to other enforcement programs in INS, worksite enforcement received a small portion of INS’s staffing and enforcement budget. We noted that the number of employer investigations INS was able to conduct each year covered only a fraction of the estimated number of employers who may have hired unauthorized aliens. 
In keeping with the primary mission of DHS to combat terrorism, after September 11, 2001, INS and then ICE focused investigative resources primarily on national security cases, such as investigations of aliens in the United States who may have overstayed their authorized time periods for being in the country and the National Security Entry and Exit Registration System; on participation in Joint Terrorism Task Forces; and on critical infrastructure protection. In particular, INS and then ICE focused available resources for worksite enforcement mainly on identifying and removing unauthorized workers from critical infrastructure sites, such as airports and nuclear power plants, to help reduce vulnerabilities at those sites. In 2004, we reported that, if critical infrastructure-related businesses were to be compromised by terrorists, this would pose a serious threat to domestic security. In 2003, we testified that, given ICE’s limited resources, it needs to ensure that it targets those industries where employment of illegal aliens poses the greatest potential risk to national security. According to ICE officials, the agency adopted this focus on critical infrastructure protection because the fact that unauthorized workers can obtain employment at critical infrastructure sites indicates that there are vulnerabilities in those sites’ hiring and screening practices, and unauthorized workers employed at those sites are vulnerable to exploitation by terrorists, smugglers, traffickers, or other criminals. Consistent with these priorities, in 2003 ICE headquarters issued a memo requiring field offices to request approval from ICE headquarters prior to opening any worksite enforcement investigation not related to the protection of critical infrastructure sites, such as investigations of farms and restaurants. 
ICE officials told us that the purpose of this memo was to help ensure that field offices focused worksite enforcement efforts on critical infrastructure protection operations. Field office representatives told us that noncritical infrastructure worksite enforcement was one of the few investigative areas for which offices had to request approval from ICE headquarters to open an investigation. According to ICE, the agency recently issued a memo delegating authority to approve noncritical infrastructure worksite enforcement cases to field offices’ Special Agents in Charge. Eight of the 12 offices we interviewed told us that worksite enforcement was not an office priority unless the worksite enforcement case related to critical infrastructure protection. As of March 2005, ICE had inspected Forms I-9 and employer records at hundreds of critical infrastructure sites. For example, as part of Operation Tarmac, ICE had conducted investigations at nearly 200 airports nationwide and, as part of Operation Glow Worm, at more than 50 nuclear power plants. Between October 2004 and the beginning of May 2005, about 77 percent of the worksite enforcement cases opened by ICE were related to critical infrastructure protection. Since fiscal year 1999, INS and ICE have dedicated a relatively small portion of overall agent resources to the worksite enforcement program. As shown in figure 3, in fiscal year 1999, INS allocated about 240 full-time equivalents to worksite enforcement efforts, while in fiscal year 2003, ICE allocated about 90 full-time equivalents. Between fiscal years 1999 and 2003, the percentage of agent work-years spent on worksite enforcement efforts generally decreased from about 9 percent to about 4 percent. 
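The scale of the staffing decline described above can be checked with simple arithmetic on the reported figures (a sketch for illustration only; the FTE counts are the report's approximate numbers from figure 3):

```python
# Approximate full-time equivalents (FTEs) that INS and ICE allocated to
# worksite enforcement, as reported in figure 3.
fte_fy1999 = 240
fte_fy2003 = 90

# Percent decline in worksite enforcement FTEs between fiscal years
# 1999 and 2003.
decline = (fte_fy1999 - fte_fy2003) / fte_fy1999 * 100
print(f"FTE decline: about {decline:.1f} percent")  # about 62.5 percent
```

By the same method, the reported drop in worksite enforcement's share of agent work-years, from about 9 percent to about 4 percent, represents more than a halving of the program's share of agent resources.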
Although worksite enforcement may remain a low priority relative to other programs, ICE has proposed increasing agent resources for the worksite enforcement program by adding staff to its headquarters’ worksite enforcement unit, which consisted of three staff members as of July 2005, and hiring additional worksite enforcement staff for field offices. In particular, ICE plans to use the $5 million provided for fiscal year 2005 by a congressional conference report for the worksite enforcement program to fund additional headquarters positions for the worksite enforcement unit. In its fiscal year 2006 budget submission, ICE requested funding for 117 compliance officers, 20 additional investigative agents, and 6 additional program managers for worksite enforcement. ICE has proposed hiring these compliance officers to conduct the administrative elements of worksite enforcement cases, such as the inspection of Forms I-9 and other employment records. ICE officials said that these officers would pass cases involving potential criminal violations to investigative agents for review. ICE officials told us that the agency would use the compliance officers only for worksite enforcement efforts. According to ICE, compliance enforcement officers are less costly than investigative agents. ICE estimates that each investigative agent would cost the agency approximately $167,000 to $176,000 in fiscal year 2006, while one compliance enforcement officer would cost about $76,000. At this point, it is unclear what impact, if any, these additional resources would have on worksite enforcement efforts. The number of notices of intent to fine issued to employers as well as the number of unauthorized workers arrested at worksites have generally declined. Between fiscal years 1999 and 2004, the number of notices of intent to fine issued to employers for improperly completing Forms I-9 or knowingly hiring unauthorized workers generally decreased from 417 to 3. (See figure 4.) 
The number of unauthorized workers arrested during worksite enforcement operations has also declined since fiscal year 1999. As shown in figure 5, the number of administrative worksite arrests declined by about 84 percent, from 2,849 in fiscal year 1999 to 445 in fiscal year 2003. According to ICE records, worksite enforcement criminal arrests totaled 159 in fiscal year 2004 and 81 in the period from October 2004 through April 2005. ICE attributes the decline in the number of notices of intent to fine issued to employers and the number of administrative worksite arrests to various factors, including the widespread availability and use of counterfeit documents and the allocation of resources to other priorities. Various studies have shown that the availability and use of fraudulent documents have made it difficult for ICE agents to prove that employers knowingly hire unauthorized workers. For example, in previous work we reported that the prevalence of document fraud made it difficult for INS to prove that an employer knowingly hired an unauthorized alien. In 1996, the Department of Justice Office of the Inspector General reported that the proliferation of cheap fraudulent documents made it possible for unscrupulous employers to avoid being held accountable for hiring illegal aliens. ICE officials told us that employers whom agents suspect of knowingly hiring unauthorized workers can claim that they were unaware that their workers presented false documents at the time of hire, making it difficult for agents to prove that the employer willfully violated IRCA. In commenting on a draft of this report, ICE also noted that the IIRIRA provision that limited employer liability for certain Form I-9 paperwork violations affects ICE’s ability to substantiate employer charges for knowingly hiring unauthorized workers and, therefore, the number of notices of intent to fine that ICE issues. 
This provision took effect in 1996, so it is unclear what effect, if any, it had on the decline in the number of notices of intent to fine issued between fiscal years 1999 and 2004. In addition, according to ICE, the allocation of INS and ICE resources to other priorities has contributed to the decline in the numbers of notices of intent to fine and worksite arrests. For example, INS focused its worksite enforcement resources on egregious employer violators who were linked to other criminal violations like smuggling, fraud, or worker exploitation, and de-emphasized administrative employer cases and fines. Furthermore, INS investigative resources were redirected from worksite enforcement activities to criminal alien cases, which by the late 1990s consumed more investigative hours than any other enforcement activity. After September 11, 2001, INS and ICE focused investigative resources on national security cases and, in particular, focused worksite enforcement efforts on critical infrastructure protection, consistent with DHS’s primary mission to combat terrorism. According to ICE, the redirection of resources from other enforcement programs to perform national security-related investigations resulted in fewer resources for traditional program areas like fraud and noncritical infrastructure worksite enforcement. Additionally, some ICE field representatives, as well as immigration experts we interviewed, noted that the focus on critical infrastructure protection does not address the majority of worksites in industries that have traditionally provided the magnet of jobs attracting illegal aliens to the United States. INS and ICE have faced difficulties in setting and collecting final fine amounts that meaningfully deter employers from knowingly hiring unauthorized workers and in detaining unauthorized workers arrested at worksites. ICE officials told us that because fine amounts are so low, the fines do not provide a meaningful deterrent. 
These officials also said that when agents could prove that an employer knowingly hired an unauthorized worker and issued a notice of intent to fine, the fine amounts agents recommended were often negotiated down during discussions between agency attorneys and employers. Some ICE officials believe that mitigated fine amounts are so low that employers view the fines as a cost of doing business, making the fines an ineffective deterrent for employers who attempt to circumvent IRCA. ICE officials at 11 of the 12 field offices where we interviewed staff said that they had experienced instances in which fine amounts were mitigated. According to ICE, the agency mitigates employer fine amounts because doing so may be a more efficient use of government resources than pursuing employers who contest or ignore fines, which could be more costly to the government than the fine amount sought. Recently, ICE settled a worksite enforcement case with a large company without going through the administrative fine process. As part of the settlement, the company agreed to pay $11 million and company contractors agreed to pay $4 million in forfeitures—more than any administrative fine amount ever issued against an employer for IRCA violations, according to ICE. One ICE official said that use of such civil settlements instead of pursuit of administrative fines, specifically in regard to investigations of noncritical infrastructure employers, could be a more efficient use of investigative resources. ICE officials also said that use of civil settlements could help ensure employers’ future compliance by including in the settlements a requirement to enter into compliance agreements, such as participation in the Basic Pilot Program. ICE recently employed this strategy in its $15 million settlement with the large company. As part of the settlement, the company agreed to enter into a compliance program with ICE. 
Other compliance agreements with employers could involve mandatory participation in the Basic Pilot Program. Additionally, ICE officials said that the agency has proposed working with employers who are not the subjects of worksite enforcement investigations to help them ensure compliance with IRCA through enhanced education and partnerships. In April 2005, ICE issued its interim strategic plan, which, as part of its objective on identifying critical industries for worksite enforcement operations, included an approach for partnering with businesses to help them comply with IRCA. This partnership program, called the ICE Mutual Agreement between Government and Employers, is intended to provide employers with training and best practices for complying with IRCA. In addition to implementing this partnership program, ICE plans to promote expanded use of the Basic Pilot Program to help encourage employers in critical industries to strengthen their ability to verify employees’ work eligibility. Civil settlements with employers and joint compliance programs are in the early stages of implementation; therefore, the extent to which they may address the difficulties ICE has faced in setting fine amounts that provide a meaningful deterrent is not yet known. The former INS also faced difficulties in collecting total fine amounts from employers, but collection efforts have improved. We previously reported that the former INS faced difficulties in collecting total fine amounts from employers for a number of reasons, including that employers went out of business, moved, or declared bankruptcy. In 1996, the Department of Justice Office of the Inspector General reported that the deterrent effect of civil fines on sweatshop operators was adversely affected by collection difficulties and noted that INS had no national system for billing, tracking, and collecting employer fines. 
In 1998, INS created the Debt Management Center to centralize the collections process, and the center is now responsible for collecting fines ICE issued against employers for violations of IRCA and providing other collection services for ICE and USCIS. The ICE Debt Management Center has collected total amounts on most of the invoices issued to employers for final fine amounts between fiscal years 1999 and 2004—about 94 percent as of the end of June 2005. In addition, ICE’s Office of Detention and Removal has limited detention space, and unauthorized workers detained during worksite enforcement investigations are a low priority for that space. In 2004, the Under Secretary for Border and Transportation Security sent a memo to the Commissioner of U.S. Customs and Border Protection and the Assistant Secretary for ICE outlining the priorities for the detention of aliens. According to this memo, aliens who are subjects of national security investigations were among those groups of aliens given the highest priority for detention, while aliens arrested as a result of worksite enforcement investigations were among those groups given the lowest priority. Officials in 8 of the 12 field offices we interviewed told us that lack of sufficient detention space has limited the effectiveness of worksite enforcement efforts. For example, ICE officials stated that if investigative agents arrest unauthorized aliens at worksites, the aliens would likely be released because the Office of Detention and Removal detention centers do not have sufficient space to house them. Field office representatives said that offices can expend a large amount of resources to arrest unauthorized aliens at worksites and that these aliens would likely be released and may re-enter the workforce, in some cases returning to the worksites from which they were originally arrested. As a result, the use of resources to arrest unauthorized aliens at worksites may be unproductive. 
A congressional conference report for fiscal year 2005 provided funds to the Office of Detention and Removal for an additional 1,950 bed spaces. Given competing priorities for detention space, the effect, if any, that these additional bed spaces will have on the priority ICE gives to workers detained as a result of worksite enforcement operations cannot currently be determined. Given ICE’s limited resources and competing priorities for those resources, ICE’s lack of performance goals and measures for the worksite enforcement program may hinder the agency’s ability to effectively determine and allocate resources for the program. Performance goals and measures are intended to provide Congress and agency management with the information to systematically assess a program’s strengths, weaknesses, and performance. A performance goal is the target level of performance—either output or outcome—expressed as a tangible, measurable objective against which actual achievement will be compared. A performance measure can be defined as an indicator, statistic, or metric used to gauge program performance and may typically include outputs and outcomes. Outputs provide status information about an initiative or program in terms of completing an action in a specified time frame. Outcomes show the results of an initiative or program in terms of its effectiveness, efficiency, or impact. Outputs should support or lead to outcomes and, for each outcome goal, there are typically several output goals. Outputs and outcomes together help agencies determine and report on products or services provided through a program and the results of those products or services. ICE lacks output goals and measures necessary to inform its resource allocation decisions. Output goals and measures are an essential management tool in managing programs for results. 
They help provide the information that agencies need to aid in determining resources for a program and whether they are using program resources efficiently and effectively. ICE officials told us that the agency does not plan to focus on developing and using output goals and measures for worksite enforcement, such as the number of cases initiated or number of worksite arrests made, because they believe that such goals and measures do not adequately indicate ICE’s level of effort for worksite enforcement. Therefore, the ICE officials said that ICE plans to focus on developing outcome goals and measures for the program that better reflect the program’s effect. Yet in its fiscal year 2006 budget request, ICE identified two output measures for its worksite enforcement program: a 20 percent increase in the number of administrative worksite case completions and criminal employer case presentations made to the U.S. Attorney’s Office in fiscal year 2007 and a 30 percent increase in these two indicators in fiscal year 2008. Although these two measures would provide a general indication of ICE’s level of worksite enforcement activity, they alone would not allow ICE or Congress to effectively determine resources needed for the worksite enforcement program because they address only two elements of the program and do not address other program elements, such as critical infrastructure protection. Furthermore, in July 2005 the Secretary of Homeland Security discussed the need for DHS, of which ICE is a part, to be an effective steward of its resources. Without additional output goals and measures for worksite enforcement, ICE may be hindered in determining and allocating the resources needed to meet program goals, especially given other agency priorities for resources, and in fully assessing whether the agency is using those resources efficiently and effectively in implementing the program. 
In addition, ICE’s lack of outcome goals and measures may hinder its ability to effectively assess the results of its worksite enforcement program efforts, including critical infrastructure protection efforts. Outcome measures provide agencies with an assessment of the results of a program activity or policy compared to its intended purposes. ICE officials told us that the agency plans to develop outcome goals and measures for its worksite enforcement program but has not yet done so. As a first step, ICE officials told us that field offices conducted baseline threat-level assessments in August and September 2004 to help identify regional risks, such as risks to critical infrastructure sites. These officials stated that an action plan will be developed to address these risks. Field office agents will then measure how well a particular threat has been addressed by measuring the impact of ICE’s investigative activities on deterring threats or decreasing vulnerabilities to national security. ICE has not yet established target time frames for developing worksite enforcement program outcome goals and measures and, without these goals and measures, ICE may not be able to effectively assess the results of program efforts. For example, until ICE fully develops outcome goals and measures, it may not be able to completely determine the extent to which its critical infrastructure protection efforts have resulted in the elimination of unauthorized workers’ access to secure areas of critical infrastructure sites, one possible goal that ICE may use for its worksite enforcement program. Efforts to reduce the employment of unauthorized workers in the United States necessitate a strong employment eligibility verification process and a credible worksite enforcement program to help ensure that employers are meeting verification requirements. 
The current Form I-9 employment verification process has not fundamentally changed since its establishment in 1986, and ongoing weaknesses in the process have undermined its effectiveness. Although DHS and the former INS have been assessing changes in the process since 1997, DHS has not yet issued final regulations on these changes, and it has not established a definitive time frame for completing the assessment. Completion of this assessment and issuance of final regulations should strengthen the current employment verification process and make it simpler and more secure. Furthermore, the Basic Pilot Program, or a similar automated verification system, if implemented on a much larger scale, shows promise for enhancing the employment verification process and reducing document fraud. However, current weaknesses in pilot program implementation would have to be fully addressed to help ensure the efficient and effective operation of an expanded or mandatory pilot program, or a similar automated employment verification program, and the cost of additional resources would be a consideration. Although USCIS plans to review current pilot program weaknesses, additional information on the costs and feasibility of addressing these weaknesses is needed to assist USCIS and Congress in assessing possible future use of the Basic Pilot Program, including increased program usage. Even with a strengthened employment verification process, a credible worksite enforcement program is needed because no verification process is foolproof and not all employers may want to comply with the law. ICE’s focus on critical infrastructure protection since September 11, 2001, is consistent with the DHS mission to combat terrorism by detecting and mitigating vulnerabilities to terrorist attacks at critical infrastructure sites that, if exploited, could pose serious threats to domestic security. 
This focus on critical infrastructure protection, though, generally does not address noncritical infrastructure employers’ noncompliance with IRCA. As a result, employers who attempt to circumvent IRCA, particularly those not located at or near critical infrastructure sites, face a reduced likelihood that ICE will investigate them for failing to comply with the current employment verification process or knowingly hiring unauthorized workers. ICE is taking some steps to address difficulties it has faced in its worksite enforcement efforts, but it is too early to tell whether these steps will improve the effectiveness of the worksite enforcement program. In addition, given ICE’s limited resources and competing priorities for those resources, additional output goals and measures are needed to help ICE track the progress of its worksite enforcement efforts, effectively determine the resources needed to meet worksite enforcement program goals, and ensure that program resources are used efficiently and effectively. Moreover, a target time frame for developing outcome goals and measures is needed to assist Congress and ICE in determining whether the worksite enforcement program, including critical infrastructure protection, is achieving its desired outcomes. To strengthen the current employment verification process, we recommend that the Secretary of Homeland Security take the following action: set a specific time frame for completing the department’s review of the Form I-9 process, including an assessment of the possibility of reducing the number of acceptable work eligibility documents, and issuing final regulations on changes to the Form I-9 process and an updated Form I-9. 
To assist Congress and USCIS in assessing the possibility of increased or mandatory use of the Basic Pilot Program, we recommend that the Secretary of Homeland Security direct the Director of USCIS to take the following action: include, in the planned evaluation of the Basic Pilot Program, an assessment of (1) the feasibility and costs of addressing the Basic Pilot Program’s current weaknesses, including its inability to detect identity fraud in the verification and reverification processes, delays in entry of new arrival and employment authorization information into DHS databases, and employer noncompliance with program procedures, and (2) the resources needed to support any increased or mandatory use of the program. To assist Congress and ICE in determining the resources needed for the worksite enforcement program and to help ensure the efficient and effective use of program resources, we recommend that the Secretary of Homeland Security direct the Assistant Secretary for ICE to take the following two actions: (1) establish additional output goals and measures for the worksite enforcement program to clearly indicate the target level of ICE worksite enforcement activity and the resources needed to implement the program, and (2) set a specific time frame for completing the assessment and development of outcome goals and measures for the worksite enforcement program to provide a target level of performance for worksite enforcement efforts and measures to assess the extent to which program results have met program goals. We requested comments on this report from the Secretary of Homeland Security. In its response, DHS agreed with our recommendations. DHS’s comments are reprinted in appendix V. DHS also provided technical comments, which we considered and incorporated where appropriate. We also received technical comments from SSA, which we considered and incorporated where appropriate. 
As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after the date of this report. At that time, we will send copies of this report to the Secretary of Homeland Security, the Secretary of Labor, the Attorney General, the Commissioner of the Social Security Administration, the Director of the Office of Management and Budget, and appropriate congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI.

Appendix I: Employment Eligibility Verification Form (Form I-9)

To determine how the employment eligibility verification (Form I-9) process functions, we examined laws related to the employment verification process, including the Immigration Reform and Control Act of 1986 and the Illegal Immigration Reform and Immigrant Responsibility Act of 1996; federal regulations on the Form I-9 process; and former U.S. Immigration and Naturalization Service (INS) guidance on the Form I-9, such as the Handbook for Employers, which provides instructions for completing the form. We evaluated this information to identify the Form I-9 requirements, including employer and employee responsibilities for completing the form, and challenges to meeting those requirements. We examined our past reports and other studies, such as the 1997 U.S. Commission on Immigration Reform Report to Congress, to obtain further information on the employment verification process. 
We analyzed former INS plans for addressing Form I-9 challenges, including its plans to modify the list of acceptable work eligibility documents. We also examined U.S. Immigration and Customs Enforcement’s (ICE) interim guidelines on the electronic Forms I-9 to determine what guidance, if any, they provide to employers using the electronic form. To determine challenges to the Form I-9 process and obtain information on the Basic Pilot Program, we also interviewed and obtained information from U.S. Citizenship and Immigration Services (USCIS), ICE, and Social Security Administration (SSA) officials. In addition, we interviewed representatives of 23 employers; 12 employer, employee, and advocacy groups; and 6 immigration experts to obtain their views on employment verification and worksite enforcement. We selected the employers to interview based on a mix of the following criteria: the total number of Basic Pilot Program queries; the total number or percentage of pilot program queries that resulted in authorized employment, tentative nonconfirmations, and final nonconfirmations; geographic proximity to the ICE field offices we visited; previous records of being sanctioned for Form I-9 violations; and industry categorization. The 23 employers we interviewed were located in the following states: California, Illinois, Michigan, New Jersey, New York, and Texas. The 23 employers were also part of the following industries: meat processing, transportation, health care, landscaping, manufacturing, accommodation, food services, agriculture, janitorial and maintenance, temporary employment, critical infrastructure, local government, and newspaper. One of the employers we interviewed did not participate in the Basic Pilot Program. As a result, when we discuss employers’ views on the Basic Pilot Program, we refer to the views of the 22 employers we interviewed who participated in the Basic Pilot Program. 
We selected the 9 employer and employee associations with which to meet based on a mix of criteria, including industry categorization, gross output by industry in 2002, number of paid employees by industry in 2002, and estimates of the number of illegal immigrants employed by industry. We interviewed officials from employer and employee associations in the following industries: construction, agriculture, accommodation, food services, retail, health care, and meat. We selected the 3 advocacy groups to interview based on the groups’ interest in issues related to employment verification and worksite enforcement efforts and interviewed officials from advocacy groups that represent a range of views on these issues. We selected the 6 immigration experts to interview based on the experts’ range of views on immigration issues. We analyzed information from these agencies, employers, groups, and experts to determine their views on the Form I-9 process and difficulties in verifying work eligibility through the process. We used information obtained from employers, employer and employee associations, and advocacy groups only as anecdotal examples, as information from these entities cannot be generalized to all employers and groups in the United States. Furthermore, we evaluated information from USCIS and SSA on the Basic Pilot Program, including the Basic Pilot Program user’s manual and memorandum of understanding for employers, to determine how the pilot program functions and how it might assist participating employers in reliably verifying employees’ work eligibility and in detecting counterfeit documents. We analyzed this information to determine ongoing challenges in implementing the Basic Pilot Program and ways these challenges could affect increased or mandatory use of the pilot program. We did not evaluate security measures in place for the Basic Pilot Program or the program’s vulnerability to security risks. 
To identify pilot program challenges, we examined the findings and methodology of the evaluation of the Basic Pilot Program completed by the Institute for Survey Research at Temple University and Westat in June 2002. In addition, we analyzed data on employer participation in and use of the Basic Pilot Program, including data on Basic Pilot Program employment authorizations, to determine how participation and use have changed since fiscal year 2000. We assessed the reliability of these data by reviewing them for accuracy and completeness, interviewing agency officials knowledgeable about the data, and examining documentation on how the data are entered, categorized, and verified in the databases. We determined that the independent evaluation and these data were sufficiently reliable for the purposes of our review. To obtain information on the implementation of the worksite enforcement program, we interviewed officials from ICE, the SSA Office of the Inspector General, the Department of Labor, the Federal Bureau of Investigation, and the Office of Special Counsel for Immigration-Related Unfair Employment Practices. We also interviewed officials from 12 of the 26 ICE Special Agent in Charge field offices. We met with officials from the following 8 field offices: Los Angeles and San Diego, California; Chicago, Illinois; Detroit, Michigan; Newark, New Jersey; New York City, New York; and Houston and San Antonio, Texas. We spoke with officials from the following 4 field offices over the telephone: Denver, Colorado; Miami, Florida; Buffalo, New York; and Seattle, Washington.
We selected the 12 field offices based on a mix of the following criteria: the number of investigators in each field office in fiscal year 2003, the number of investigations conducted by each field office in fiscal year 2003, the estimated number of undocumented immigrants in the state in which each field office was located, the number of sanctions issued to employers as a result of closed cases located in the same city as the field office between calendar years 1986 and 2000, the number of critical infrastructure operations in which the field office participated from October 2001 through April 2004, the number of employers located in the same city as the field office that participated in the Basic Pilot Program, and geographic area. We also interviewed officials from 4 U.S. Attorney’s Offices that were located in the same areas as 4 of the field offices we visited. We met with officials from the following 3 U.S. Attorney’s Offices: the Southern District of New York U.S. Attorney’s Office; the Southern District of Texas U.S. Attorney’s Office; and the Western District of Texas U.S. Attorney’s Office. We spoke with the Southern District of California U.S. Attorney’s Office over the telephone. We used information obtained from the field offices only as anecdotal examples, as information from these entities cannot be generalized to all field offices in the United States. We analyzed ICE headquarters and field office guidance, memos, and other documents on worksite enforcement to evaluate ICE’s priorities for and management of worksite enforcement efforts and to identify any challenges in program implementation. We analyzed ICE’s April 2005 Interim Strategic Plan to determine ICE’s strategy for its worksite enforcement program. We also examined former INS guidance and strategies and other studies, such as reports from the Department of Justice Office of the Inspector General, to determine how worksite enforcement priorities, implementation, and challenges have evolved. 
In addition, we separately analyzed ICE and INS data on the worksite enforcement program and assessed their validity and reliability by reviewing them for accuracy and completeness, interviewing agency officials knowledgeable about the data, and examining documentation on how the data are entered, categorized, and verified in the databases. We determined that the data from each agency were sufficiently reliable for the purposes of our review. However, we could not compare the INS and ICE data because, following the creation of ICE in March 2003, the case management system used to enter and maintain information on immigration investigations changed. With the establishment of ICE, agents began using the legacy U.S. Customs Service’s case management system, called the Treasury Enforcement Communications System, for entering and maintaining information on investigations, including worksite enforcement operations. Prior to the creation of ICE, the former INS entered and maintained information on investigative activities in the Performance Analysis System, which captured information on immigration investigations differently than the Treasury Enforcement Communications System. Additionally, ICE officials indicated that, in a few cases, the INS and ICE data did not completely account for all worksite enforcement operations results. ICE officials told us that agents use judgment in categorizing cases entered into both systems and there are a limited number of instances in which agents did not appropriately categorize cases. For example, ICE officials told us that, in reviewing worksite enforcement cases in the ICE system for fiscal year 2004, they found a few cases that agents inappropriately categorized as worksite enforcement. 
To determine the investigative agent work-years, or full-time equivalents, that INS spent on the worksite enforcement program for each fiscal year from 1999 through 2003, we divided the total hours INS reported spending on employer investigations by the total hours spent on all investigations, including agent hours spent on leave, training, and other administrative and noninvestigative work. We then multiplied this result by 2,080 hours, which constitute one work-year, to determine the number of work-years spent on worksite enforcement.

We conducted our work from September 2004 through July 2005 in accordance with generally accepted government auditing standards.

In October 2004, Congress authorized the electronic Form I-9 to be implemented by the end of April 2005. ICE has provided interim guidelines for using electronic Forms I-9 until the agency issues final regulations on their use. The interim guidelines specify that employers will have options for completing, signing, storing, and presenting electronic Forms I-9 for inspection. For example, the guidelines note that employers may choose to complete Forms I-9 on paper and store the forms electronically, or they may choose to both electronically complete and store Forms I-9. The guidelines also state that electronic signatures could be generated through various technologies such as electronic signature pads, personal identification numbers, biometrics, and dialog boxes. The guidelines also state that employers could use electronic storage systems to retain Forms I-9 that include quality assurance steps to prevent and detect the unauthorized creation, addition, alteration, deletion, or deterioration of electronically stored data. In addition, employers may consider an electronic storage system that includes an indexing system and the ability to reproduce legible and readable hard copies of electronically stored forms.

Employer participation in and use of the Basic Pilot Program has generally increased.
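The work-year (full-time-equivalent) computation described in the methodology above can be sketched as follows. All figures are hypothetical, since the report does not reproduce INS's underlying hour totals, and the final scaling step reflects one reading of the description: the employer-investigation share of total hours is applied to total investigator work-years, with one work-year defined as 2,080 hours.

```python
# Sketch of the work-year (full-time-equivalent) calculation.
# All figures below are hypothetical; the report does not publish the totals.
employer_investigation_hours = 185_000   # hours reported on employer (worksite) cases
total_investigation_hours = 4_100_000    # all investigative hours, incl. leave, training, admin

HOURS_PER_WORK_YEAR = 2_080              # one work-year, as defined in the report

# Share of total investigative effort devoted to worksite enforcement.
worksite_share = employer_investigation_hours / total_investigation_hours

# Scale that share by total investigator work-years to get worksite work-years.
total_work_years = total_investigation_hours / HOURS_PER_WORK_YEAR
worksite_work_years = worksite_share * total_work_years

# Algebraically this reduces to employer hours divided by 2,080.
assert abs(worksite_work_years - employer_investigation_hours / HOURS_PER_WORK_YEAR) < 1e-9

print(round(worksite_work_years, 1))  # prints 88.9 for these hypothetical inputs
```

Under this reading, the result is equivalent to dividing the program's reported hours directly by 2,080 hours per work-year.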
Between fiscal years 2002 and 2004, the number of employers actively using the Basic Pilot Program increased from 1,205 to 2,305. In addition, as shown in figure 6, the number of total queries processed through the Basic Pilot Program has generally increased since fiscal year 2000. As shown in figure 7, the majority of Basic Pilot Program queries that resulted in employment authorizations for each fiscal year from 2000 through 2004 were issued by SSA.

In addition to the contact named above, Orlando Copeland, Michele Fejfar, Ann H. Finley, Rebecca Gambler, Kathryn Godfrey, Charles Michael Johnson, Eden C. Savino, and Robert E. White made key contributions to this report.
The opportunity for employment is one of the most important magnets attracting illegal immigrants to the United States. Immigration experts state that strategies to deter illegal immigration require both a reliable employment eligibility verification process and a worksite enforcement capacity to ensure that employers comply with immigration-related employment laws. This report examines (1) the current employment verification (Form I-9) process and challenges, if any, facing verification; and (2) the priorities and resources of U.S. Immigration and Customs Enforcement's (ICE) worksite enforcement program and any challenges in implementing the program. The current employment verification process is based on employers' review of documents presented by new employees to prove their identity and work eligibility. On the Form I-9, employers certify that they have reviewed employees' documents and that the documents appear genuine and relate to the individual presenting them. However, various studies have shown that document fraud (use of counterfeit documents) and identity fraud (fraudulent use of valid documents or information belonging to others) have made it difficult for employers who want to comply with the employment verification process to hire only authorized workers and easier for unscrupulous employers to knowingly hire unauthorized workers. The large number and variety of documents acceptable for proving work eligibility have also hindered verification efforts. In 1997, the former Immigration and Naturalization Service (INS), now part of the Department of Homeland Security (DHS), issued an interim rule on a reduction in the number of acceptable work eligibility documents and, in 1998, proposed a further reduction, but this proposal has not yet been finalized. DHS is currently reviewing the list of acceptable work eligibility documents, but has not established a target time frame for completing this review. 
The Basic Pilot Program, a voluntary program through which participating employers electronically verify employees' work eligibility, has potential to help enhance the verification process and substantially reduce document fraud. Yet, current weaknesses in the program, such as the inability of the program to detect identity fraud, DHS delays in entering data into its databases, and some employer noncompliance with pilot program requirements could, if not addressed, have a significant impact on the program's success. Furthermore, U.S. Citizenship and Immigration Services officials stated that the current Basic Pilot Program may not be able to complete timely verifications if the number of employers using the program significantly increased. Worksite enforcement is one of various immigration enforcement programs that compete for resources and, under the former INS and now under ICE, worksite enforcement has been a relatively low priority. Consistent with DHS's mission to combat terrorism, after September 11, 2001, INS and then ICE focused worksite enforcement resources mainly on removing unauthorized workers from critical infrastructure sites to help address those sites' vulnerabilities. Since fiscal year 1999, the numbers of employer notices of intent to fine and administrative worksite arrests have generally declined, according to ICE, due to various factors such as document fraud, which makes it difficult to prove employer violations. ICE has not yet developed outcome goals and measures for its worksite enforcement program, which, given limited resources and competing priorities for those resources, may hinder ICE's efforts to determine resources needed for the program.
The Department of Defense (DOD) and Congress have become increasingly concerned that U.S. and allied troops abroad may be attacked by chemical, biological, or nuclear weapons delivered by ballistic missiles. Operation Desert Storm demonstrated that the U.S. military and other allied forces have limited capability against theater ballistic missiles. In fact, U.S. defensive capability is limited to weapons, such as the Patriot, that defend against missiles nearing the end of their flight. Consequently, developing weapon systems to defeat these threats is DOD's top priority in its overall ballistic missile defense program. DOD has been working with laser technology for a long time. The following table shows some of the laser development efforts that DOD has undertaken. To date, none of these efforts has resulted in an operational laser weapon system.

Currently, DOD is developing a variety of weapon systems as part of its Theater Missile Defense program to counter the potential threats posed by ballistic missiles. The first generation of these weapon systems uses interceptor missiles to destroy enemy missiles in the latter stages of the missiles' flight. Included among these systems are the Patriot Advanced Capability-3, an improved version of the Patriot system that was used during the Gulf War; Navy Area Defense; the Medium Extended Air Defense System; Theater High Altitude Area Defense; and Navy Theater Wide. In addition, DOD is developing ballistic missile defense systems that will use laser beams to destroy enemy missiles. DOD plans to spend billions of dollars to develop these laser weapons and place them in the air (Airborne Laser) and in space (Space-Based Laser). In addition, DOD is developing a ground-based laser (Tactical High-Energy Laser) that is to be used to destroy short-range artillery rockets. Congress has generally endorsed DOD's efforts to develop and produce these laser weapon systems.
Its desire to have these systems developed, produced, and deployed as soon as possible was heightened by a July 1998 report by the Commission to Assess the Ballistic Missile Threat to the United States. The Commission concluded, among other things, that concerted efforts by a number of overtly or potentially hostile nations to acquire ballistic missiles with biological or nuclear warheads pose a growing threat to the United States, its deployed forces, and its friends and allies. While endorsing DOD's efforts to develop laser weapon systems to defeat ballistic missiles, and in some instances suggesting that those efforts be accelerated, Congress has also expressed concern over the cost and risk associated with developing and demonstrating the maturity of the technologies required to develop such missile defense capabilities. The Ranking Minority Member, Committee on the Budget, and the Ranking Minority Member, Subcommittee on Military Research and Development, Committee on Armed Services, House of Representatives, asked us to review DOD's programs to develop laser weapons for missile defense to (1) identify what laser weapons are being considered for missile defense and the coordination among the program offices developing the systems, (2) determine the current status and cost of each system, and (3) identify the technical challenges each system faces as determined by DOD program managers and analysts and other laser system experts. To identify the laser weapons being considered for missile defense and what coordination exists among the programs developing the systems, we reviewed DOD budget documents and Airborne Laser (ABL), Space-Based Laser (SBL), and Tactical High-Energy Laser (THEL) program office documents. We also met with officials of the Office of the Secretary of Defense; the Ballistic Missile Defense Organization; the ABL program office; the Air Force Space and Missile Systems Center; and the Army Space and Missile Defense Command.
To determine the current status and cost of each system, we reviewed and analyzed DOD; Air Force; Army; ABL, SBL, and THEL program offices; and contractor documents regarding the status and cost of the DOD laser weapon programs. We discussed the laser programs with officials of the Ballistic Missile Defense Organization; the ABL program office; the Air Force Space and Missile Systems Center; the Army Space and Missile Defense Command; TRW, Inc.; and Lockheed Martin Corporation. To determine the technical challenges each system faces, we reviewed and analyzed documents and studies from DOD; Air Force; Army; ABL, SBL, and THEL program offices; and contractors. We discussed the technical aspects of the laser programs with officials of the Office of the Secretary of Defense (Operational Test and Evaluation); the Ballistic Missile Defense Organization; the Air Force Air Combat Command; the Air Force Operational Test and Evaluation Center; the ABL program office; the Air Force Scientific Advisory Board; the Air Force Space and Missile Systems Center; the Army Space and Missile Defense Command; TRW, Inc.; Lockheed Martin Corporation; and Lawrence Livermore National Laboratory. We conducted our review from November 1997 to December 1998 in accordance with generally accepted government auditing standards. DOD is developing two laser weapons, ABL and SBL, that are to be used by U.S. forces to destroy enemy ballistic missiles. Additionally, in a joint effort with Israel, DOD is developing the THEL, which is to be used by Israel to defend against short-range rockets. All three programs have benefited from work performed by the Air Force Research Laboratory on lasers and associated systems. In addition, the program directors for these three programs are coordinating their efforts by meeting periodically to discuss and share information on technology and program development issues. 
Moreover, some of the same contractors and contractor personnel are involved in all three programs, thereby increasing program coordination. The ABL is to be carried by a 747 aircraft, and the SBL by a constellation of satellites. Both of these weapons are to be used by U.S. forces to destroy ballistic missiles while the missiles are still in the early stage of their flight (boost phase). The THEL is a ground-based laser weapon Israel is to use to defend its northern border cities against Russian-made Katyusha rocket attacks in the final stages of the rockets' flight. The ABL, funded and managed by the Air Force, is planned to be the first system with the ability to detect and destroy enemy missiles in their boost phase several hundred kilometers away. It is a complex laser weapon system that is being designed to detect an enemy missile shortly after its launch, track the missile's path, and hold a concentrated laser beam on the missile until the beam's heat causes the pressurized missile casing to crack, in turn causing the missile to explode and the warhead to fall to earth well short of its intended target. The program involves placing a multimegawatt laser, beam control system, and related equipment in a Boeing 747-400 freighter aircraft. One prototype ABL is to be produced and tested in 2003 in attempts to shoot down missiles in their boost phase. If this demonstration is successful, the program is scheduled to move into the engineering and manufacturing development phase in 2004. Figure 2.1 shows the ABL concept. The ABL is expected to operate from a central base in the United States and be available to be deployed worldwide. Ultimately, with a seven-aircraft fleet, five aircraft are expected to be available for operational duty at any given time. The other two aircraft are expected to be undergoing modifications, maintenance, or repair.
When the ABLs are deployed, two aircraft are to fly in figure-eight patterns above the clouds at about 40,000 feet. Through in-flight refueling and rotation of aircraft, two ABLs will always be on patrol, thus ensuring 24-hour coverage of potential missile launch sites within the theater of operations. Each ABL is to be capable of destroying about 20 missiles before chemicals needed to generate the laser beam need to be replenished. At that point, the aircraft will have to land to refuel the laser. The SBL, jointly funded by the Ballistic Missile Defense Organization (BMDO) and the Air Force and managed by the Air Force, is to be capable of detecting a missile in its boost phase, tracking the missile’s path, and holding a concentrated laser beam on the missile until the beam’s heat causes the missile to be destroyed. The SBL program involves integrating a multimegawatt laser, beam control system, and related equipment on a space platform and launching it into low earth orbit. Air Force estimates show that a full SBL system would not be deployed until after 2020. Figure 2.2 shows a notional SBL engagement. DOD is developing the SBL to provide a continuous global boost phase intercept capability for both theater and national missile defense. The notional concept involves having a constellation of 20 to 35 SBLs. Each SBL is to be capable of destroying about 100 missiles and is to have a range of about 4,300 kilometers. The THEL, funded jointly with Israel and managed by the U.S. Army, is a ground-based laser weapon that is to be used to destroy short-range rockets toward the end of their flights. THEL is to accomplish this by detecting an incoming rocket shortly after it has been launched, tracking the rocket's path, and holding a concentrated laser beam on the rocket's warhead until the beam's heat causes the warhead to detonate, destroying the rocket. 
The THEL program involves designing and building a multi-hundred kilowatt chemical laser, a beam control system, a fuel supply system, a laser exhaust system, and other equipment to fit into separate, transportable containers, sized so that each container can be transported by a large truck. The transportable containers are to be placed on concrete pads at deployment sites. Once deployed, the THEL components in each separate container are to be integrated. All THEL components have been produced and are scheduled to be integrated and tested at White Sands Missile Range, New Mexico, in July 1999. Figure 2.3 shows the THEL concept. DOD is developing the THEL, in a joint effort with Israel, to be used by Israel to defend against Russian-made Katyusha rockets and other short-range rockets that have been used by terrorists to attack cities in northern Israel. The number of rockets THEL is capable of destroying is limited only by the amount of laser fuel stored at the deployment site. Although THEL is a transportable system that can be moved by large trucks, it is not a mobile system, in the sense that the integrated system cannot move under its own power. Because of this limitation, the United States has no use for THEL as it is currently designed. See chapter 5 for additional discussion of the U.S. need for a mobile THEL-type system. The three laser weapon development programs have coordinated their efforts by holding periodic program director conferences to share information. In addition, some of the same contractors and contractor personnel are involved in all three programs and all three programs have benefited from work performed by the Air Force Research Laboratory on lasers and associated systems. According to the program directors of the ABL, SBL, and THEL, they have conducted periodic conferences and frequent phone conversations to discuss and share information on technology and program development issues.
They told us that technology developed under one program is shared where appropriate by all programs, thereby reducing duplication. For example, weight reduction techniques developed under the SBL program are to be used on the ABL and THEL programs. TRW is a subcontractor for the ABL and SBL programs and the prime contractor for the THEL program and is developing the lasers for all three programs. ABL program officials told us that some of the same TRW personnel work on all three programs, thus transferring and sharing their laser technology knowledge between the programs. In another case, the same contractor is to produce the deformable mirrors used in the ABL and SBL programs. All three programs have benefited from the research carried out by the Air Force Research Laboratory (AFRL). For example, all programs plan to use AFRL-developed optical coatings for beam control and laser optical systems. With these specialized coatings, optics absorb little energy from a high energy laser beam, and heavy, vibration-inducing cooling systems are not needed. AFRL officials have also participated in the three programs in various ways, which enhances information sharing. For example, an AFRL official participating in the ABL program is also acting as a THEL principal on-site government representative. The ABL program is currently in the program definition and risk reduction (PDRR) acquisition phase. Initial operational capability of three ABLs is scheduled for 2007 and full operational capability of seven ABLs is scheduled for 2009. This schedule reflects a program slip of about 1 year. The Air Force estimates the life-cycle cost of the ABL at about $11 billion. The ABL program has made progress in addressing some technical challenges, such as atmospheric turbulence that we and others have reported on in the past. 
However, challenges remain because the components of the system are in various stages of development and have yet to be produced in their final configurations, tested, and integrated into an operational weapon system. Because of the complexity of this integration, some laser experts both inside and outside of DOD have noted that the planned flight testing schedule for the program should be expanded. We believe that the technical complexity of the ABL and related integration issues also raises questions about whether the Air Force's planned ordering of a second aircraft, for modification during the engineering and manufacturing development (EMD) phase of the program, is premature. In November 1996, the Air Force awarded a 77-month PDRR contract to the contractor team of Boeing, TRW, and Lockheed Martin. Under the contract, Boeing is to produce and modify a 747-400 freighter aircraft and integrate the laser and beam control system with the aircraft; TRW is to develop the laser and ground support systems; and Lockheed Martin is to develop the beam control system. The PDRR phase includes two interim milestones--Authority to Proceed 1 (ATP-1), originally scheduled for June 1998, and ATP-2, scheduled for August 2002. The ABL passed ATP-1 in September 1998, 3 months late because the flight-weighted laser module had problems producing the required power level. The PDRR phase is scheduled to culminate with attempts, in 2003, by the PDRR ABL aircraft to destroy a boosting theater ballistic missile. If these demonstrations are successful, the program is scheduled to move into the engineering and manufacturing development phase in 2004. Initial operational capability of three ABLs is scheduled for 2007; full operational capability of seven ABLs is scheduled for 2009. This schedule reflects a 1-year slip in the original PDRR schedule. 
According to the program office, the revision to the schedule is due to a $25-million reduction Congress made in the fiscal year 1999 appropriation for the ABL and to an expanded test program. The Air Force estimates the life-cycle cost of the ABL to be about $11 billion, including $1.6 billion for the PDRR phase, $1.1 billion for the EMD phase, $3.6 billion for the production phase, and $4.6 billion for 20 years of operations and support.

ABL Program Progress

We reported on the ABL program in October 1997. At that time, the immediate area of concern that we and others reported was whether the program had adequately assessed the adverse effects of atmospheric turbulence on the ABL's operational effectiveness. We reported that the Air Force did not have all of the data needed to fully understand the effect that atmospheric turbulence would have on the operation of the ABL and that the Air Force had not determined whether non-optical turbulence measurements could be correlated to optical turbulence measurements. We reported that the Air Force had not shown that it could accurately predict the levels of turbulence the ABL will actually encounter; neither had it shown that the ABL's technical requirement regarding turbulence was appropriate. Consequently, we concluded that it was not yet known whether the ABL would be able to operate effectively in its operational environment. In addition, we reported that the Air Force planned to only take additional non-optical turbulence measurements to predict the severity of the optical turbulence the ABL would encounter without first determining whether the two measurement types could be correlated. The Air Force has now completed collecting non-optical atmospheric turbulence data from the Korean and Middle East theaters.
In commenting on a draft of this report, DOD stated that, while the Air Force's analyses of these data argue that the design specification established for atmospheric turbulence is generally accurate, the DOD has yet to reach a final position on this issue. DOD stated further that it is still examining the design specification for atmospheric turbulence. According to DOD, the Air Force plans to collect and characterize additional data to further validate its design assumptions. DOD also stated that uncertainties remain concerning the ability to use non-optical turbulence measurements under all conditions to predict operational performance for the ABL. It said that it was considering what additional measurements and analysis are needed to resolve these uncertainties. The Air Force has also been able to establish that the correlation between non-optical and optical data is adequate for the purposes of estimating ABL performance using non-optical data at this stage of the program. However, according to DOD officials, there are instances where optical and non-optical data disagree and the causes of these differences are not understood. Consequently, the Air Force is continuing to collect and analyze data to further validate its turbulence design assumptions. While the ABL program has made progress in addressing technical challenges relating to atmospheric turbulence, other challenges remain. Developing a laser module that is of the size and weight that can be carried by the ABL aircraft (referred to as a flight-weighted laser module), and integrating the laser, beam control system, and related equipment into an aircraft, are two examples of these challenges. The technical challenge inherent in the ABL program is exemplified by problems experienced in developing the high-energy laser. 
The Air Force must build the laser to be able to contend with size and weight restrictions, motion and vibrations, and other factors unique to an aircraft environment, yet be powerful enough to sustain a killing force over a range of hundreds of kilometers. It is also to be constructed in a configuration that links modules together to produce a single high-energy beam. The laser being developed for the PDRR phase will have six modules. The laser for the EMD phase will have 14 modules. When we issued our report on the ABL in 1997, the program had constructed and tested a developmental laser module. Although that developmental module exceeded its energy output requirements, it was too heavy and large to meet integration requirements. It would have to be reduced in width by about one-third and reduced in weight by over one-half. To accomplish this, many components of the module would have to be reconfigured and built of advanced materials, such as composites. As previously discussed, the PDRR phase of the ABL program includes two milestone decision points--referred to as ATP-1 and ATP-2. To pass ATP-1, the Air Force had to "demonstrate a single laser module at full power with all critical components flight-weighted and show performance (power, beam quality, chemical efficiency, thermal management) is scaleable/traceable to the EMD design through analysis." During testing of the flight-weighted laser module in connection with the scheduled June 1998 ATP-1 decision point, the module failed to meet its power output requirement. Because of this failure, the program provisionally passed ATP-1. The program fully passed ATP-1 when, 3 months later, the laser module exceeded its power output requirement by 10 percent. However, the power output was achieved using a flight-weighted laser module that was not representative of the laser modules that will be used in an operational ABL weapon system. 
Specifically, the flight-weighted laser module used for testing in connection with ATP-1 used a stable resonator. ABL design specifications require that an unstable resonator be used. According to program officials, an unstable resonator is needed because it would produce a laser beam that would allow the ABL's beam control system to focus more of the beam's power on the target than would be possible with a beam produced by a stable resonator. In commenting on a draft of this report, DOD stated that a stable, versus unstable, resonator was used for the initial flight-weighted laser module tests because the test facility had a stable resonator in place, and to replace the stable resonator with an unstable resonator would have been too costly and would have adversely affected the program schedule. In addition to demonstrating the laser module at full power, ATP-1 also required the program to demonstrate that the beam quality of the laser beam generated by the module would meet ABL design requirements. In meeting this requirement, the Air Force did not measure the quality of an actual laser beam generated by the module. Instead, it estimated the beam quality using computer models and measurements of the chemical flows within the laser. In future tests of the laser module, the Air Force plans to measure the beam quality of an actual beam generated by the laser module. In attempting to demonstrate the laser module at full power, the Air Force identified several design problems. For example, the catch tank and catch tank outlet, which collect and recirculate a chemical used by the laser, were too small. This limited the flow rate of the chemical, reducing the laser's power. Another problem identified was that too much water vapor entered the laser cavity, which reduced the amount of power generated. In addition, gas pressure within the laser cavity was too high, thus slowing the velocity of gases through the cavity, which also reduced the amount of power generated. 
Some modifications were made to achieve higher power levels during testing. These and other modifications are currently being finalized and incorporated into the flight-weighted laser module. The ABL program manager stated that integrating a weapon-level laser, beam control system, and the other related components into an aircraft is the largest challenge facing the program. Some individual components of the ABL have been tested under laboratory conditions and the program office has conducted modeling and computer simulations. However, the individual components have not been integrated and tested as a complete weapon system. As we stated in our October 1997 report, until this system integration and testing is accomplished, it is not possible to predict with any degree of certainty the probability that the ABL program will evolve into a viable missile defense system. A major aspect of this system integration testing will be the hot fire flight tests when the laser is turned on and the beam is controlled by the beam control system. According to planning documents, hot fire flight testing begins only 4 months prior to the 2003 theater ballistic missile shoot-down tests. Because of the complexity of the system integration task, some experts both inside and outside of DOD have noted that the planned flight testing schedule for the program is too dependent on successful tests and does not allow enough time and resources to deal with potential test failures and to prove the ABL concept. In a May 1998 Early Operational Assessment, the Air Force Operational Test and Evaluation Center characterized the flight test schedule as "compressed and success-oriented." In addition, the Air Force Scientific Advisory Board, in its February 1998 report, "Airborne Laser Scenarios and Concept of Operations," stated that while the ABL program evolution as currently planned is rational in its sequencing of tests, the schedule appears to have an unrealistically brief flight testing phase. 
The Board characterized the flight test program as "immature" and said that it needs to be structured to build high confidence in the operability of the laser system. It further stated that past experience with high-power laser systems and large beam directors suggests that new and difficult problems will surface in that phase, and that many flights and targets will be needed to sort them out. The Board suggested that the laser should be fired a reasonably large number of times (in the hundreds) with the ABL in flight before committing to a lethality demonstration and that this would serve to gain experience; establish that it is safe, reliable, and routine; and measure the critical parameters that will give a commander the confidence to use the system without hesitation. Consequently, the Board advised the Air Force to develop contingency plans to prepare for the possibility that the current success-oriented schedule is not achieved, to include ordering additional long-lead targets if required, identifying potential avenues of failure during the flight tests, and preparing work-arounds or corrective steps in advance. Congress has also raised concerns related to this issue. The conference report on the Strom Thurmond National Defense Authorization Act for Fiscal Year 1999 noted that the conferees are concerned that the Air Force plans to enter EMD without adequate time to operate, test, and evaluate the PDRR configuration. As a result, the conferees directed the Secretary of Defense to establish an independent review team to assist with the Secretary’s evaluation of the technical risk in the ABL program and his determination of whether (1) additional testing and risk reduction is necessary prior to integration of the ABL subsystems into a commercial 747-400F aircraft and (2) the fully integrated PDRR aircraft should be operated for a period of time and thoroughly tested prior to finalizing an objective design. 
The act directed the Secretary of Defense to report the findings of his assessment of the ABL program by March 15, 1999. The technical complexity of the ABL and related integration issues raise questions about when a second aircraft, for modification during the EMD phase, should be ordered. Current program plans call for an aircraft to be ordered about 1 year before the planned attempts to shoot down a theater missile with the PDRR aircraft. The Air Force has a contract with Boeing for the aircraft that will be used during the PDRR phase. According to ABL acquisition plans, a second 747-400 freighter will be ordered in September 2002 for the EMD phase. The ordering of the aircraft is to immediately follow the August 2002 ATP-2 meeting. However, this acquisition strategy will result in the second aircraft being ordered about 1 year prior to the scheduled demonstration of the ABL's ability to shoot down a theater ballistic missile. The ABL program has made progress in addressing some technical challenges, such as atmospheric turbulence, that we and others have reported on in the past. However, challenges will continue through the development program and we have concerns about some Air Force statements of program successes--specifically, statements related to the power output and beam quality of the flight-weighted laser module. Once these and other problems are resolved, the major program challenge will be to integrate the individual system components into a complete weapon system for testing. A major test for the program will be the flight tests during which the laser is turned on and its beam is controlled by the beam control system. Independent reviews of the ABL program by laser experts indicate that the ABL flight test plan may be too limited and too dependent on successful tests, and not allow enough time and resources to deal with potential test failures and to prove the ABL concept. 
The technical complexity of the ABL and related integration issues also raise questions about when a second aircraft, for modification during the EMD phase of the program, should be ordered. Current plans call for the EMD aircraft to be ordered about 1 year before the PDRR aircraft attempts to shoot down theater ballistic missiles. If the PDRR aircraft fails to prove the ABL concept, the funds expended for the EMD aircraft may be wasted. Regarding the ABL program, we recommend that the Secretary of Defense direct the Secretary of the Air Force to reconsider plans to exercise the option for the second ABL aircraft for the EMD phase of the program before flight testing of the ABL system developed during the PDRR phase has demonstrated that the ABL concept is an achievable, effective combat system. In a draft of this report, we recommended that the Secretary of Defense direct the Secretary of the Air Force to provide DOD an assessment of the need to expand the ABL flight test program. In commenting on that draft report, DOD partially concurred with our recommendation and stated that its ongoing assessment of the ABL program by an Independent Assessment Team (IAT) would constitute an appropriate assessment of the flight test program. Subsequent to DOD’s comments on our draft report, DOD completed its assessment of the ABL program and reported the results to Congress in March 1999. In its report, DOD noted the IAT’s agreement with Air Force plans to restructure the ABL program to expand testing and risk reduction activities before starting modifications to the PDRR aircraft (the first aircraft). DOD concurred with the IAT’s recommendation for more testing of the PDRR aircraft before Milestone II, which governs entry into engineering and manufacturing development. DOD stated that it will review the Air Force’s proposed restructured program and set a new Acquisition Program Baseline in the spring of 1999. 
During the restructuring and rebaselining effort, DOD stated that, among other things, it will revise the exit criteria for Milestone II to require more testing against threat-representative targets. DOD stated that it expects that adding flight tests to the program before the start of EMD will increase near-term costs and might delay ABL’s achievement of an initial operational capability. However, according to DOD, the added tests will ensure that the expenditures required for ABL’s EMD phase are justified. We agree with DOD’s assessment and future plans for the ABL program. Therefore, we deleted from our final report the recommendation for an assessment of the ABL flight test program. Based on DOD’s comments on our draft report that DOD would not necessarily incur unnecessary costs by proceeding with the purchase of a second ABL aircraft, we revised our recommendation to reflect the need for DOD to reconsider its planned purchase in light of the IAT’s findings and our report. We recognize that delaying the procurement of the aircraft for the EMD portion of the program until after the ABL demonstrates it can shoot down target missiles might require a change in the scheduled initial operational capability. However, such a slip would ensure that the procurement of the EMD aircraft would then be based on the additional knowledge gained in the shoot down demonstrations that the ABL design is feasible. Our recommended approach is consistent with DOD’s March 1999 report to Congress on the ABL program wherein it accepted a potential delay in the ABL’s initial operational capability in favor of obtaining additional data through increased flight tests. Our approach is also appropriate in view of the discussion in DOD’s March 1999 report on the impact of turbulence on the ABL design specification. 
DOD stated that optical turbulence in excess of the design specification along the slant path between the ABL and its target can reduce ABL’s maximum lethal range and increase required dwell times, even at lesser ranges. It said that some analyses of atmospheric turbulence data collected in theaters of interest to date suggest that turbulence levels well above assumed ABL design levels might occur more often than expected at the time the design levels were set. According to DOD, there are currently no clear methods for predicting the actual turbulence level along a slant path to a particular threat location at a given point in time. Thus, according to DOD, beyond trial and error, it is not clear how a correct decision can be made on whether a particular target can be successfully engaged when launched near ABL’s maximum lethal range. The Air Force is analyzing turbulence data and investigating tactical decision aids for the system to address this issue.

The SBL program is about a year into a $30-million study phase to define concepts for the design, development, and deployment of an SBL proof of concept demonstrator. According to the program office, the SBL demonstrator would be the most technically complex spacecraft the United States has ever built. DOD is currently considering an acquisition strategy under which the demonstrator spacecraft would be launched in the 2010 to 2012 time frame. Congress, however, has directed that the demonstrator be launched in the 2006 to 2008 time frame. A senior SBL program official said that there is only a 50-percent chance that the demonstrator will be built and deployed by 2008. 
According to SBL program officials, the size and weight limitations of existing and planned launch vehicles force the program to push the state of the art in areas such as laser efficiency, laser brightness, and deployable optics. DOD's programmed funding for SBL from fiscal year 1998 to 2005 totals $1.1 billion. DOD officials told us that the design, development, and deployment of an SBL readiness demonstrator would cost about $3 billion. The conference report for the fiscal year 1998 National Defense Authorization Act states that the Secretary of Defense, in an August 1997 letter to the Senate Majority Leader, confirmed that SBL technology had reached a level of maturity that could lead to a future space demonstration of a sub-scale vehicle. Consequently, the conferees directed the Air Force to promptly establish a baseline for a Space-Based Laser Readiness Demonstrator (SBLRD) to include a set of technical objectives and requirements, a contracting strategy, a system design, a program schedule, and a funding profile that would support a launch in fiscal year 2005. Further, to ensure that the focus of the program remained on a fiscal year 2005 launch (a date later changed to the 2006-2008 time frame), the conferees directed that they be consulted prior to planned variances from this launch date. In addition, the conferees directed the Secretary of Defense to report on the status of the SBL readiness demonstrator baseline and related issues to the congressional defense committees by March 1, 1998. To date, DOD has not submitted its SBL baseline report to Congress. In February 1998, the Air Force awarded two 6-month concept definition study contracts, valued at $10 million each, to Lockheed Martin and TRW as an initial step to develop SBLRD. 
The contractors were tasked to evaluate three strategies: a 2005/2006 launch of the SBLRD with existing technology, a 2008 launch with existing technology, and a 2008 launch infusing advanced technology. In early 1998, the Air Force's acquisition strategy was to use evaluation data from these two efforts, along with other appropriate data, to award a contract in August 1998 to develop the SBLRD. The objectives of the demonstration would be to validate the SBL as a viable option for missile defense by demonstrating SBL technology readiness; to obtain performance and operations data regarding high-power space lasers, long-range precision pointing, and adjunct missions feasibility; and to explore battle management issues. When the initial acquisition strategy was provided to the Under Secretary of Defense for Acquisition and Technology in August 1998, the Under Secretary was concerned that the strategy focused only on the demonstrator and wanted to know whether the long-term program to develop and deploy the SBL would be affordable. Consequently, he directed the BMDO and the Air Force to restructure and expand the scope of the readiness demonstrator acquisition strategy to include the complete development and deployment of an SBL system. The restructuring was also to include review and assessment of other missile defense concepts such as ground-based lasers and space-based relay mirrors. In addition, the Under Secretary directed them to look for opportunities to develop technologies that would increase the affordability of the SBL by collaborating with other agencies such as the National Aeronautics and Space Administration, which is currently developing deployable optics for its next generation space telescope. In implementing this direction, BMDO and the Air Force restructured the acquisition strategy and extended the concept definition study contracts at a cost of $5 million each. 
In February 1999, BMDO and the Air Force announced the award of a contract for a joint venture among Boeing, Lockheed Martin, and TRW for $125 million for initiating the Space-Based Laser Integrated Flight Experiment effort that is to result in deploying the readiness demonstrator in the 2010 to 2012 time frame. The future of the SBL program is unknown at this time. DOD is currently doing a comprehensive assessment of the program. That assessment will include alternative ballistic missile defense concepts, such as ground-based lasers and space-based relay mirrors. If, based on this assessment, the SBL is ultimately selected, DOD estimates that a fully operational system would not be deployed until after 2020. Accelerating the deployment date would require the maturation of some complex technologies such as deployable optics and would require a large, but yet unknown, infusion of funds into the program.

The THEL is about 34 months into its $131.5-million, 38-month development program. All of its components--such as the laser, the pointer tracker, and the pressure recovery system--have been built and are currently being tested. The system was scheduled to be integrated, tested, and ready to begin shoot-down tests against short-range rockets at White Sands Missile Range by December 1998. However, the shoot-down testing has been delayed 7 months due to administrative issues and technical problems with the laser and the pointer tracker. Although THEL's components have been produced, the technical challenges relating to testing and integration remain to be overcome. Initial testing of the laser has identified a problem with the chemical flow control valves. In addition, tests of the pointer tracker have identified problems with the low-power laser that is to track short-range rockets. Furthermore, integration and related testing have yet to begin. 
In May 1995, a predecessor program to THEL, Nautilus, was started. Nautilus was a joint U.S.-Israel program to evaluate the effectiveness of lasers for potential use as a tactical air defense system against short-range rockets in a variety of missions, including peace-keeping operations. The U.S. Army Space and Missile Defense Command (SMDC), then called the Space and Strategic Defense Command, provided primary management functions for the program. The Israel Ministry of Defense provided support to SMDC. In February 1996, the Nautilus program culminated in a successful test at the Army's High Energy Laser Systems Test Facility (HELSTF) at White Sands Missile Range, New Mexico, using the Mid-Infrared Advanced Chemical Laser and Sea Lite Beam Director to engage and destroy a short-range Katyusha rocket. In April 1996, President Clinton met with Israel's then Prime Minister Shimon Peres. At the meeting, the United States made a commitment to assist Israel to develop a Tactical High Energy Laser Advanced Concept Technology Demonstrator. This commitment, based on the success of the Nautilus program, was designed to help Israel defend its northern cities from the threat posed by Katyusha and other short-range rockets. In May 1996, TRW was awarded a contract for $89 million to design, fabricate, and test a tactical-sized deuterium fluoride chemical laser capable of defeating short-range artillery rockets. The original contract called for about a 22-month effort to design and build the system by March 1998. Israel contributed $24.7 million toward the contract cost and developed components such as the fire control radar system, laser fluid supply system, and pressure recovery system (laser exhaust system). In January 1998, the contract was modified to increase its value by $42.5 million to $131.5 million (increasing the U.S. 
contribution to $106.8 million) and to extend the completion date by 11 months to February 1999 for integration and rocket shoot-down testing at HELSTF. This testing was scheduled to begin in December 1998. However, testing of the laser and the pointer tracker has revealed problems that have, along with administrative issues associated with contract initiation, caused the schedule to be delayed by 7 months, to July 1999. THEL's components have been produced. However, initial testing of the laser has identified problems with the operation of chemical flow control valves and with the low-power laser that is to be used in tracking short-range rockets. The initial tests of the laser revealed leaks in the specialized valves that control the flow of chemicals through the laser. These leaks must be corrected because they would detract from the performance of the laser. In addition, testing of the pointer tracker system disclosed a problem with the low-power laser that is to be used in tracking incoming short-range rockets. This laser is a commercial off-the-shelf item that is generally used in laboratory settings. It has been modified for use on the THEL; however, it is still undergoing tests to ensure it meets performance requirements. The valve leaks and the problems with the low-power laser in the pointer tracker system have caused a delay in the THEL test schedule. Originally, the THEL system was scheduled to be a fully integrated system that would attempt to shoot down a Katyusha rocket at HELSTF in December 1998. Because of these unanticipated problems and administrative issues, the schedule has slipped by 7 months, to July 1999. Currently, U.S. forces do not have a validated mission requirement for the THEL as it is being designed for Israel. However, the Army has prepared a draft mission needs statement for a reconfigured mobile laser weapon that could be used by U.S. forces to shoot down a variety of targets in theater environments. 
A THEL official told us that the draft Army mission needs statement is being incorporated into Atlantic Command's Joint Theater and Missile Defense mission needs statement, which includes the need for a mobile high-energy laser weapon. The THEL would have to be radically modified for it to be more powerful and mobile and thus meet emerging U.S. theater defense requirements for a ground-based laser. While the THEL system being developed for Israel is designed to be transportable, it will not be mobile; THEL components must be transported by large trucks and placed on prepared concrete sites. According to laser experts at Lawrence Livermore National Laboratory, a mobile, ground-based high-energy laser weapon for U.S. use would probably necessitate using a relatively small solid-state laser (versus the much larger and heavier chemical laser being developed for the THEL), the technology for which is relatively immature. The experts said that a generation of solid-state laser research and development would be needed to develop technology to the level necessary for use in a mobile THEL-type system. A program official said that such a system would probably not be fielded until at least 2025. In commenting on a draft of this report, DOD stated that the Army is investigating four solid-state laser concepts and the availability dates and concepts may be different than the assessment provided by Lawrence Livermore National Laboratory officials. Of the three laser weapon systems that DOD is developing for use against theater ballistic missiles or short-range artillery rockets, the THEL is closest to becoming a fielded system. It is being developed in a relatively short time frame at a relatively low cost. Because THEL is a follow-on to an earlier laser weapon program, its successful development and fielding have been considered relatively low risk. 
However, technical problems and their associated program delays demonstrate the complex nature of developing laser weapons of this type. Lessons learned from the THEL program will be beneficial if the United States decides to develop a THEL-type system for its military forces. However, given the more demanding requirements that the U.S. will likely have, eventual success of the THEL program will not easily translate into a low-risk, problem-free U.S. program.

The following are GAO’s comments on the Department of Defense’s (DOD) letter dated February 19, 1999.

1. The Secretary of Defense submitted his report on the Airborne Laser (ABL) program to Congress in March 1999, subsequent to DOD providing comments on a draft of this report. The Secretary reported that the ABL flight-test program will be expanded. Since this action is consistent with the recommendation in our draft report, we have deleted the recommendation from the final report.

2. We agree that operational testing for the ABL program will not begin until the engineering and manufacturing development (EMD) phase and have modified the text by deleting the word operational. However, we retained the term combat system because it refers to the ABL concept and not to the program definition and risk reduction (PDRR) aircraft.

3. We have modified the report to clarify that the Space-Based Laser (SBL) is a demonstration program.

4. We have modified the report title and text to clarify that the Tactical High-Energy Laser (THEL) is not a theater ballistic missile defense system.

5. We have modified the text of the report to reflect that DOD has not yet reached a final position on the issue of atmospheric turbulence.

6. We did not assess whether the Air Force could sell the 747 freighter aircraft if it decides to terminate or delay the ABL program. 
However, if after ordering the aircraft DOD decides to terminate the program, it would be liable for up to $50 million unless it can successfully sell its place in the production queue or sell the aircraft. DOD did not include in its comments an estimate of the cost to store the aircraft if the ABL program is delayed.

Ted B. Baird
Rich Horiuchi
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) programs to develop laser weapons for missile defense, focusing on: (1) what laser weapons are being considered for missile defense and the coordination among the program offices developing the systems; (2) the status and cost of each system; and (3) the technical challenges each system faces as determined by DOD program managers and analysts and other laser system experts. GAO noted that: (1) DOD is developing two laser weapons--the Airborne Laser (ABL) and the Space-Based Laser (SBL)--which U.S. forces intend to use to destroy enemy ballistic missiles; (2) in a joint effort with Israel, DOD is developing a ground-based laser weapon, the Tactical High Energy Laser (THEL), which Israel will use to defend its northern cities against short-range rockets; (3) ABL is funded and managed by the Air Force, SBL is jointly funded by the Ballistic Missile Defense Organization and the Air Force, and THEL is funded jointly with Israel and managed by the Army; (4) ABL, SBL, and THEL are in varying stages of development ranging from conceptual design studies to integration and testing of system components; (5) the ABL program is in the program definition and risk reduction acquisition phase and is scheduled for full operational capability in 2009, with a total of seven ABLs; (6) this schedule reflects a 1-year delay from the original schedule; (7) the Air Force estimates the life-cycle cost of the ABL to be about $11 billion; (8) the SBL program is about a year into a $30-million study phase to define concepts for the design, development, and deployment of a proof of concept demonstrator; (9) DOD estimates that it will cost about $3 billion to develop and deploy the demonstrator; (10) the future of the SBL program is unknown, pending the outcome of a DOD assessment of the program; (11) the $131.5-million THEL Advanced Concept Technology Demonstration program is about 34 months into a 38-month 
program; (12) system components have been built, but system testing has been delayed from December 1998 to July 1999 due to administrative and technical problems; (13) laser experts agree that the ABL, SBL, and THEL face significant technical challenges; (14) the technical complexity of the ABL program has caused laser experts to conclude that the ABL planned flight test schedule is compressed and too dependent on the assumption that tests will be successful and therefore does not allow enough time and resources to deal with potential test failures and to prove the ABL concept; (15) if DOD ultimately decides to continue the SBL program, the size and weight limitations dictated by current and future launch capabilities will force the program to push the state of the art in laser efficiency, laser power, and deployable optics; and (16) initial testing of THEL's laser has identified problems with the operation of chemical flow control valves and with the low-power laser that is to be used in tracking short-range rockets the system is being designed to defeat.
Nearly all health care providers, such as physicians and hospitals, purchase insurance that covers expenses related to medical malpractice claims, including payments to claimants and legal expenses. The most common physician policies provide $1 million of coverage per incident and $3 million of coverage per year. Today the primary sellers of physician medical malpractice insurance are the physician-owned and/or operated insurance companies that, according to the Physician Insurers Association of America, insure approximately 60 percent of all physicians in private practice in the United States. Other health care providers may obtain coverage through commercial insurance companies, mutual coverage arrangements, or state-run insurance programs, or may self-insure (take responsibility for claims themselves). Most medical malpractice insurance policies offer claims-made coverage, which covers claims reported during the year in which the policy is in effect. A small and declining number of policies offer occurrence coverage, which covers all claims arising out of events that occurred during the year in which the policy was in effect. Medical malpractice insurance operates much like other types of insurance, with insurers collecting premiums from policyholders in exchange for an agreement to defend and pay future claims within the limits set by the policy. Insurers invest the premiums they collect and use the income from those investments to reduce the amount of premium income that would have been required otherwise. Claims against a policyholder are recorded as expenses, or incurred losses, which are equal to the amount paid on those claims as well as the insurer’s estimate of future losses on those same claims. The liability associated with the portion of these incurred losses that have not yet been paid by the insurer is collectively known as the insurer’s loss reserve. 
In order to maintain financial soundness, insurers must maintain assets in excess of total liabilities—including loss reserves and reserves for premiums received but not yet earned—to make up what is known as the insurer’s surplus. State insurance departments monitor insurers’ solvency by tracking, among other measures, the ratio of total annual premiums to this surplus. Medical malpractice insurers generally attempt to keep their surplus approximately equal to their annual premium income. Medical malpractice insurers establish premium base rates for particular medical specialties within a state and sometimes for particular geographic regions within a state. Insurers may also offer discounts or add surcharges for the particular characteristics of policyholders, such as claim histories or whether they participate in risk-management programs. The premium rates are based on anticipated losses on claims and related expenses, expected investment income, the need to build a surplus, and, for for-profit insurers, the desire to earn a reasonable profit for shareholders. In most states the insurance regulators have the authority to approve or deny proposed changes to premium rates. For several reasons, accurately predicting losses on medical malpractice claims is difficult. First, according to a national insurer association we spoke with, most medical malpractice claims take an average of more than 5 years to resolve, including discovering the malpractice, filing a claim, determining (through settlement or trial) payment responsibilities, if any, and paying the claim. In addition, some claims may not be resolved for as long as 8 to 10 years. As a result, insurers often must estimate costs years in advance. Second, the range of potential losses is wide. Actuaries we spoke with told us that individual claims with similar characteristics can result in very different losses for the insurer, making it difficult to predict the ultimate cost of any single claim. 
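The premium-to-surplus monitoring described above can be sketched in a few lines. The figures below are hypothetical, and actual regulatory benchmarks are more involved than this single ratio:

```python
# A minimal sketch of the premium-to-surplus solvency measure described
# above. All dollar figures are hypothetical.

def premium_to_surplus_ratio(annual_premiums, assets, liabilities):
    """Surplus is assets in excess of liabilities (including loss reserves
    and unearned-premium reserves); regulators track premiums / surplus."""
    surplus = assets - liabilities
    if surplus <= 0:
        raise ValueError("liabilities meet or exceed assets")
    return annual_premiums / surplus

# A malpractice insurer aiming to keep surplus roughly equal to premiums:
ratio = premium_to_surplus_ratio(annual_premiums=25_000_000,
                                 assets=60_000_000,
                                 liabilities=35_000_000)
print(f"premium-to-surplus ratio: {ratio:.2f}")  # prints "premium-to-surplus ratio: 1.00"
```

A ratio well above 1 would indicate the insurer is writing more business than its surplus comfortably supports.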
Third, the predictive value of historical data is further limited by the often small pool of relevant policyholders. For example, a relevant pool of policyholders would be physicians practicing a particular specialty within a specific state and perhaps within a specific geographic area within that state. In smaller states, and for some of the less common but more risky specialties, this pool could be very small and provide only a limited amount of data that could be used to estimate future costs. Medical malpractice insurance is regulated by state insurance departments and subject to state laws. That is, insurers selling medical malpractice insurance in a particular state are subject to that state’s regulations for their operations within that state, and all claims within that state are subject to that state’s tort laws. Insurance regulations can vary across states, creating differences in the way insurance rates are regulated. For example, one state insurance regulator we spoke with essentially let the insurance market determine appropriate rates, while another had an increased level of review, including approving specific company rates on a case-by-case basis. NAIC assists state insurance regulators in developing these regulations by providing guidance, model (or recommended) laws and guidelines, and information-sharing tools. In response to concerns over rising premium rates, physicians, medical associations, and insurers have pushed for state and federal legislation that would, among other things, limit the amount of damages paid out on medical malpractice claims. A few states have passed legislation with such limitations over the past several years, and federal legislation is pending. On March 13, 2003, the House of Representatives passed the Help Efficient, Accessible Low-Cost, Timely Healthcare (HEALTH) Act of 2003, which includes, among other things, a limit on certain types of damages in medical malpractice claims. 
On March 12, 2003, a similar bill of the same name was introduced in the Senate, but as of June 2003, no additional action had been taken. Beginning in 1999 and 2000, medical malpractice insurers in our seven sample states increased their premium rates for the physician specialties of general surgery, internal medicine, and obstetrics/gynecology faster than they had since at least 1992. These specialties were the only ones for which data were available, and 1992 was the earliest year for which we could obtain comprehensive survey data. However, both the extent of these changes and the level of the premium rates insurers charged varied greatly across medical specialties, states, and even areas within states. From 1999 through 2002, one large insurer raised rates more for internal medicine than for general surgery, while another raised rates 12 times more for general surgery than for internal medicine. Changes in premium base rates among some of the largest insurers in each state ranged from a reduction of about 9 percent for obstetricians and gynecologists insured by one California company to an increase of almost 170 percent for doctors in the same specialty in one area of Pennsylvania. At the same time, premium rates for the same amount of coverage for the same medical specialty varied by a factor of as much as 17 among states—that is, the rate in one state was 17 times higher than the rate in a different state. As figure 1 shows, premium base rates varied across our seven sample states from 1992 to 1998 but for most insurers remained relatively flat. Beginning in 1999 and 2000, however, most of these insurers began increasing their rates in larger increments. Many of the increases were dramatic, ranging as high as 165 percent, although some rates remained flat. Figure 2 shows the percentage increase in premium rates for the largest insurers in our seven sample states from 1999 through 2002. 
In the Harrisburg area of Pennsylvania, for example, the largest insurer increased premium base rates dramatically for three specialties: obstetrics/gynecology (165 percent), general surgery (130 percent), and internal medicine (130 percent). At the same time, the consumer price index (CPI) increased by 10 percent. However, in California and Minnesota, premium base rates for the same specialties rose between 5 and 21 percent and in some cases fell slightly. The variations in the changes in premium base rates among our sample states appear to be consistent with the changes in states outside our sample, with insurers in some states raising premium rates rapidly after 1999 and insurers in other states raising them very little. We found that premium rates quoted by insurers in our seven sample states varied across medical specialties and states. According to some of the insurers and actuaries we spoke with, the differences in rates reflect the costs associated with medical malpractice claims against physicians in particular specialties. Specialties with a high risk of large or frequent losses on medical malpractice claims will have higher premium rates. For example, in 2002 the largest medical malpractice insurer in Texas quoted a base rate for the same level of coverage of $92,000 to obstetricians and gynecologists, $71,000 to general surgeons, and $26,000 to internists. Figure 3 shows the premium rates quoted by the largest medical malpractice insurers in our sample states for these three specialties. Premium rates quoted by insurers in our seven sample states for the same medical specialty also varied across states and geographic areas within states (see fig. 3). Some of the insurers and actuaries we spoke with told us that these variations also reflect differences in insurers' loss experiences in those venues.
As figure 3 shows, the largest insurer in Florida quoted a premium base rate of $201,000 for obstetricians and gynecologists in Dade County, while the largest insurer in California quoted a premium base rate of $36,000 for similar physicians in northern California. Within Florida, the same large insurer quoted a premium base rate of $103,000 for obstetricians and gynecologists outside of Dade County—approximately 51 percent of the Dade County rate. Within Pennsylvania, the largest insurer quoted a premium base rate of $64,000 for doctors in Philadelphia—approximately 83 percent more than the rate it quoted outside the city. Insurers' losses, declines in investment income, a less competitive climate, and climbing reinsurance rates have all contributed to rising premium rates. First, among our seven sample states, insurers' losses have increased rapidly in some states, increasing the amount that insurers expect to pay out on future claims. Second, on the national level insurers' investment income has decreased, so that insurance companies must increasingly rely on premiums to cover costs. Third, some large medical malpractice insurers have left the market in some states because selling policies was no longer profitable, reducing the downward competitive pressure on premium rates that existed through most of the 1990s. Last, reinsurance rates for some medical malpractice insurers in our seven sample states have increased substantially, increasing insurers' overall costs. In combination, all the factors affecting premium rates and the availability of medical malpractice insurance contribute to the medical malpractice insurance cycle of hard and soft markets. While predicting the length, size, and turning points of a cycle may be impossible, it is clear that the relatively long period of time required to resolve medical malpractice claims makes the cycles more extreme in this market than in other insurance markets.
Like premium increases, annual paid losses and incurred losses for the national medical malpractice insurance market began to rise more rapidly in 1998. After adjusting for inflation, we found that the average annual increase in paid losses from 1988 to 1997 was approximately 3.0 percent but that this rate rose to 8.2 percent from 1998 through 2001. Inflation-adjusted incurred losses decreased by an average annual rate of 3.7 percent from 1988 to 1997 but increased by 18.7 percent from 1998 to 2001. Figure 4 shows paid and incurred losses for the national medical malpractice market from 1975 to 2001, adjusted for inflation. Paid and incurred losses give different pictures of an insurer's loss experience, and examining both can help provide a better understanding of an insurer's losses. Paid losses are the cash payments an insurer makes in a given year, irrespective of the year in which the claim giving rise to the payment occurred or was reported. Most payments made in any given year are for claims that were reported in previous years. In contrast, incurred losses in any single year reflect an insurer's expectations of the amounts that will be paid on claims reported in that year. Incurred losses for a given year will also reflect any adjustments an insurer makes to the expected amounts that must be paid out on claims reported during previous years. That is, as more information becomes available on a particular claim, the insurer may find that the original estimate was too high or too low and must make an adjustment. If the original estimate was too high, the adjustment will decrease incurred losses, but if the original estimate was too low, the adjustment will increase them. Incurred losses are the largest component of medical malpractice insurers' costs.
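Average annual growth rates like those cited above can be reproduced from endpoint values with a standard compound-growth calculation. The dollar figures in this sketch are illustrative, not the report's underlying data:

```python
# Compound average annual growth rate over a period, the kind of figure
# used above for paid- and incurred-loss trends. Inputs are illustrative.

def avg_annual_growth(start_value, end_value, years):
    """Geometric mean annual growth rate over the period, in percent."""
    return ((end_value / start_value) ** (1 / years) - 1) * 100

# e.g. inflation-adjusted paid losses rising from $100 million to
# $130 million over the 9 years from 1988 to 1997:
rate = avg_annual_growth(100.0, 130.0, 9)
print(f"average annual increase: {rate:.1f}%")  # prints "average annual increase: 3.0%"
```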
For the 15 largest medical malpractice insurers in 2001—whose combined market share nationally was approximately 64.3 percent— incurred losses (including both payments to plaintiffs to resolve claims and the costs associated with defending claims) comprised, on average, around 78 percent of the insurers’ total expenses. Because insurers base their premium rates on their expected costs, their anticipated losses will therefore be the primary determinant of premium rates. The recent increases in both paid and incurred losses among our seven sample states varied considerably, with some states experiencing significantly higher increases than others. From 1998 to 2001, for example, paid losses in Pennsylvania and Mississippi increased by approximately 70.9 and 142.1 percent, respectively, while paid losses in California and Minnesota increased by approximately 38.7 and 8.7 percent, respectively (see fig. 5). Because paid losses in any single year reflect primarily claims reported during previous years, these losses may not be representative of claims that were reported during the year the losses were paid. From 1998 to 2001, aggregate incurred losses increased by large amounts in almost all of our seven sample states. As shown in figure 6, the highest rates of increase in incurred losses over that period were experienced by insurers in Mississippi (197.5 percent) and Pennsylvania (97.2 percent). Even in California and Minnesota, states with lower paid losses from 1998 through 2001, insurers experienced increases in incurred losses of approximately 40.5 and 73.2 percent, respectively, over the same period. As noted above, incurred losses in any single year reflect insurers’ expectations of future paid losses associated with claims reported in the current year—that is, claims that will be paid, on average, over the next 3 and one-half years (according to one industry association). 
And because insurers’ incurred losses have increased recently, insurers are expecting their paid losses to increase over the next several years. According to actuaries and insurers we spoke with, increased paid losses raise premium rates in several ways. First, higher paid losses on claims reported in current or previous years can increase insurers’ estimates of what they expect to pay out on future claims. Insurers then raise premium rates to match their expectations. In addition, large losses (particularly paid losses) on even one or a few individual claims can make it harder for insurers to predict the amount they might have to pay on future claims. Some insurers and actuaries we spoke with told us that when losses on claims are hard to predict, insurers will generally adopt more conservative expectations regarding losses—that is, they will assume losses will be toward the higher end of a predicted range of losses. Further, large losses on individual claims can raise plaintiffs’ expectations for damages on similar claims, ultimately resulting in higher losses across both claims that are settled and those that go to trial. As described above, this tendency in turn can lead to higher expectations of future losses and thus to higher premium rates. Finally, an increase in the percentage of claims on which insurers must make payments can increase the amount that insurers expect to pay on each policy, resulting in higher premium rates. That is, insurers expecting to pay out money on a high percentage of claims may charge more for all policies in order to cover the expected increases. A lack of comprehensive data at the national and state levels on insurers’ medical malpractice claims and the associated losses prevented us from fully analyzing both the composition and causes of those losses at the insurer level. 
For example, comprehensive data that would have allowed us to fully analyze the severity of medical malpractice claims at the insurer level on a state-by-state basis did not exist. To begin with, data submitted by insurers to NAIC on the number of claims reported to insurers are not broken out by state. Rather, insurers that operate in a number of states report the number of claims for all their medical malpractice insurance policies nationwide. Also, while NAIC does collect data that can be used to measure the severity of claims paid in a single year (number of claims per state), NAIC began this effort only in 2000. As a result, we could not gather enough data to examine trends in the severity of paid claims from 1998 to 2002 at the insurer level. Similarly, comprehensive data did not exist that would have allowed us to analyze claim frequency on a state-by-state basis. As noted above, data that insurers submit to NAIC on the number of claims reported were not broken out by state prior to 2000. In addition, insurers do not submit information on the number of policies in effect or the number of health care providers insured. Finally, medical associations we spoke with in our sample states had not compiled accurate data on the number of physicians practicing within those states. As a result, we could not analyze changes in the frequency of medical malpractice claims in our sample states at the insurer level. Data that would have allowed us to analyze how losses were divided between settlements and trial verdicts or between economic and noneconomic damages were also not available. First, insurers do not submit information to NAIC on the portion of losses paid as part of settlements and the portion paid as the result of a trial verdict, and no other comprehensive source of such information exists. However, all eight insurers and one of the trial lawyers’ associations we spoke with provided certain estimates about claims. 
The estimates of three insurers on the percentage of claims resulting in trial verdicts ranged from 5 to 7 percent. The estimates of four insurers and one state trial lawyers' association of the percentage of trial verdicts being decided in favor of the insured defendant ranged from 70 to 86 percent. The estimates of four insurers and one state trial lawyers' association of the portion of claims resulting in payment to the plaintiff ranged from 14 to 50 percent. Second, no comprehensive source of information exists on the breakdown of losses between economic damages, such as medical costs and lost wages, and noneconomic damages, such as compensation for pain and suffering. Several of the insurers and trial lawyers' associations we spoke with noted that settlement amounts are not formally divided between these two types of damages and that consistent, comprehensive information on trial judgments is not collected. Furthermore, while judgment amounts obtained at trial may be large, several of the insurers we spoke with said that they most often do not pay amounts beyond a policyholder's policy limits. Data on the final amounts insurers pay out on individual judgments are not collected, although they are reported in the aggregate as part of paid losses in insurers' financial statements. While losses on medical malpractice claims increase as the cost of medical care and the value of lost wages rise, losses in some states have far outpaced such inflation. Insurance, legal, and medical industry officials we spoke with suggested a number of potential causes for such increases.
These potential causes included a greater societal propensity to sue; a "lottery mentality," where a lawsuit is seen as an easy way to get a large sum of money; a sicker, older population; greater expectations for medical care because of improved technology; and a reduced quality of care and the breakdown of the doctor-patient relationship owing, for example, to factors such as the increasing prevalence of managed care organizations. While we could not analyze such potential causes for increased losses, understanding them would be useful in developing strategies to address increasing medical malpractice premium rates. That is, because losses on claims have such a profound effect on premium rates, understanding the reasons those losses have increased could make it easier to devise actions to control the rise in premium rates. State laws restrict medical malpractice insurers to conservative investments, primarily bonds. In 2001, the 15 largest writers of medical malpractice insurance in the United States invested, on average, around 79 percent of their investment assets in bonds, usually some combination of U.S. Treasury, municipal, and corporate bonds. While the performance of some bonds has surpassed that of the stock market as a whole since 2000, annual yields on selected bonds have decreased steadily over the same period (table 1). We analyzed the average investment returns of the 15 largest medical malpractice insurers of 2001 and found that the average return fell from about 5.6 percent in 2000 to an estimated 4.0 percent in 2002. However, none of the companies experienced a net loss on investments at least through 2001, the most recent year for which such data were available. Additionally, almost no medical malpractice insurers overall experienced net investment losses from 1997 to 2001. Medical malpractice insurers are required by state insurance regulations to reflect expected investment income in their premium rates.
That is, insurers are required to reduce their premium rates to consider the income they expect to earn on their investments. As a result, when insurers expect their returns on investments will be high, as returns were during most of the 1990s, premium rates can remain relatively low because investment income covers a larger share of losses on claims. Conversely, when insurers expect their returns on investments will be lower—as returns have been since around 2000—premium rates rise in order to cover a larger share of losses. During periods of relatively high investment income, insurers can lose money on the underwriting portion of their business yet still make a profit. That is, losses from medical malpractice claims and the associated expenses may exceed premium income, but income from investments can still allow the insurer to operate profitably. Insurers are not allowed to increase premium rates to compensate for lower-than-expected returns on past investments but must consider only prospective income from investments. None of the insurers that we consulted regarding this issue told us definitively how much the decreases in investment income had increased premium rates. But we can make a rough estimate of the relationship between return on investment and premium rates. When investment income decreases, holding all else constant, income from premium rates must increase by an equal amount in order for the insurer to maintain the same overall level of income. Thus the total amount of investment assets relative to premium income determines how much rates need to rise to compensate for lost investment income. Table 2 presents a hypothetical example. An insurer has $100,000 in investment assets and in the previous year received $25,000 in premium income, for a ratio of investment assets to premium income of 4 to 1.
If the return on investments drops 1 percentage point and all else remains constant, the insurer must raise premium rates by 4 percent in order to compensate for the reduced investment income. If the return on investments drops by 2 percentage points, premium rates must rise by 8 percent to compensate. This relationship can be applied to the 15 largest medical malpractice insurers—countrywide—from 2001. Data show that in 2001 the insurers’ total investment assets were, on average, around 4.5 times as large as the amount of premium income they earned for that year. Applying the relationship established above and holding other factors constant, a drop of 1 percentage point in return on investments would translate into roughly a 4.5 percent increase in premium rates. As a result, if nothing else changed, the approximately 1.6 percentage point drop in the return on investments these insurers experienced from 2000 through 2002 would have resulted in an increase in premium rates of around 7.2 percent over the same 2-year period. Since 1999, the profitability of the medical malpractice insurance market as a whole has declined—even with increasing premium rates—causing some large insurers to pull out of this market, either in certain states or nationwide. Because fewer insurers are offering this insurance, there is less price competition and thus less downward pressure on premium rates. According to some industry and regulatory officials in our seven sample states, price competition during most of the 1990s kept premium rates from rising between 1992 and 1998, even though losses generally did rise. In some cases, rates actually fell. For example, during this period premium rates for obstetricians and gynecologists covered by the largest insurer in Florida—a state where these physicians are currently seeing rapid premium rate increases—actually decreased by approximately 3.1 percent. 
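The hypothetical 4-to-1 example and the 7.2 percent estimate worked through above can be checked with a short calculation (the function name and structure are ours, not the report's):

```python
# Premium increase needed to offset a drop in investment return, holding
# all else constant, as in the hypothetical example discussed above.

def required_premium_increase(investment_assets, premium_income, return_drop_points):
    """Percent increase in premium income that replaces the investment
    income lost to a drop of `return_drop_points` percentage points in
    the return on investments."""
    lost_income = investment_assets * return_drop_points / 100
    return lost_income / premium_income * 100

# $100,000 of investment assets against $25,000 of premium income (4 to 1):
print(round(required_premium_increase(100_000, 25_000, 1), 1))  # prints 4.0
print(round(required_premium_increase(100_000, 25_000, 2), 1))  # prints 8.0

# The 15 largest insurers in 2001: assets about 4.5 times premiums, and a
# roughly 1.6-percentage-point drop in returns from 2000 through 2002:
print(round(required_premium_increase(4.5, 1.0, 1.6), 1))  # prints 7.2
```

As the last line shows, only the ratio of investment assets to premium income matters, so the aggregate figures can stand in for dollar amounts.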
Some industry participants we spoke with told us that, in hindsight, premium rates charged by some insurers during this period may have been lower than they should have been and, after 1998, began rising to a level more in line with insurers’ losses on claims. Some industry participants also pointed out that this pricing inadequacy was masked to some extent by insurers’ adjustments to expected losses on claims reported during the late 1980s as well as their high investment income. For many insurers the incurred losses associated with the policies sold during the late 1980s turned out to be higher than the actual losses for the same policies, resulting in high levels of reserves. During the 1990s, as insurers eliminated these redundant reserves by adjusting their current loss reserves for these previous overestimates, current calendar year incurred losses fell and reported income increased. These adjustments, together with relatively high levels of investment income, allowed insurers to keep premium rates flat and still remain profitable. Beginning in the late 1990s, medical malpractice insurers as a whole began to see their profits fall. Figure 7 shows the return on surplus—also called return on equity—for the medical malpractice insurance industry as a whole. Profitability began declining faster in 1998 and in 2001 dropped considerably even as premium rates were increasing in many states, resulting in a negative rate of return, or loss. Some of the factors pushing premium rates upward were also factors in insurers’ declining profitability: higher losses on medical malpractice claims, higher reinsurance costs, and falling investment income. Medical malpractice insurers in some of our sample states have experienced particularly low levels of profitability since around 1998 (see fig. 8). 
The loss ratio reported here is the ratio of incurred losses, not including other expenses (often referred to as loss adjustment expenses) related to resolving those claims, to the amount of premiums earned in a given year. Loss ratios above 100 percent indicate that an insurer has incurred more losses than premium payments, a sign of declining profitability. Loss ratios in all seven sample states have increased since 1998, and except for California, all had loss ratios of more than 100 percent for 2001. This declining profitability has caused some large insurers either to stop selling medical malpractice policies altogether or to reduce the number they sell. For example, beginning in 2002 the St. Paul Companies—previously the second-largest medical malpractice insurer in the United States—stopped writing all medical malpractice insurance because of declining profitability. In 2001, St. Paul had sold medical malpractice insurance in every state and was the largest or second-largest seller in 24 states. St. Paul was not alone. Other large insurers have also stopped selling medical malpractice insurance since 1999: PHICO Insurance Company, which sold insurance primarily in six states, including Florida, Pennsylvania, and Texas; MIIX Insurance Company, which sold insurance primarily in five states, including New Jersey and Pennsylvania; and Reciprocal of America, which sold insurance primarily in six states, including Alabama, Mississippi, and Virginia. Other insurers reduced the number of states in which they sold medical malpractice insurance: SCPIE Indemnity Company, which in March 2003 essentially stopped selling insurance outside of California, and First Professionals Insurance Company, which has said that beginning in 2003 it will essentially stop selling insurance outside of Florida.
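The loss ratio defined above is a simple quotient; this sketch uses hypothetical figures to show how a ratio above 100 percent signals underwriting losses:

```python
# Loss ratio as described above: incurred losses (excluding loss
# adjustment expenses) divided by premiums earned in the same year.
# The dollar amounts are hypothetical.

def loss_ratio(incurred_losses, premiums_earned):
    """Loss ratio in percent; above 100 means losses exceed premiums."""
    return incurred_losses / premiums_earned * 100

# An insurer incurring $33 million of losses against $30 million of
# earned premiums:
ratio = loss_ratio(33_000_000, 30_000_000)
print(f"loss ratio: {ratio:.0f}%")  # prints "loss ratio: 110%"
```

Note that because loss adjustment expenses are excluded, an insurer can be unprofitable on underwriting even at ratios somewhat below 100 percent.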
When a large insurer leaves a state insurance market, the supply of medical malpractice insurance decreases, and the remaining insurers may not need to compete as much on the basis of price. In addition, the remaining insurers are limited in the amount of insurance they can supply to fill the gap, because state insurance regulations limit the amount of insurance they can write relative to their surplus (the amount by which insurers' assets exceed their liabilities). For mutual, nonprofit insurers, increasing the surplus can be a slow process, because surplus must generally be built through profits or by obtaining additional funds from policyholders. Commercial insurers can obtain funds through capital markets, but even then, convincing investors to invest funds in medical malpractice insurance when profits are falling can be difficult. According to industry participants and observers, as competitive pressure on premium rates decreased, insurers appear to have been able to raise premium rates more easily and more quickly to a level more in line with their expected losses. That is, without the competitive pressure that had kept premium rates at levels that, in hindsight, were perhaps too low for the ultimate losses insurers would have to pay, insurers could raise rates to match their loss expectations. As noted earlier, losses increased to a great extent in some states, and thus some insurers may have increased premium rates dramatically. While it appears clear that this reduction in price competition has allowed insurers to raise premium rates more easily and more quickly, we identified at least three factors suggesting that these premium rates are not inconsistent with expected losses. First, if the higher premium rates were above what insurers' expected losses justified, profitability would be increasing.
But profits are not increasing, indicating that insurers are not charging and profiting from excessively high premium rates. Second, according to some industry participants we spoke with, physician-owned insurers have little incentive to overcharge their policyholders because those insurers generally return excess earnings to their policyholders in the form of dividends. Third, in most states the insurance regulators have the authority to deny premium rate increases they deem excessive. While the information that state regulators require insurers to submit as justification for premium rate increases varies across states, in general it includes data on expected losses. A further reason for recent increases in medical malpractice premium rates in our seven sample states is that the cost of reinsurance for these insurers has also risen, increasing the total expenses that premium and other income must cover. Insurers in general purchase reinsurance, or excess loss coverage, to protect themselves against large unpredictable losses. Medical malpractice insurers, particularly smaller insurers, depend heavily on reinsurance because of the potential high payouts on medical malpractice claims. Reinsurance industry officials and medical malpractice insurers we spoke with told us that reinsurance premium rates have increased for two reasons. First, reinsurance rates overall have increased as a result of reinsurers’ losses related to the terrorist attacks of September 11, 2001. Second, reinsurers have seen higher losses from medical malpractice insurers and have raised rates to compensate for the increased risk associated with providing reinsurance to the medical malpractice market. Some insurers and industry participants told us that reinsurance premium rates had risen substantially since 1998, with the increases ranging from 50 to 100 percent. 
Other insurers told us that in order to keep their reinsurance premium rates down, they increased the dollar amount on any loss at which reinsurance would begin, essentially increasing the deductible. Thus, while reinsurance rates may not have increased, the amount of risk the medical malpractice insurers carry did. One insurer estimated that while its reinsurance rates had increased approximately 50 percent from 2000 to 2002, this increase had resulted in only a 2 to 3 percent increase in medical malpractice premium rates. All of the factors affecting premium rates and availability contribute to the length and amplitude of the medical malpractice insurance cycle. Like other property-casualty insurance markets, the medical malpractice market moves through cycles of “hard” and “soft” markets. Hard markets are generally characterized by rapidly rising premium rates, tightened underwriting standards, narrowed coverage, and often by the departure of some insurers from the market. In the medical malpractice market, some market observers have characterized the period from approximately 1998 to the present as a hard market. (Previous hard markets occurred during the mid-1970s and mid-1980s.) Soft markets are characterized by slowly rising premium rates, less stringent underwriting standards, expanded coverage, and strong competition among insurers. The medical malpractice market from 1990 to 1998 has been characterized as a soft market. According to a series of studies sponsored and published by NAIC in 1991, such cycles have been present in the property-casualty insurance market since at least 1926, and until the mid-1970s lasted for an average of approximately 6 years from the peak of one hard market to the next. However, the cycle that began at the peak of the hard market in 1975 lasted for around 10 years. The current cycle has lasted for around 17 years—since 1985—and it is not yet clear that the current hard market has peaked. 
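The insurer's estimate quoted above (a roughly 50 percent rise in reinsurance rates translating into only a 2 to 3 percent rise in malpractice premium rates) is consistent with reinsurance being a small share of the premium dollar. A rough sketch, where the 5 percent share is an illustrative assumption rather than a figure from this report:

```python
# Illustrative assumption: reinsurance costs make up 5% of each premium dollar.
reinsurance_share = 0.05
reinsurance_rate_increase = 0.50  # the ~50% rise one insurer reported for 2000-2002

# If the higher reinsurance cost is passed through to policyholders
# dollar for dollar, the effect on the overall premium is the product.
premium_rate_effect = reinsurance_share * reinsurance_rate_increase
print(f"{premium_rate_effect:.1%}")  # 2.5%, within the 2-3% range the insurer cited
```

The same arithmetic explains why even large swings in reinsurance costs move overall premium rates only modestly when reinsurance is a small component of insurers' expenses.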
The medical malpractice insurance market appears to roughly follow the same cycles as the overall property-casualty insurance market, but the cycles tend to be more volatile—that is, the swings are more extreme. We analyzed the swings in insurance cycles for the medical malpractice market and for the property-casualty insurance market as a whole using annual loss ratios based on incurred losses (see fig. 9). Our analysis showed that annual loss ratios for medical malpractice insurers tended to swing higher or lower than those for property-casualty insurers as a whole, reflecting more extreme changes in insurers’ expectations. Because premium rates are based largely on insurers’ expectations of losses, premium rates will fluctuate as well. The medical malpractice insurance market is more volatile than the property-casualty insurance market as a whole because of the length of time involved in resolving medical malpractice claims and the volatility of the claims themselves. Several years may pass before insurers know and understand the profits and losses associated with policies sold in a single year. As a result, insurers may not know the full effects of a change in an underlying factor, such as losses or return on investments, for several years. So while insurers in other markets that do not have protracted claims resolutions can adjust loss estimates and premium rates more quickly to account for a change in an underlying factor, medical malpractice insurers may not be able to make adjustments for several years. In the interim, medical malpractice insurers may unknowingly be under- or over-pricing their policies. When insurers do fully understand the effects of a change in an underlying factor, they may need to make large adjustments in loss estimates and premium rates. As a result, premium rates in the medical malpractice insurance market may move more sharply than premium rates in other lines of property-casualty insurance. 
For example, if insurers have been unknowingly overestimating their losses and overpricing their policies, as some insurers told us happened during the late 1980s, large liabilities build up to cover the losses. When the insurers realize their estimates have been too high, they must reduce those liabilities to reflect their losses accurately. Reducing liabilities also reduces incurred losses and therefore increases insurers’ income, allowing insurers to charge lower premium rates even in the face of increased losses and still maintain profitable operations—a point some insurers made about the 1990s. But when the liability account has been reduced sufficiently and income is no longer increasing as a result of this adjustment, insurers may need to raise premium rates to stay profitable. The competition that can exist during soft markets and periods of high investment income can further exacerbate swings in premium rates. As noted earlier, competition among insurers can put downward pressure on premium rates, even to the point at which the rates may, in hindsight, become inadequate to keep an insurer solvent. When the insurance market hardens, some insurers may leave the market, removing the downward pressure on premium rates and allowing insurers to raise premium rates to the level that would have existed without such competition. Because competition may have kept rates low, the resulting increase in premium rates that accompanies a transition to a hard market may be greater than it would have been otherwise. According to some industry experts, periods of high investment income can bolster the downward pressure that exists during soft markets. That is, high investment income can contribute to the increased profitability of an insurance market. This profitability can, in turn, cause insurers to compete for market share in order to take advantage of that profitability, thereby forcing premium rates even lower. 
In addition, according to these industry experts, high investment income allows insurers to keep premium rates low for long periods of time, even in the face of increasing losses, because investment income can be used to replace premium income, allowing insurers to meet expenses. But if interest rates drop at the same time the market hardens (and reduced interest rates can contribute to the movement to a hard market), insurers may have to increase premium rates much more in a shorter period of time than they would have if investment income had not allowed premium rates to remain lower to begin with. While the medical malpractice insurance market will likely move through more soft and hard markets in the future, predicting when such moves might occur or the extent of premium rate changes is virtually impossible. For example, the timing and extent of the unexpected changes in the losses that some researchers believe are responsible for hard markets are virtually impossible to predict. In addition, as we have seen, many factors affect premium rates, and it is just as difficult to predict the extent of any future changes these factors might undergo. While interest rates may be high during soft markets, it is not possible to predict how much higher they might be in the future and thus what effect they might have on premium rates. Predicting changes in losses on medical malpractice claims would be even harder, given the volatility of such losses. Further, some of the factors affecting premium rates, such as losses and competition, vary across states, and the effect of soft or hard markets on premium rates in one state could not be generalized to others. Finally, other conditions affecting premium rates have changed since earlier hard and soft markets, limiting our ability to make accurate comparisons between past and future market cycles. Similarly, agreement does not exist on whether or how insurance cycles could be moderated. 
The NAIC studies mentioned above noted that the most likely primary causes of insurance cycles—changes in interest rates and losses—were not subject to direct insurer or regulatory control. The studies also observed that underpricing by insurers during soft markets likely increases the severity of premium rate increases during the next hard market. But they did not agree on the question of using regulation to prevent such swings in premium rates. Such regulation could be difficult, for two reasons. First, because losses on medical malpractice claims are volatile and difficult to predict, regulators could have difficulty determining the appropriate level of premium rates to cover those losses. Second, restricting premium rate increases during hardening markets could hurt insurer solvency and cause some insurers to withdraw from a market with an already declining supply of insurance. The medical malpractice insurance market as a whole has changed considerably since the hard markets of the mid-1970s and mid-1980s. These changes have taken place over time and have been the result primarily of actions insurers, health care providers, and state regulators have taken to address rising premium rates. For example, insurers have moved from occurrence-based to claims-made policies, physicians have formed mutual nonprofit insurance companies that have come to dominate the market, hospitals and groups of hospitals or physicians have increasingly chosen to self-insure, and states have passed laws designed to slow the increase in medical malpractice premium rates. In order to more accurately predict losses and set premium rates, in the mid-1970s most medical malpractice insurers began to change the type of insurance policy they offered to physicians from occurrence-based to claims-made. 
As we have noted, claims-made policies cover claims reported during the year the policy is in effect, while occurrence-based policies cover claims arising out of events that occurred during the year in which the policy was in effect. Because claims-made policies cover only reported claims, insurers can better estimate the payouts they will have to make in the future. Occurrence-based policies do not provide such certainty, because they leave insurers liable for claims related to the incidents that occurred during a given year, including those not yet reported to the insurer. Claims-made policies can create difficulties for physicians needing or wanting to change insurers, however, because the physician rather than the insurer retains the risk of claims that have not yet been reported to the insurer. However, most companies today offer separate policies providing coverage for claims resulting from incidents that may have occurred but were not reported before the physician switched companies. The vast majority of policies in existence today are claims-made policies. In each of the seven states we studied, for example, the leading insurer’s policies were predominantly (if not exclusively) claims-made. This change in the type of policy sold means that any changes to premium rates during future hard or soft markets may differ from those in previous markets. Faced with a surge in the frequency and severity of claims, many of the for-profit insurers left the medical malpractice insurance market in the mid-1970s. At the time, medical malpractice insurance was only a small portion of most of the insurers’ overall business, so many companies chose simply to discontinue their medical malpractice lines. However, this market exodus led to a crisis of availability for physicians who wanted or needed professional liability insurance. 
In response to this unmet demand, physicians, often in connection with their state medical societies, joined together to form physician-owned insurance companies. Initially, physicians often needed to contribute capital in addition to their premiums so that the companies would meet state capitalization requirements. These new physician-owned insurance companies differed from existing commercial carriers in several ways. First, the physician-owned companies wrote predominantly claims-made policies, which, as previously discussed, allowed the insurers to more accurately predict losses and set premium rates. Second, in their initial years the new companies themselves enjoyed significant short-term cost savings over commercial companies. Most medical malpractice claims take several years to be resolved, and the policies offered by the physician-owned companies covered only future incidents of malpractice, so the companies had no existing claims that needed to be paid immediately. The commercial companies’ occurrence-based policies continued to provide coverage for malpractice that had occurred before the new physician-owned companies began offering policies. Thus the physician-owned companies would not incur the same level of obligations as the existing carriers for several years, allowing the physicians to pay an amount similar to the commercial premium and use much of that money as capital contributions to surplus. Physician-owned companies have several other advantages. To begin with, physician-owned companies have a cost advantage because they do not need to provide shareholders with profits. In addition, the physician-owned companies may have some underwriting advantages over the for-profit entities, such as an intimate knowledge of local doctors and hospitals and the legal customs and climate. 
Finally, several insurers told us that these physician-owned companies may have a different management philosophy than for-profit companies, one that places greater emphasis on risk management and thus lowers the incidence of claims. This philosophy may also extend to defending claims more aggressively than traditional insurers. Physician-owned and/or operated insurance companies have grown to dominate the medical malpractice insurance market, despite the fact that most of them have not had the same access to the traditional capital markets as for-profit insurers and therefore have had to build up their surplus through premiums and capital contributions. Although several physician-owned and/or operated insurance companies have expanded their geographic presence and lines of insurance in the last decade, most of these companies write insurance primarily in one state or a few states and usually sell only medical malpractice liability insurance. Further, many of the companies that had previously expanded have now retreated to their original area and insurance line. As a result of this continuing change in the composition of the medical malpractice insurance market, changes in premium rates in the next soft market may be different from previous markets, when commercial carriers dominated the market. Over the past several years, an increasing number of individual hospitals and consortia of hospitals and physicians have begun to self-insure in a variety of ways. Officials from the American Hospital Association estimated that 40 percent of its member hospitals are now self-insured. In states such as Florida that allow individual physicians to self-insure, individual health care providers are also insuring themselves. Other hospitals and groups of physicians are joining alternative risk-sharing mechanisms, such as risk retention groups or trusts. 
Although some hospitals and physicians have used these alternatives in the past, some industry experts we spoke to said that the increasing movement to such arrangements under the current market conditions indicates that some health care providers are having difficulty obtaining insurance in the traditional market. While these arrangements could save money on the administrative costs of insurance, they do not change the underlying costs of claims. Hospitals and physicians insured through these arrangements often assume greater financial responsibility for malpractice than they would under traditional insurance arrangements and thus face a potentially greater risk of insolvency. Although self-insured hospitals generally use excess loss insurance for claims that exceed a certain amount, the hospitals must pay the entire amount up to that threshold. Rather than a known number of smaller payments on an insurance policy, the hospitals risk an unknown number of potentially larger payments. And the threshold for excess loss insurance is rising in a number of states. In Nevada, for example, some hospitals’ excess loss insurance used to cover claim amounts in excess of $1 million but now covers amounts above $2 million, leaving self-insured hospitals with $1 million more exposure per claim. Self-insured physicians, who have no other coverage for large losses, risk their personal assets with every claim. Hospitals and physicians are not the only ones more at risk under these alternative arrangements. Claimants seeking compensation for their injuries may have more difficulty obtaining payments from some of these alternative entities and self-insured hospitals and physicians, for several reasons. First, these entities and the self-insured are subject only to limited public oversight, as state insurance departments do not regulate them. 
Further, these entities do not participate in the state-run safety nets that pay claims for insolvent insurance companies (state guaranty funds). Once such a risk-sharing consortium fails, claimants may have no other recourse but to try to enforce judgments against physicians personally. But enforcing a judgment against a physician personally is generally more difficult than obtaining payment under an insurance policy from a solvent insurance company. Data on these forms of insurance are sparse, so the extent to which physicians and hospitals are using such arrangements is difficult to measure. For example, NAIC and state insurance department data do not include information on self-insurance or on most alternative risk-sharing vehicles. In addition, one industry group has estimated that the information available from A.M. Best, a recognized industry data source, accounts for less than half the costs resulting from medical malpractice claims. Like the growth of physician-owned insurance companies, however, the growth of such forms of insurance since the previous soft market may affect the extent to which premium rates change in the next soft market. Since the medical malpractice crisis of the mid-1970s, all states have enacted some change in their laws in order to reduce upward pressure on medical malpractice premiums. Most of these changes are designed to reduce insurers’ losses by limiting the number of claims filed, the size of awards and settlements, and the time and costs associated with resolving claims. Other changes are designed to help health care providers by more directly controlling premium rates. Appendix II contains a more detailed explanation of some of the types of legal changes that some states have made, and appendix III contains more detail on the relevant laws in our seven sample states. Most of the state laws aimed at controlling premium rates attempt to reduce insurer losses related to medical malpractice claims. 
Many of these laws have similar provisions, the most controversial being the limitation, or cap, on subjective, nonmonetary losses such as pain and suffering (noneconomic damages). Several insurers and medical associations argue that such a cap will help control losses on medical malpractice claims and therefore moderate premium rate increases. But several trial lawyer and consumer rights associations argue that such caps will limit consumers’ ability to collect appropriate compensation for their injuries and may not reduce medical malpractice premium rates. A cap on noneconomic damages may decrease insurers’ losses on claims by limiting the overall amount paid out by insurance companies, especially since noneconomic damages can be a substantial portion of losses on some claims. Further, such a limit may also decrease the number of claims brought against health care providers. Plaintiffs’ attorneys are usually paid based on a percentage of what the claimant recovers, and according to some trial attorneys we spoke with, attorneys may be less likely to represent injured parties with minor economic damages if noneconomic damages are limited. Caps on noneconomic losses may have effects beyond reducing insurers’ costs. In theory, for example, after the frequency and severity of losses have been reduced, insurers will decrease premium rates as well. Insurers may also be better able to predict what they will have to pay out in noneconomic damages because they can more easily estimate potential losses, reducing the uncertainty that can give rise to premium rate increases. Insurers reported that economic damages (generally medical costs and lost wages) are more predictable than noneconomic damages, which are generally meant to compensate for pain and suffering and thus are very difficult to quantify. 
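The arithmetic of a cap on noneconomic damages can be sketched as follows. This is a hypothetical illustration only: the figures are invented, and the pro-rata sharing rule is just one of several ways statutes allocate a shared cap among plaintiffs.

```python
def capped_noneconomic_awards(requested, cap, per_occurrence=True):
    """Hypothetical sketch of applying a noneconomic-damages cap.

    per_occurrence=True:  all plaintiffs injured by one act of malpractice
                          share a single cap (split pro rata here).
    per_occurrence=False: each plaintiff may recover up to the cap.
    """
    if per_occurrence:
        total = sum(requested)
        scale = min(1.0, cap / total)
        return [amount * scale for amount in requested]
    return [min(amount, cap) for amount in requested]

# Two plaintiffs each seeking $200,000 in noneconomic damages, $250,000 cap:
print(capped_noneconomic_awards([200_000, 200_000], 250_000))         # shared cap
print(capped_noneconomic_awards([200_000, 200_000], 250_000, False))  # per plaintiff
```

The two modes produce very different total payouts for the same injuries, which is one reason the effect of a cap on insurers' losses depends heavily on how a particular state's statute is written.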
In addition to attempting to decrease losses on medical malpractice claims, two of our sample states have passed laws directly affecting premium rates and insurance regulations. In a 1988 referendum, California passed Proposition 103, which includes, among other things, a 20 percent rollback of prices for all property-casualty insurers (including medical malpractice insurers), a 1-year moratorium on premium rate increases, and a provision granting consumers the right to challenge any commercial insurance rate increases greater than 15 percent. In 1995, Texas passed legislation that required many insurance carriers, including medical malpractice insurers, to reduce rates to a level deemed by the Texas Department of Insurance to be acceptable, allowing for a reasonable profit. Texas passed the legislation in conjunction with changes to Texas’ tort system. The legislators wanted to avoid creating a windfall for insurers and believed that the companies would not lower premium rates on their own until the impact of the changes to the tort system could be actuarially determined. Interested parties debate the impact these various measures may have had on premium rates. However, a lack of comprehensive data on losses at the insurance company level makes measuring the precise impact of the measures impossible. As noted earlier, in the vast majority of cases, existing data do not categorize losses on claims as economic or noneconomic, so it is not possible to quantify the impact of a cap on noneconomic damages on insurers’ losses. Similarly, it is not possible to show exactly how much a cap would affect claim frequency or claims-handling costs. In addition, while most claims are settled and caps apply only to trial verdicts, some insurers and actuaries told us that limits on damages would still have an indirect impact on settlements by limiting potential damages should the claims go to trial. 
But given the limitations on measuring the impact of caps on trial verdicts, an indirect impact would be even more difficult to measure. Further, state laws differ dramatically, so comparing their impact is difficult. For example, limitations on damages can vary drastically in amount, type of damages covered, and how the limitations apply. Some states have caps of $250,000 on noneconomic damages, while other states have caps up to several times that amount. Moreover, some dollar limits change over time—for instance, because they are indexed to inflation—while others do not. Some states apply the cap to all damages, including economic damages, and some apply the cap “per occurrence” of malpractice. That is, the total amount collected by all parties injured by an act of medical malpractice cannot exceed the cap, regardless of how many physicians, hospitals, or other health care providers may be partially liable for the injuries. In contrast, for example, Nevada’s recently passed limitations on damages allow multiple plaintiffs to collect the full limit from any number of responsible defendants. The filing and resolution of medical malpractice claims is regulated, to a great extent, by states’ tort and insurance laws. Changes to such laws can thus have a great effect on both the frequency and severity of those claims, which in turn can affect premium rates. Because many states have made changes to these laws, it is difficult to predict the extent to which premium rates might change in future markets. Multiple factors have combined to increase medical malpractice premium rates over the past several years, but losses on medical malpractice claims appear to be the primary driver of increased premium rates in the long term. Such losses are by far the largest component of insurer costs, and in the long run, premium rates are set at a level designed to cover anticipated costs. 
However, the year-to-year increase in premium rates can vary substantially because of perceived future losses and a variety of other factors, including investment returns and reinsurance rates. Moreover, the market for medical malpractice insurance is not national, but depends on the varying framework of insurance, legal, and health care structures within each of the states. As a result, both the extent and the effects of changes in losses and other insurance-related factors on premium rates also vary by state. While losses aggregated for the industry as a whole have shown a relatively consistent upward trend over time, the loss experience of any single company is likely to vary from year to year and to increase more rapidly in some years than in others. At the same time, because of the long lag between collecting premium income and paying on claims, premium rates for the next year must be high enough to cover claims that will be reported that year, the majority of which will be paid over the next 3 to 5 years. And due to the volatility of the ultimate payouts on medical malpractice claims, it is difficult for insurers to predict the amount of those payouts with great certainty. As a result, changes in current losses can have large effects on perceived or estimated future losses and consequently on premium rates, because if insurers underestimate what will be needed to pay claims, they risk not only future profits but potentially their solvency. However, factors other than losses—such as changes in investment income or the competitive environment—can also affect premium rate decisions in the short run. These factors can either amplify or reduce the effect of losses on premium rates. For example, high expected returns on investment may legitimately permit insurers to price insurance below the expected cost of paying claims. 
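The effect of investment returns on pricing can be illustrated with a simple present-value sketch. The figures, the single payout date, and the zero-expense assumption are all illustrative, not drawn from this report.

```python
def breakeven_premium(expected_loss, annual_return, years_to_payout):
    """Premium that, if invested at annual_return, grows to exactly
    cover an expected_loss paid years_to_payout years later."""
    return expected_loss / (1 + annual_return) ** years_to_payout

# Hypothetical: a $100,000 expected loss paid 4 years after the premium
# is collected (within the 3-to-5-year payout lag described above).
low_return  = breakeven_premium(100_000, 0.03, 4)   # low investment returns
high_return = breakeven_premium(100_000, 0.10, 4)   # high investment returns
print(round(low_return), round(high_return))  # higher returns support a lower premium
```

Under these assumptions, the same expected loss can be funded by a noticeably smaller premium when returns are high, which is why a drop in investment income can force large premium increases even if losses themselves are unchanged.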
But incorrect projections of continuing high returns could cause insurers to continue to hold prices down for too long, even though underlying losses may be rising. When such factors affect most or all medical malpractice insurers, the result appears as a period of stable or falling premium rates or a period of sharply rising rates. When they alternate, these periods may describe the soft and hard phases of the medical malpractice insurance cycle. Based on available data, as well as our discussions with insurance industry participants, a variety of factors combined to explain the malpractice insurance cycle that produced several years of relatively stable premium rates in the 1990s followed by the severe premium rate increases of the past few years. To begin with, insurer losses anticipated in the late 1980s did not materialize as projected, so insurers went into the 1990s with reserves and premium rates that proved to be higher than the actual losses they would experience. At the same time, insurers began a decade of high investment returns. This emerging profitability encouraged insurers to expand their market share, as both the downward adjustment of loss reserves and high investment returns increased insurers’ income. As a result, insurers were generally able to keep premium rates flat or even reduce them, although the medical malpractice market as a whole continued to experience modestly increasing underlying losses throughout the decade. Finally, by the mid- to late 1990s, as excess reserves were exhausted and investment income fell below expectations, insurers’ profitability declined. Regulators found that some insurers were insolvent, with insufficient reserves and capital to pay future claims. In 2001, one of the two largest medical malpractice insurers, which sold insurance in almost every state, determined that medical malpractice was a line of insurance that was too unpredictable to be profitable over the long term. 
Alternatively, some companies decided that, at a minimum, they needed to reduce their size and consolidate their markets. These actions, taken together, reduced the availability of medical malpractice insurance, at least in some states, further exacerbating the insurance crisis. As a result of all of these factors, insurers continuing to sell medical malpractice insurance requested and received large rate increases in many states. It remains to be seen whether these increases will, as occurred in the 1980s, be found to have exceeded those necessary to pay for future claims losses, thus contributing to the beginning of the next insurance cycle. While this explanation accounts for observed events in the market for medical malpractice insurance, it does not provide answers to other important questions about that market, including an explanation of the causes of rising losses over time. The data currently collected do not permit many of the analyses that would provide answers to these questions. This lack of data is due, in part, to the nature of NAIC’s and states’ regulatory reporting requirements for all lines of insurance, which focus primarily on the information needed to evaluate a company’s solvency. Most insurance regulators do not collect the data that would allow analyses of the severity and frequency of medical malpractice claims for individual insurer operations within specific states. Moreover, insurers are generally not required to submit to NAIC or state regulators data that would show how insurers’ losses are divided between settlements and trial verdicts or between economic and noneconomic damages. Finally, the increasing use of insurance or self-insurance mechanisms that are not subject to state or NAIC reporting requirements further complicates a complete analysis. 
While more complete insurance data would help provide better answers to questions about how the medical malpractice insurance market is working, other data would be equally important for analyzing the underlying causes of rising malpractice losses and associated costs. These data relate to factors outside the insurance industry, such as policies, practices, and outcomes in both the medical and legal arenas. However, collecting and analyzing such data were beyond the scope of this report. Health care providers have suffered through three medical malpractice insurance “crises” in the past 30 years. Each instance has generated competing claims about the extent of the problem, the causes, and the possible solutions. In each instance, a lack of necessary data has hindered and continues to hinder the efforts of Congress, state regulators, and others to carefully analyze the problem and the effectiveness of the solutions that have been tried. Because of the potential for future crises, and in order to facilitate the evaluation of legislative remedies put in place by various levels of government, Congress may want to consider taking steps to ensure that additional and better data are collected. Specifically, Congress may want to consider encouraging NAIC and state insurance regulators to identify the types of data that are necessary to properly evaluate the medical malpractice insurance market—specifically, the frequency, severity, and causes of losses—and begin collecting these data in a form that would allow appropriate analysis. Included in this process would be an analysis of the costs and benefits of collecting such data, as well as the extent to which some segments of this market are not captured by current data-gathering efforts. Such data could serve the interests of state and federal governments and allow both to better understand the causes of recurring crises in the medical malpractice insurance market and formulate the most appropriate and effective solutions. 
NAIC’s Director of Research provided us with oral comments on a draft of this report. The Director generally agreed with the report’s findings, conclusions, and matter for congressional consideration. Specifically, the Director agreed that the medical malpractice markets are not national in nature and vary widely with regard to their insurance markets, regulatory framework, legal environment, and health care structures. Furthermore, the Director stated that the medical malpractice insurance industry has shown an upward trend in losses over time and that this rise can be attributed to a variety of causes that are difficult to measure or quantify. The Director also said that he does not believe that excess profits by insurers are in evidence. The Director told us that NAIC is working on a study of the medical malpractice marketplace that he hopes will be ready for distribution in the summer of 2003. The Director stated that NAIC, like GAO, had identified many data limitations that make the study of this line of insurance difficult. As a result, the Director generally agreed with our matter for congressional consideration that Congress consider encouraging NAIC and state regulators to identify and collect additional information that could be used to properly evaluate the medical malpractice insurance market. The Director stated that while such efforts would require some additional resources, the costs would not be prohibitive and the efforts would provide needed information. The Director also provided technical comments, which we have incorporated into the report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies of this report to the Chairmen of the Senate Committee on Governmental Affairs and its Subcommittee on Oversight of Government Management, the Federal Workforce, and the District of Columbia; the Chairman of the House Committee on the Judiciary; and the Chairman of the House Committee on Energy and Commerce. We will also send copies of this report to other interested congressional committees and members, and we will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me or Lawrence Cluff at (202) 512-8678. Additional contributors are acknowledged in appendix IV. Recognizing that the medical malpractice market can vary considerably across states, we judgmentally selected a sample of seven states in order to conduct a more in-depth review in each of those states. Except where otherwise noted, our analyses were limited to these states. We selected our sample so that we would have a mix of states based on the following characteristics: extent of recent increases in premium rates, status as an American Medical Association crisis state, presence of caps on noneconomic damages, state population, and aggregate loss ratio for medical malpractice insurers within the state. The states we selected were California, Florida, Minnesota, Mississippi, Nevada, Pennsylvania, and Texas. Within each state we spoke to one or both of the two largest and currently active sellers of medical malpractice insurance, the state insurance regulator, and the state association of trial attorneys. In six states, we spoke to the state medical association, and in five states, we spoke to the state hospital association. Due to time constraints, we did not speak to the medical or hospital associations in Texas or the hospital association in Florida. 
We used information obtained from these organizations to help answer each of our objectives and, as outlined below, also performed additional work for each objective. To examine the extent of increases in medical malpractice insurance rates for the largest insurers in our sample states, we reviewed annual survey data on medical malpractice premium rates collected by a private data collection company. While individual insurers determine whether to respond to the survey, we believe the data to be representative for the three medical specialties for which the company collects data—internal medicine, general surgery, and obstetrics/gynecology—because of both the number of insurers responding to the survey and the states represented by them. The premium rates collected in the survey are base rates, which do not reflect discounts or additional charges by insurers, so the actual premium rates charged by insurers can vary from the premium rates collected in the survey. We could not determine the extent to which the actual premium rates charged varied from the base rates, but among the insurers we spoke with, the actual premium rates charged in 2001 and 2002 ranged from about 50 to 100 percent of the base rates. We did not test the reliability of the survey data. To analyze the factors contributing to the premium rate increases in our sample states and other states, we examined data from state insurance regulators, the National Association of Insurance Commissioners (NAIC), A.M. Best, the Securities and Exchange Commission, and the Physician Insurers Association of America on insurers in our sample states as well as the medical malpractice insurance market as a whole. We did not verify the reliability of these data. Where possible, we obtained data from 1975 to the present. As noted earlier in this report, comprehensive, reliable data that would have allowed us to quantify the effect of individual factors on medical malpractice premium rates did not exist. 
We also reviewed relevant academic studies and industry guidance. In addition, we spoke with officials from the insurers and state insurance departments in our sample states, as well as professional actuarial and insurance organizations. To analyze factors that were likely to vary among states—losses on medical malpractice claims, reinsurance rates, and competition among insurers—we reviewed data for one or both of the two largest and currently active medical malpractice insurers in our sample states. We also reviewed aggregate data on losses for all insurers in each state as well as the U.S. medical malpractice insurance market as a whole. To analyze factors that were likely to be common among medical malpractice insurers in all states—investment income and the presence of an insurance cycle—we reviewed either A.M. Best data for the 15 largest medical malpractice insurers as of 2001 (whose combined market share nationally was approximately 64.3 percent), or NAIC data for all medical malpractice insurers reporting data to NAIC. Also as noted earlier in this report, data and scope limitations prevented us from fully analyzing the factors behind increased losses from medical malpractice claims. To analyze how the national medical malpractice insurance market has changed since previous periods of rising premium rates, we reviewed studies published by NAIC; analyzed insurance industry data compiled by NAIC and A.M. Best; reviewed tort laws across all states and state insurance regulations; spoke with insurers and state insurance regulators in our sample states; and spoke with officials from national professional actuarial, insurance, legal, consumer rights, medical, and hospital organizations. We conducted our work from July 2002 through June 2003 in accordance with generally accepted government auditing standards. Each state’s tort laws generally govern the way in which medical malpractice claims or lawsuits are resolved. 
As discussed in this report, most state laws aimed at controlling premium rates attempt to reduce insurer losses related to medical malpractice claims. Although these laws take many different forms, they usually have at least some of the provisions summarized in this appendix. State courts have dealt differently with these kinds of provisions, and courts in some states have found certain of them unconstitutional. The provisions summarized in this appendix are not the only ones that might impact the treatment of medical malpractice claims in states’ tort systems. Limits on Damages. Damages in medical malpractice cases usually consist of two categories, economic damages and noneconomic damages. (Although punitive damages can be available in cases of gross negligence and outrageous conduct of the health care provider, juries rarely award punitive damages in medical malpractice cases.) Economic damages generally consist of past and future monetary damages, such as lost wages or medical expenses. Noneconomic damages generally consist of past and future subjective, non-monetary loss, including pain, suffering, marital losses, and anguish. Although some states have limits on the total amount of damages recoverable in a medical malpractice suit, most states with limits, as well as pending federal legislation, have emphasized a limit only on noneconomic damages. As discussed in this report, limitations on damages can vary drastically in amount, type of damages covered, and application. As mentioned in this report, limitations on damages can impact the frequency of lawsuits as well. Plaintiffs’ attorneys are usually paid based on a percentage of what the claimant recovers, and according to some trial attorneys we spoke to, attorneys may be less likely to represent an injured party with minor economic damages if noneconomic damages are limited. 
One consumer rights group told us that suits with limited economic damages are typical in cases where the plaintiff is not working and does not have substantial costs of future medical care. Evidence of Collateral Source Payments. At common law, or without any legislative intervention, a plaintiff would be able to recover all damages sustained from a liable defendant, even if the plaintiff were going to receive money from other sources, called “collateral sources,” like health insurance policies or Social Security. Some states have modified this common law rule with statutes that allow defendants to show that the claimant is going to receive funds from collateral sources that will compensate the claimant for damages he or she is attempting to collect from the defendant. These statutes authorize, to various extents, decreasing the defendant’s liability by the amount the claimant will receive from other sources. In the state summaries in appendix III, if a state has not modified the common law rule regarding collateral sources, the chart will say “no modification.” Joint and Several Liability. Joint and several liability is the common law rule that a plaintiff can collect the entire judgment from any liable defendant, regardless of how much of the harm that defendant’s actions caused. Some states have eliminated joint and several liability, making each defendant responsible for only the amount or share of damage he or she caused the plaintiff. Other states have eliminated joint and several liability only for noneconomic damages. Some states have eliminated joint and several liability for defendants responsible for less than a specified percentage of the plaintiff’s harm; for example, if a defendant is less than 50 percent responsible, that defendant might need to pay only for that percentage of the plaintiff’s damages. Attorney Contingency Fees. Most plaintiff attorneys are paid on a contingency fee basis. 
A contingency fee is one in which the lawyer, instead of charging an hourly fee for services, agrees to accept a percentage of the recovery if the plaintiff wins or settles. Some states have laws that limit attorney contingency fees. For example, in California a plaintiff’s attorney can collect up to 40 percent of the first $50,000 recovered, 33 percent of the next $50,000 recovered, 25 percent of the next $500,000 recovered, and 15 percent of any amount exceeding $600,000. Provisions that decrease attorneys’ financial incentives to accept cases could decrease the number of attorneys willing to take the cases. These limits were based on the belief that they would lead to more selective screening by plaintiffs’ attorneys to ensure that the claims filed had merit. In the state summaries in appendix III, if a state does not have limits in place specifically for attorneys in medical malpractice cases, the chart will say “no modification.” Statute of Limitations. The amount of time a plaintiff has to file a claim is known as the “statute of limitations.” Some states have reduced their statutes of limitations on medical malpractice claims. This decrease could limit the number of cases filed by claimants. Special time requirements for minors are not noted on the summaries in appendix III. Periodic Payment of Damages. Defendants traditionally pay damages in a lump sum, even if they are being collected for future time periods, such as future medical care or future lost wages. However, some states allow or require certain damages to be paid over time, such as over the life of the injured party or period of disability, either through the purchase of an annuity or through self-funding by institutional defendants. 
Some insurers we spoke with said that purchasing annuities can reduce insurers’ costs, and that periodic payments better match damage payments to future medical costs and lost earnings incurred by injured parties, assuring that money will be available to the injured party in the future. A consumer rights group we spoke with told us that, because periodic payments stop at the death of an injured party, there may be unsatisfied medical bills at the time of the injured party’s death. Expert Certification. Many states require that medical experts certify in one way or another the validity of the claimant’s case. These statutes are designed in part to keep cases without merit, also known as frivolous cases, out of court. Expert certification requirements also have the potential to get as many relevant facts out in the open as early as possible, so that settlement discussions are fruitful and it becomes unnecessary to take as many cases to trial, thus decreasing the claims-handling costs of the case. Arbitration. Some states have enacted arbitration statutes that address medical malpractice claims specifically. Some of these statutes require that the arbitration agreement meet standards designed to alert the patient that he or she is waiving a jury trial, such as requiring a specific font size or precise wording in the agreement. Although most courts have held that medical malpractice claims can properly be submitted to arbitration, litigation involving the arbitration statutes has involved issues such as whether the patient knew he was waiving the right to a jury trial, whether the patient who agreed to arbitration had appropriate bargaining strength, and whether third parties have authority to bind others to arbitration. By providing an option for arbitration, parties can avoid the larger expense of taking claims to court. 
However, some industry experts said that these arbitration provisions may not be binding and may result in the losing party deciding to take the case to court in any event, so arbitration can simply increase expenses without affecting the ultimate resolution of the dispute. Advanced Notice of Claim. Advanced-notice-of-claim provisions require claimants to notify defendants some period of time, 90 days for example, before filing suit in court. Some insurers and plaintiffs’ attorneys we spoke with said that this requirement aids plaintiffs and defendants in resolving meritorious claims outside of the court system and allows plaintiffs’ attorneys to obtain relevant records to determine whether a case has merit. However, another group we spoke to said that the advanced notice of claim provision in that group’s state was ineffective. Bad Faith Claims. As mentioned in this report, some insurers we spoke with told us that they can be liable for amounts beyond an insurance policy’s limits if the policyholder requests the insurer to settle with the plaintiff for an amount equal to or less than the policy limit, and the insurer takes the case to trial, loses, and a judgment is entered in an amount greater than the policy limits. Industry experts we spoke to said that, under those circumstances, the insurer could be liable for acting in “bad faith.” In some states, like Nevada, this bad faith claim can be brought only by the insured physician; that is, the physician can seek payment from the insurance company if the physician has paid a plaintiff beyond a policy’s limits. In contrast, in Florida, the plaintiff can sue a physician’s insurer directly for the insurer’s alleged improper conduct in medical malpractice cases. The difficulty of establishing that an insurer acted in bad faith varies according to state law. Insurers in three of our study states—Texas, California, and Florida—said that bad faith litigation was a substantial issue in their states. 
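The sliding-scale attorney contingency fee limits described above amount to a simple tiered calculation. The sketch below uses the California schedule quoted earlier in this appendix (40 percent of the first $50,000, 33 percent of the next $50,000, 25 percent of the next $500,000, and 15 percent above $600,000); the tier values come from the text, but the function name and structure are our own illustration, not drawn from any statute or from this report.

```python
# Illustrative sketch of a sliding-scale contingency fee limit, using the
# California schedule described in the text. Tier values are from the
# report; the code structure is our own.

FEE_TIERS = [
    (50_000, 0.40),   # 40% of the first $50,000 recovered
    (50_000, 0.33),   # 33% of the next $50,000
    (500_000, 0.25),  # 25% of the next $500,000
]
TOP_RATE = 0.15       # 15% of any amount exceeding $600,000


def max_contingency_fee(recovery):
    """Maximum fee an attorney could collect on a given recovery."""
    fee = 0.0
    remaining = recovery
    for tier_amount, rate in FEE_TIERS:
        portion = min(remaining, tier_amount)
        fee += portion * rate
        remaining -= portion
        if remaining <= 0:
            return fee
    # Everything above the tiers ($600,000) is charged at the top rate.
    return fee + remaining * TOP_RATE
```

On a $1 million recovery, for example, the maximum fee under this schedule would be $20,000 + $16,500 + $125,000 + $60,000, or $221,500, rather than a flat percentage of the whole amount.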
This appendix describes the specific medical malpractice insurance environment in each of the seven sample states we evaluated for this report. (See figs. 10-16.) Typical Coverage Type and Limit. This section summarizes the type of medical malpractice insurance coverage typically issued in the state, as well as the standard coverage limits of these policies. Coverage limits can range from $100,000/$300,000 up to $2 million/$6 million. The lower number is the amount the insurer will pay per claim, and the higher number is the total the insurer will pay in aggregate for all claims during a policy period. There are several types of insurance coverage available. Occurrence-based insurance provides coverage for claims that arise from incidents that occur during the time the insurance policy is in force, even if the policy is not continued. Claims that arise from incidents occurring during the policy period but are reported after the policy’s cancellation date are still covered in the future. Claims-made insurance provides coverage for claims that arise from incidents that occur and are reported during the time the insurance policy is in force. Prior acts coverage is a supplement to a claims-made policy that can be purchased from a new carrier when changing carriers. Prior acts coverage covers incidents that occurred prior to the switch to a new carrier but had not been previously reported. Tail coverage is an option available from a former carrier to continue coverage for those dates that the claims-made coverage was in effect. Regional Differences. This section notes any major regional differences in premium rates quoted by insurers within the state, using the base rate for general surgery as a comparison. The Medical Liability Monitor annually surveys providers of medical malpractice insurance to obtain their premium base rates for three specialties: internal medicine, obstetrics/gynecology, and general surgery. 
In the state summaries, descriptions of regional differences in premium rates are based on Medical Liability Monitor information. Frequency and Severity. This section describes the extent to which insurers and state regulators we spoke with believe frequency and severity are changing in each state. Frequency is usually defined as the number of claims per number of doctors, weighting doctors in different specialties more or less heavily depending on the risk associated with each specialty. Severity is the average loss to the insurer per claim. Insurer Characteristics. This section describes the various types of insurers present in each of the states. In addition to traditional commercial insurance companies, the following entities or arrangements can provide liability protection: Physician insurer associations or physician mutuals are physician-owned and -operated insurance companies that provide medical liability insurance. Reciprocals are similar to mutuals, except that an attorney-in-fact often manages the reciprocal. Risk retention groups are insurance companies owned by their policyholders and organized under federal law, the Liability Risk Retention Act of 1986. Trusts are a form of self-insurance and consist of segregated accounts of health care entities that estimate liabilities and set aside funds to cover them. Market Share. This section describes the medical malpractice market in each of the states. Recent changes in the market are also noted in this section. Joint Underwriting Association (JUA). This section details whether a state has created a JUA and the extent of its use. A JUA is a state-sponsored association of insurance companies formed with statutory approval from the state for the express purpose of providing certain insurance to the public. Rate Regulation. This section describes the regulatory scheme employed by each state. 
Statutory requirements generally provide that insurance rates be adequate, not excessive, and not unfairly discriminatory. The degree of regulation of medical malpractice insurance rates varies from state to state. States may have “prior approval” requirements in which all rates must be filed with the insurance department before use and must be either approved or disapproved by the department of insurance. Other states have “file and use” provisions in which the insurers must file their rates with the state’s insurance department; however, the rates may be used without the department’s prior approval. Tort Reform. This section identifies key components of each state’s efforts to address the medical malpractice insurance situation by targeting ways in which medical malpractice claims are processed through the court system. The legal provisions summarized for each state are described in appendix II, along with other provisions that are not summarized herein but that might impact medical malpractice claims. For the information on state provisions in appendix III, we relied upon a summary of state tort laws compiled by the National Conference of State Legislatures (NCSL) in October 2002. We independently reviewed selected sections of the NCSL summary for accuracy and supplemented the NCSL information with information from interviews with industry officials. The state laws summarized herein might have changed since the date of the NCSL publication. Additionally, as noted in appendix II, the state tort laws summarized in this appendix are not the only ones that might impact the treatment of medical malpractice claims in states’ tort systems. In addition to those individuals named above, Patrick Ward, Melvin Thomas, Andrew Nelson, Heather Holsinger, Rudy Chatlos, Raymond Wessmiller, Rachel DeMarcus, and Emily Chalmers made key contributions to this report. 
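The per-claim and aggregate coverage limits described at the start of appendix III (a $1 million/$3 million policy, for example) work as a simple capping rule: the insurer pays each claim up to the per-claim limit, and total payments during the policy period cannot exceed the aggregate limit. The following is a minimal sketch of that rule; the function name and the claim amounts in the usage note are hypothetical illustrations, not figures from this report.

```python
# Illustrative sketch of per-claim/aggregate policy limits as described
# in appendix III. Function name and structure are our own.

def insurer_payouts(claims, per_claim_limit, aggregate_limit):
    """Amount the insurer pays on each claim, in order of presentation."""
    payouts = []
    remaining = aggregate_limit  # aggregate capacity left in the policy period
    for claim in claims:
        # Pay the claim, capped by the per-claim limit and by whatever
        # remains of the aggregate limit.
        paid = min(claim, per_claim_limit, remaining)
        payouts.append(paid)
        remaining -= paid
    return payouts
```

Under a hypothetical $1 million/$3 million policy, claims of $1.5 million, $800,000, $2 million, and $400,000 would be paid as $1 million, $800,000, $1 million, and $200,000: the first and third claims are capped per claim, and the last is cut short once the aggregate limit is exhausted.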
Medical Malpractice: Effects of Varying Laws in the District of Columbia, Maryland, and Virginia. GAO/HEHS-00-5. Washington, D.C.: October 15, 1999.
Medical Malpractice: Federal Tort Claims Act Coverage Could Reduce Health Centers' Costs. GAO/HEHS-97-57. Washington, D.C.: April 14, 1997.
Medical Liability: Impact on Hospital and Physician Costs Extends Beyond Insurance. GAO/AIMD-95-169. Washington, D.C.: September 29, 1995.
Medical Malpractice Insurance Options. GAO/HEHS-94-105R. Washington, D.C.: February 28, 1994.
Medical Malpractice: Maine's Use of Practice Guidelines to Reduce Costs. GAO/HRD-94-8. Washington, D.C.: October 25, 1993.
Medical Malpractice: Estimated Savings and Costs of Federal Insurance at Health Centers. GAO/HRD-93-130. Washington, D.C.: September 24, 1993.
Medical Malpractice: Medicare/Medicaid Beneficiaries Account for a Relatively Small Percentage of Malpractice Losses. GAO/HRD-93-126. Washington, D.C.: August 11, 1993.
Medical Malpractice: Experience with Efforts to Address Problems. GAO/T-HRD-93-24. Washington, D.C.: May 20, 1993.
Practitioner Data Bank: Information on Small Medical Malpractice Payments. GAO/IMTEC-92-56. Washington, D.C.: July 7, 1992.
Medical Malpractice: Alternatives to Litigation. GAO/HRD-92-28. Washington, D.C.: January 10, 1992.
Medical Malpractice: Data on Claims Needed to Evaluate Health Centers' Insurance Alternatives. GAO/HRD-91-98. Washington, D.C.: May 2, 1991.
Medical Malpractice: A Continuing Problem With Far-Reaching Implications. GAO/T-HRD-90-24. Washington, D.C.: April 26, 1990.
Over the past several years, large increases in medical malpractice insurance premium rates have raised concerns that physicians will no longer be able to afford malpractice insurance and will be forced to curtail or discontinue providing certain services. Additionally, a lack of profitability has led some large insurers to stop selling medical malpractice insurance, furthering concerns that physicians will not be able to obtain coverage. To help Congress better understand the reasons behind the rate increases, GAO undertook a study to (1) describe the extent of the increases in medical malpractice insurance rates, (2) analyze the factors that contributed to those increases, and (3) identify changes in the medical malpractice insurance market that might make this period of rising premium rates different from previous such periods. Since 1999, medical malpractice premium rates have increased dramatically for physicians in some specialties in a number of states. However, among larger insurers in the seven states GAO analyzed, both the premium rates and the extent to which these rates have increased varied greatly. Multiple factors, including falling investment income and rising reinsurance costs, have contributed to recent increases in premium rates in our sample states. However, GAO found that losses on medical malpractice claims--which make up the largest part of insurers' costs--appear to be the primary driver of rate increases in the long run. And while losses for the entire industry have shown a persistent upward trend, insurers' loss experiences have varied dramatically across our sample states, resulting in wide variations in premium rates. In addition, factors other than losses can affect premium rates in the short run, exacerbating cycles within the medical malpractice market. 
For example, high investment income or adjustments to account for lower than expected losses may legitimately permit insurers to price insurance below the expected cost of paying claims. However, because of the long lag between collecting premiums and paying claims, underlying losses may be increasing while insurers are holding premium rates down, requiring large premium rate hikes when the increasing trend in losses is recognized. While these factors may explain some events in the medical malpractice market, GAO could not fully analyze the composition and causes of losses at the insurer level owing to a lack of comprehensive data. GAO's analysis also showed that the medical malpractice market has changed considerably since previous hard markets. Physician-owned and/or operated insurers now cover around 60 percent of the market, self-insurance has become more widespread, and states have passed laws designed to reduce premium rates. As a result, it is not clear how premium rates might behave during future soft or hard markets.
This section provides information on the four primary stages of hardrock mining operations, the organizational structure of BLM and the Forest Service, and the agencies’ five-step process for reviewing mine plans. Hardrock mining operations consist of four primary stages—exploration, development, production, and reclamation. Some of these stages can take place simultaneously, depending on the characteristics of the operation. Exploration involves staking a mining claim, prospecting, and other steps, such as drilling, to locate and define the extent and value of mineral deposits. The development stage entails completing the mine plan approval process by investigating how mining will impact the environment, determining how to mitigate the risks associated with mineral extraction, and obtaining permits and authorizations associated with the entire life cycle of the mine from federal, state, local, and regulatory entities. After obtaining permits and authorizations, the mine operator constructs the mine infrastructure, such as the buildings, roads, and other facilities needed for production. The production stage generally entails drilling, blasting, and hauling ore from mining areas to processing areas. During production, operators crush or grind the ore and apply chemical treatments to extract the minerals of value. The material left after the minerals are extracted—waste rock or tailings (a combination of fluid and rock particles)—is then disposed of, often in a nearby pile or tailings pond. In addition, some operators use a leaching process to recover microscopic hardrock minerals from heaps of crushed ore by percolating solvent (such as cyanide for gold and sulfuric acid for copper) through the heap of ore. Through this heap-leaching process, the minerals dissolve into the solvent as it runs through the leach heap and into a collection pond. 
The mineral-laced solution is then taken from the collection pond to the processing facility, where the valuable minerals are separated from the solution for further refinement. Reclamation activities can include reshaping and revegetating disturbed areas; measures to control erosion; and measures to isolate, remove, or control toxic materials. BLM manages and oversees hardrock mining on public land through its headquarters office, 12 state offices, 49 district offices, and 126 field offices. Within headquarters, the Energy, Minerals, and Realty Management Directorate is responsible for administering the mining laws and establishing hardrock mining operations policies. The state offices manage BLM programs and land in geographic areas that generally conform to the boundaries of one or more states. Each state office is headed by a state director who reports to the Director of BLM in headquarters and oversees the implementation of the hardrock mining program by the district and field offices. The district and field offices are responsible for the day-to-day implementation of the hardrock mining program, including reviewing proposed mine plans and inspecting approved mine operations to ensure they comply with laws and regulations. Figure 1 shows BLM-managed land and the location of BLM’s headquarters and state offices. The Forest Service oversees hardrock mining operations on the lands it manages through its headquarters office, 9 geographic regions, 174 national forests and grasslands, and more than 600 ranger districts. Within its headquarters office, the Director of Minerals and Geology Management advises the Chief of the Forest Service on issues related to the extraction of minerals from Forest Service-managed lands and conducts reviews of the regions’ mineral extraction programs, including their hardrock mineral programs. The Director of Minerals and Geology Management also manages a program known as the Locatable Mineral Administrators program. 
This program is designed to ensure that the Forest Service employees in forest and ranger district offices who are responsible for the day-to-day implementation of the hardrock mineral program have sufficient training and expertise to administer the hardrock minerals program consistently and effectively. Under this program, Forest Service employees are to demonstrate an understanding of hardrock mining laws, regulations, and processes to become a certified Locatable Minerals Administrator. Furthermore, only employees who have been certified through this program may implement the hardrock minerals program, for example, by reviewing proposed mine plans and inspecting approved mines to ensure they comply with applicable laws and regulations. Figure 2 shows the location of Forest Service-managed lands and the Forest Service headquarters and regions. BLM and the Forest Service generally follow similar five-step processes for reviewing hardrock mine plans: (1) reviewing the completeness of the proposed plan; (2) conducting an analysis under NEPA of potential impacts to the environment, human health, and cultural and historical resources; (3) approving the mine plan; (4) establishing a reclamation bond; and (5) authorizing mine operations. Under each agency’s regulations, before BLM and the Forest Service can perform a substantive evaluation of a mine plan, they must first determine whether the mine plan is complete and contains the information specified in the regulations. To do this, the agencies review the mine plan to determine whether it meets regulatory requirements, which call for information on the operator, the proposed mine site, the proposed mine operations, and a description of the existing and proposed means of accessing the mine, among other things. BLM and the Forest Service analyze the potential impact of the proposed mine on the environment, human health, and cultural resources by conducting an analysis under NEPA. 
In particular, under NEPA agencies must prepare either an EA or an EIS depending on whether the proposed mine operations are expected to have a potentially significant environmental impact. The agencies are to prepare an EA to determine whether the proposed project is expected to have a potentially significant environmental impact. According to regulations implementing NEPA, an EA is intended to be a concise public document that, among other things, provides sufficient evidence and analysis for determining whether to prepare an EIS or a finding of no significant impact. It is to include brief discussions of the need for the project, alternatives, the environmental impacts of the proposed project and alternatives, and a listing of individuals and agencies consulted. The agencies are to prepare an EIS if they determine the proposed project may have significant environmental impacts. An EIS is more detailed than an EA, and NEPA regulations specify that the agency must request comments from the public on the draft EIS. An EIS must, among other things, (1) describe the environment that will be affected, (2) identify alternatives to the proposed project and identify the agency’s preferred alternative, and (3) present the environmental impacts of the proposed project and alternatives. According to BLM and Forest Service officials, while the agencies occasionally develop and produce NEPA documents, they also rely on contractors to complete the EA or EIS. Per NEPA regulations, the agencies are responsible for the content and scope of the NEPA document. For EISs, BLM regulations state that the operator must pay for BLM’s internal costs to process a mine plan that requires the preparation of an EIS. These regulations do not require operators to pay for the review of a mine plan that only requires the preparation of an EA. The Forest Service’s regulations do not require operators to pay for a review of a mine plan. 
Instead, the costs associated with conducting the mine plan review must be covered by the Forest Service, unless the mine operator voluntarily chooses to cover them. After completing the environmental review, the agency issues a decision on the mine plan. The decision document indicates whether the plan is approved as submitted, approved subject to changes or conditions, or disapproved. However, BLM and Forest Service officials told us that operators generally agree to the agencies’ changes to the mine plan that are required to meet all applicable laws and regulations. Consequently, these officials said that they were unaware of an instance where an agency had disapproved a mine plan based on the results of an environmental analysis. Before a plan may be approved, agency policies require the operator to estimate the costs associated with reclaiming the mine site once the operations have ceased. The operator typically cannot estimate this cost until the mine plan review has sufficiently progressed to determine the size and scope of the mining operations. Once the operator provides the estimate, the agency determines whether it is adequate to fully cover anticipated reclamation costs. If the agency determines that the estimate is not adequate, it directs the operator to furnish a new one. After the reclamation cost estimate is approved, the operator must furnish the bond prior to commencing operations. Once the agency has approved the mine plan and the operator has furnished the bond, the agency authorizes operations under its jurisdiction. However, an operator may need to obtain additional permits or authorizations from other federal, state, and local regulatory entities in order to actually begin operations. For example, operators may need to obtain a permit under Section 404 of the Clean Water Act from the U.S. Army Corps of Engineers for the discharge of dredged or fill material, such as soil from mine excavations, into certain waters. 
From fiscal years 2010 through 2014, BLM approved 66 mine plans, and the Forest Service approved 2 mine plans for hardrock mines that varied by mineral type, mine size, and location. The length of time it took for the agencies to reach the third step of the five-step mine plan review process—the step in which the mine plan is approved—ranged from about 1 month to over 11 years and averaged approximately 2 years. Nineteen percent (13 of 68) of the approved mines were not operating as of November 2015 due to various factors. BLM’s and the Forest Service’s tracking of the mine plan review process is hindered by limitations with their data systems; as a result, BLM does not have adequate information, and the Forest Service does not have complete information, necessary to track the length of time to complete the mine plan review process. From fiscal years 2010 through 2014, BLM approved 66 plans for hardrock mines of various commodity types, sizes, and locations, and the Forest Service approved 2. Commodity types. Most of the mine plans that BLM and the Forest Service received and approved were for gold, clay, and stone, according to agency data; collectively, these commodities accounted for 46 of the 68 total mine plans (68 percent) approved from fiscal years 2010 through 2014 (see table 1). Mine size. The sizes of the mines proposed in these 68 plans varied greatly, ranging from 5 to 8,470 acres. The average proposed mine was approximately 529 acres, and the 68 mine plans totaled nearly 36,000 acres. Figure 3 shows the total mine acreage by state. Mine location. All of the mine plans were located in 12 western states—Alaska, Arizona, California, Colorado, Idaho, Montana, New Mexico, Nevada, Oregon, Utah, Washington, and Wyoming. Nearly half were located in Nevada or Wyoming—with 11 and 21 mine plans, respectively. Washington had the fewest—with 1 proposed mine (see fig. 3). 
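The figures above can be cross-checked arithmetically. As an illustrative sketch (using only the numbers stated in this section, with Python standing in as a scratch calculator), the reported percentages and acreage totals are internally consistent:

```python
# Illustrative arithmetic check of the figures reported for the 68 approved
# hardrock mine plans (all inputs are numbers stated in the text).
total_plans = 68
gold_clay_stone = 46       # plans for gold, clay, and stone
not_operating = 13         # approved mines not operating as of November 2015
nevada, wyoming = 11, 21   # mine plans located in Nevada and Wyoming
avg_acres = 529            # approximate average proposed mine size, in acres

commodity_share = round(gold_clay_stone / total_plans * 100)  # 68 percent
idle_share = round(not_operating / total_plans * 100)         # 19 percent
nv_wy_share = (nevada + wyoming) / total_plans                # ~0.47, "nearly half"
total_acres = avg_acres * total_plans                         # 35,972, "nearly 36,000"

assert commodity_share == 68
assert idle_share == 19
assert 0.45 < nv_wy_share < 0.50
assert abs(total_acres - 36_000) < 100
print(commodity_share, idle_share, round(nv_wy_share, 2), total_acres)
```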
The average length of time it took BLM and the Forest Service to complete the first three steps of the mine plan review process and approve the 68 mine plans from fiscal years 2010 through 2014 was approximately 2 years. However, the time varied widely among the 68 mine plans we reviewed, ranging from about 1 month to over 11 years. For example, 1 mine plan took less than 1 month for Forest Service officials in Washington to review and approve. A Forest Service official told us this was, in part, because the mine was located in an area with existing mining operations, and the Forest Service determined that there was no need to conduct additional NEPA analyses. In contrast, another mine plan in Idaho took over 11 years for BLM to review and approve, primarily because of disagreement with the operator over what needed to be included in the mine plan, according to BLM officials. Figure 4 shows the time frames for approving these 68 mine plans. Of the 68 mine plans that BLM and the Forest Service approved over this period, 13 (19 percent) had not begun operations as of November 2015, according to the agencies’ data. For 4 of these 13 mines, the operator had not completed the fourth step of the mine plan review process—establishment of a reclamation bond, which entails furnishing bonds sufficient to fully cover estimated reclamation costs. According to BLM officials, acquiring such bonds can be difficult for some operators, particularly operators with limited financial resources. For the remaining 9 of the 13 mine plans where operations had not begun as of November 2015, the operator had completed all five steps of the mine plan review process. However, BLM and Forest Service officials said that various factors may explain why these mines have not begun operating. BLM officials noted that, in some instances, an operator may complete the mine plan review process but have difficulties finding investors or securing capital to fund the construction of the mine. 
In addition, BLM and Forest Service officials stated that mine operators may have met all BLM and Forest Service requirements but may be working to obtain additional permits or approvals from other federal, state, and local entities. For example, mine operators may need to obtain air and water quality permits, business licenses, and utility approvals, among other requirements. Based on a review of NEPA documents, state permitting guides, and studies of hardrock mining requirements, we identified six categories of federal permits and authorizations that mine operators may need to obtain from entities other than BLM and the Forest Service, as well as seven categories of state and local permits and authorizations across 12 western states that may be required depending on the nature of the mining operations, as shown in tables 2 and 3. BLM field offices and Forest Service ranger district offices maintain records on the mine plans they review, centrally track some data on the time frames related to the mine plan review process in their automated information systems, and use these data in agency reports. For example, BLM tracks the length of time required to complete the mine plan review process and reports this information in its annual budget justification. Similarly, the Forest Service tracks and reports in its annual budget justification the number of mineral permits processed in a year, which combines all types of minerals, including hardrock minerals as well as nonhardrock minerals, such as coal, oil, and gas. However, limitations with the data in the systems that BLM and the Forest Service use to compile these reports hinder the agencies’ ability to track the mine plan review process. 
Specifically, BLM’s LR2000 system was not designed to distinguish between different types of mine plans and cannot adequately track newly proposed mine plans and mine expansions separately from other mine program activities, such as processing requests for mine plan modifications and large-scale exploration permits. In particular, the system does not contain separate codes through which different types of mine program activities could be identified. New mine plans and mine expansions are generally more complex and time-consuming to review than mine plan modifications and mine plans for exploration, according to an agency official. Distinguishing between the length of time needed to review new mine plans and mine expansions versus mine plan modifications and large-scale exploration permits would entail making minor modifications to the LR2000 system, which BLM officials said is feasible. The Forest Service’s Locatable Mineral database has codes to separately track new mine plans from other types of activities; however, because this system was initially designed as an optional tool, the Forest Service did not originally require its staff to use it. As a result, when we compared data from the Locatable Mineral database against data provided by Forest Service officials, we found the database was incomplete and did not contain records for all mine plans the Forest Service reviewed. We also found that the data that were available were often missing key information, such as dates for completing certain milestones in the mine plan review process. In recognition of these types of problems, the Forest Service issued a memorandum in February 2014 requiring its staff to use the Locatable Minerals database, according to Forest Service officials. In April 2015, Forest Service officials noted that gaps in the data remained and reiterated the need to correct these gaps in another memorandum to Forest Service staff. 
As of November 2015, Forest Service officials told us that they were correcting and updating incomplete information in the database. Federal standards for internal control state that control activities, such as properly recording information that would be relevant and valuable to management, are an integral part of an entity’s planning, implementing, reviewing, and accountability for stewardship of government resources and achieving effective results. Without modifying the system to provide adequate information necessary to track the time frames for completing the mine plan review process, BLM is limited in its ability to effectively facilitate the extraction of minerals from federal land and manage the mine plan review process. BLM and Forest Service officials we interviewed in the 23 offices (19 BLM and 4 Forest Service) we selected for our review said they have experienced one or more of the 13 key challenges we identified that affected the length of time to review the hardrock mine plans approved from fiscal years 2010 through 2014 (see table 4). BLM and Forest Service officials said they have taken actions to address the two most frequently cited key challenges—the low quality of information operators provided in their mine plans and the agencies’ limited allocation of resources for their hardrock mining programs—but the agencies could do more. BLM and the Forest Service have taken some steps to address some of the remaining 11 key challenges, while others are not necessarily within the agencies’ control to affect. BLM and the Forest Service have taken some actions to address the two most frequently cited challenges—the low quality of mine plans and limited allocation of resources—but could take additional actions regarding these challenges. 
Specifically, to address the low quality of mine plans, some BLM and Forest Service officials are holding meetings with operators before the operators begin developing their mine plans, but the agencies could do more to encourage better-quality plans. To address the limited allocation of resources, BLM and Forest Service officials are leveraging existing resources, but the agencies could more fully use their authorities to collect fees and possibly expedite the time it takes to review hardrock mine plans. Of the 13 key challenges BLM and Forest Service officials said they experienced, they cited the low quality of information operators provided in their mine plans most frequently. Specifically, in 21 of the 23 locations we contacted (18 BLM and 3 Forest Service), officials said the low quality of the information operators provided in their mine plans has been a challenge during the mine plan review process and has added from 1 month to 7 years to the length of time to review plans. The agencies are responsible for ensuring that the mine plan complies with applicable regulations and that its information is accurate and complete. BLM officials said that when they reviewed the completeness of the proposed mine plans—the first step in the mine plan review process—they found that some mine plans were incomplete or that data needed for NEPA analyses were incorrect. When plans were of low quality, some officials said they worked with the mine operators to obtain the necessary additional information, which can require the mine operator to conduct additional analyses. However, these officials said that providing this information can take time, thereby increasing the time it takes to review and approve mine plans. In some cases, these increases can be substantial; for example, according to BLM officials, it took approximately 6 years for one mine operator to provide needed information, such as plans for reclaiming the site and addressing water quality issues. 
In another example, Forest Service officials said one operator did not provide additional information at the level of detail needed for the Forest Service to review the mine plan, resulting in a delay of about 18 months. One Forest Service official we contacted attributed the varying mine plan quality, in part, to the size of the mining company. The official said companies that have more resources are more likely to provide higher quality mine plans because they can dedicate these resources to the plan’s development. BLM and Forest Service have taken some actions to address this key challenge. Specifically, in nine locations we contacted, across offices in Alaska, Arizona, Colorado, Nevada, New Mexico, and Wyoming, BLM and Forest Service officials said they have requested that operators voluntarily meet with them and other relevant agencies before the operators begin developing their mine plans. During these pre-mine plan submittal meetings, officials have, for example, provided operators with information on relevant regulations, guidance on the review process and conducting baseline surveys, and examples of mine plans. In addition, three BLM state offices—Alaska, Arizona, and Nevada—have developed guidance on holding pre-submittal meetings with mine operators and other relevant agencies to help ensure critical information is collected. BLM’s surface management handbook states that BLM officials may meet with operators and other agencies before a mine plan is submitted to discuss what information to include in the mine plan and what data may be needed to support a NEPA analysis. BLM Alaska, Arizona, and Nevada officials said these pre-submittal meetings have been helpful in reducing the length of the review process. In addition, Nevada BLM state officials said the meetings have helped them improve their workforce planning efforts because they have been better able to determine the staff needed for mine plan reviews and plan accordingly. 
Furthermore, Nevada BLM staff said the guidance and its implementation throughout the state help operators working in multiple locations within the state know what to expect during the permitting process. However, some BLM and Forest Service officials in other states do not always meet with operators prior to their mine plan submittals to help the operators improve the quality of information in their mine plans. While BLM’s surface management handbook states that BLM officials may meet with operators and other agencies, it neither provides specific guidance on how to implement pre-submittal meetings nor instructs BLM offices to notify operators of the option of pre-submittal meetings. As a result, use of these meetings varies among BLM offices. Similar to BLM, the Forest Service has not developed guidance for ranger districts and mining operators on holding pre-plan submittal meetings. According to an official at one Forest Service office, the office has held pre-plan submittal meetings, and the official stated this has helped streamline the process. One operator also commented on the advantages of these meetings and said it would have been helpful to know that such meetings were an option to help avoid delays and reduce costs; these costs were roughly $20,000 to $30,000 per month, according to the operator. BLM and Forest Service officials leading the hardrock mining programs said they did not think it was necessary to further encourage offices to hold pre-plan submittal meetings, leaving that decision to the discretion of the regions. Federal standards for internal control state that management should ensure there are adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant impact on the agency achieving its goals. 
In addition, internal control activities should provide reasonable assurance that the objectives of the agency are being achieved through the effectiveness and efficiency of operations, including the use of the entity’s resources. Without taking further actions to improve the quality of mine plan submissions, such as developing specific guidance to encourage offices to hold pre-mine plan submittal meetings whenever possible, BLM and the Forest Service may be missing opportunities to help expedite the mine plan review process. In 15 BLM and 4 Forest Service locations we contacted, officials said the agencies’ limited allocation of resources for their hardrock mining programs has added a few days to 1 year to the mine plan review process. This was the second most frequently cited key challenge. In particular, BLM and Forest Service officials said they do not have enough staff in certain critical positions involved in NEPA analyses, such as archaeologists and biologists. This causes “bottlenecks” in the review process and increases the length of time it takes to review hardrock mine plans, according to officials. BLM and Forest Service officials also said that they have had difficulties recruiting and retaining qualified staff, in part, because private mine operators seek similarly qualified staff and can generally offer higher salaries. Each agency has taken some actions to address this key challenge. For example, BLM has worked to leverage existing resources by collaborating with other agencies, such as by requesting assistance from U.S. Fish and Wildlife Service employees on a NEPA analysis in one BLM field office. BLM has also undertaken some workforce planning to assess its goals, resource needs, and any existing resource gaps, and the agency has increased salaries for some key positions related to the mine plan review process. 
BLM also charges operators fees to review EISs and, under the Federal Land Policy and Management Act of 1976, can retain those fees to cover the agency’s costs. BLM uses these fees to offset the costs associated with reviewing the mine plans for which the fees were collected, supplementing the hardrock mining program funds it receives through annual appropriations. Similarly, Forest Service officials told us they have collaborated with state agencies to leverage existing resources and have provided training for Forest Service staff to more effectively manage the mine plan review process. In addition, some Forest Service officials said they have encouraged some operators to keep their mining activities below 5 acres and below a significant level of disturbance so that they are not subject to the mine plan review process. The Forest Service also began conducting regional program reviews in 2013. For these reviews, which occur every 1 to 2 years, staff are requested to perform self-evaluations of their minerals and geology programs, including their staffing and budget planning. Depending on the findings, a program review report may include recommendations on issues such as the need to develop workforce needs analyses. Further, a Forest Service official at a national forest said the forest requested that an operator voluntarily cover the costs associated with preparing and reviewing EISs and hiring contractors to help prepare and review NEPA documents. The operator agreed to do so, which has offset some of the Forest Service’s costs associated with conducting the NEPA analysis and expedited the mine plan review process, according to a Forest Service official. However, neither BLM nor the Forest Service has used all of the tools available to address its resource limitations. Specifically, BLM has not fully used its authority to establish fees to recover some costs associated with conducting mine plan reviews, which could be used to address resource needs. 
A 1996 opinion by Interior’s Solicitor said that BLM has authority to “recover the reasonable processing cost of services that provide a special benefit not shared by the general public to an identifiable recipient and has an obligation to establish fees for all services for which it has cost recovery authority.” Moreover, Interior’s department-wide cost recovery guidance in its accounting handbook states that Interior should recover these costs. In 2005, Interior finalized a rule that included a schedule of fees, or costs, associated with a wide range of mineral extraction programs, including reviewing mine plans for locatable minerals. This rule established fees for reviewing mine plans that involve conducting an EIS, and the preamble to the rule stated that BLM would later consider issuing a separate rule to propose fees to recover costs associated with reviewing mine plans involving an EA. However, as of December 2015, BLM had neither issued a separate rule nor set a timeline for doing so because, according to officials, recovering costs for EAs has not been a priority. Consequently, BLM relies on annual appropriations to cover agency costs associated with reviewing mine plans that require only an EA. By issuing a rule establishing fees associated with reviewing mine plans that involve conducting an EA, BLM may be able to cover some of its costs associated with conducting mine plan reviews, including costs to hire and retain qualified staff, and possibly expedite the time it takes to review hardrock mine plans. Similar to BLM, the Forest Service relies on annual appropriations for conducting mine plan reviews and does not make use of its authority to collect fees. Specifically, under the Independent Offices Appropriation Act, 1952 (IOAA), the Forest Service has the authority to establish a fee structure for mine plan processing activities associated with EISs or EAs, but the agency does not have the authority to retain these fees as BLM does. 
Further, the IOAA’s implementing guidance—Office of Management and Budget Circular A-25—notes that legislative proposals to permit fees to be retained by the agency may be appropriate. However, as of December 2015, Forest Service officials said they had not established a fee structure or requested the authority to retain these fees because the agency was unaware of these authorities. Without establishing fees for reviewing mine plans and obtaining the authority to retain these fees, the Forest Service may be missing opportunities to generate additional revenue to bolster its resources for reviewing hardrock mine plans. BLM and Forest Service officials we interviewed said they took various steps to address 5 of the other 11 key challenges we identified that have affected the length of time to review the hardrock mine plans approved from fiscal years 2010 through 2014. The remaining 6 challenges are not necessarily within the agencies’ control to affect. Agency officials told us they took some steps to help address the following five key challenges: Changing mine plans: Officials in 14 BLM and 2 Forest Service locations said that operators changing mine plans already under review has been a key challenge, adding a few weeks to 6 years to the review process. For example, according to BLM officials, operators have substantially increased the size or scope of the proposed mine or relocated the mine boundaries, roads, and other facilities after the mine plans had been submitted and the agency had started the review process. In one instance, the operator decided to expand the mine from 5 to 18 acres after surveys conducted to inform NEPA analyses for the mine plan had been completed. As a result, these surveys had to be redone to incorporate the characteristics of the additional acreage, which added approximately 6 months to the review in this instance, according to BLM officials. 
Similarly, Forest Service officials noted that changing mine plans can add up to 2 months to mine plan reviews, depending on whether the changes require officials to revise completed analyses and reports, the availability of Forest Service officials, and the number of other mine plans awaiting review at the time of the change. To help address this key challenge, for example, some BLM officials we spoke to said they have worked with operators to identify a larger area of land to include in their mine plan submission in the event they later decide to expand their mining operations. Quality of contractors’ work: Officials in 11 BLM and 2 Forest Service locations said the quality of work performed by some mine operators’ contractors has been a key challenge and has added 1 month to 1 year to the review process. For example, some Forest Service officials said the quality of work performed by contractors that mine operators paid to help conduct work needed for NEPA analyses of mine plans has been poor and resulted in entire analyses having to be rewritten. In addition, some BLM officials said that contractors hired by the operator to prepare information for a mine plan of operations have submitted out-of-date information. For example, one operator’s contractor submitted information that was 20 years old and did not account for changes that had occurred to the landscape, such as those caused by wildfires. To help address this key challenge, some BLM officials told us that they have provided operators a list of contractors with good reputations and an apparent understanding of the mine permitting process, which has helped improve mine plan quality and expedite the review process. Specifically, one BLM official said they do not have to ask these contractors multiple times for additional information, which reduces the amount of time they need to spend working with the contractors to finalize their NEPA or cultural resource analyses. 
Quantity and quality of coordination and collaboration: Officials in 9 BLM and 2 Forest Service locations said coordination and collaboration have been limited in both quantity and quality, adding from 2 months to 3 years to the review process. BLM and the Forest Service need to coordinate and collaborate with other federal agencies, state agencies, and Native American tribes on issues such as assessing impacts to water quality, wildlife, and cultural resources. However, BLM and Forest Service officials said this can be difficult to do. For example, Forest Service officials said a federal agency delayed the review process for one mine plan because the federal agency did not provide the necessary data in a timely fashion. As a result, Forest Service officials had to redo some analyses needed for the mine plan’s EIS, which added time to the review process. To help address this key challenge, some officials said they have developed memorandums of agreement with state agencies, are holding regular meetings with these state agencies and with operators, and are communicating and consulting with tribes. For example, BLM developed an agreement in November 2003 with the Wyoming Department of Environmental Quality to, among other things, foster federal-state coordination and prevent unnecessary administrative delay while managing public lands during mining and exploration. According to BLM officials, this agreement has helped reduce duplication of agency efforts and prioritize agency work related to mine plan reviews. Balancing competing legal priorities: Officials in 4 BLM locations and 1 Forest Service location said balancing competing legal priorities has been a key challenge during the mine plan review process and has added 1 to 2 months to the review process. 
For example, while the General Mining Act of 1872 grants free and open access to federal lands for hardrock mining, other laws direct BLM and the Forest Service to protect the environment on the lands they manage. Specifically, the Federal Land Policy and Management Act requires BLM to prevent the “unnecessary or undue degradation” of public lands and federal regulations require the Forest Service to regulate activities to “minimize adverse environmental impacts on National Forest surface resources.” As a result, these agencies have had to balance the competing interests of providing access to lands for mining with the need to protect the environment. For example, BLM resource officials disagreed with BLM mining and minerals officials about the environmental effects of a proposed mine on the water quality in a salmon spawning area, and it took approximately 1 month to resolve, according to a BLM official. In addition, a Forest Service official said employees developing land and resource management plans sometimes do not consult with employees who work in the minerals program. As a result, this official stated that the land and resource management plans do not always take into consideration the requirements of the mining act, and the management plans need to later be revised and amended to reflect both activities. Forest Service officials told us that, to help address this key challenge, they began to provide annual training in 2000 for district, forest, and regional supervisory officials. According to these officials, this training covers issues such as the mine plan review process and statutory obligations to facilitate mining, and has helped educate employees on the importance of balancing competing priorities for the land. 
Federal Register notice publication process: Officials in 4 BLM locations and 1 Forest Service location said the agencies’ processes for posting Federal Register notices related to NEPA have been a key challenge during the mine plan review process, adding 1 month to 1 year to the review process. For example, some BLM field office staff said that BLM’s process for posting Federal Register notices calls for draft notices to be reviewed at many different levels of the agency, but said the process is unclear about who specifically needs to review the draft notices and how long the review will take. To address this key challenge, BLM officials started using Interior’s new electronic document tracking system, developed by the U.S. Fish and Wildlife Service, in the summer of 2013. A BLM instruction memorandum published in September 2014 directed that this system be used for Federal Register notices. These officials said Interior’s new system expedites the submission and review process because it automatically inputs dates, allows staff to electronically track their edits and forward reports, and is available to all Interior officials for use. Moreover, they said the system offers some capability to track the status of notice submissions. The remaining six key challenges are not necessarily within the agencies’ control to affect. These key challenges include the following: Mine site complexity: Officials in 13 BLM and 2 Forest Service locations said the complexity of some mine sites has been a key challenge during the mine plan review process and has added 1 week to 10 years to the review process. For example, BLM officials said mine plans that involve land where various cultural resources can be found, such as dinosaur fossils or Native American artifacts, can be challenging to review because of the need to ensure the resources are preserved before the land is disturbed. 
Some Forest Service officials said environmental complexities, such as the proximity of threatened or endangered species, have made it challenging to review mine plans because of the importance of ensuring these species are not likely to be affected by the operations. In one instance, BLM officials said it took approximately 2 weeks to assess whether raptor habitat would be affected by the mine site location and then to develop a mitigation plan to address the potential effects. In contrast, it took approximately 10 years for the Forest Service to resolve an issue related to a mine site located in a wilderness area that is habitat for threatened and endangered species, such as the grizzly bear and bull trout. As a result, an extensive analysis for the EIS had to be completed, which added time to the process. Legal issues: Officials in 8 BLM and 3 Forest Service locations said legal issues have been challenging and have added 1 month to 3 years to the review process. Both BLM and Forest Service officials said that concerns regarding possible litigation or the implications of case law have prompted them to conduct additional or more extensive NEPA analyses during the mine plan review process. For example, some Forest Service officials said that to help avoid potential legal issues, they conducted additional analyses because of the presence of threatened or endangered species. Complexity of public comments: Officials in 6 BLM and 2 Forest Service locations said that the complexity of public comments has been a key challenge that has added a few weeks to 6 months to the mine plan review process. For example, some BLM officials said addressing comments during the NEPA process regarding issues such as the mine’s potential impact on Native American resources or on air quality can add from 2 weeks to 3 months. 
Amount of public comments: Officials in 4 BLM and 3 Forest Service locations said that the number of public comments has been a key challenge that has added 1 month to 1 year to the review process. For example, some Forest Service officials said some mine plans have received as many as 40,000 public comments during the NEPA process on issues such as the mine’s potential impact on wildlife, public health, and traffic. Reclamation bond acquisition: Officials in 6 BLM locations said acquiring reclamation bonds has been a key challenge and has added 2 weeks to 6 months to the review process. For example, BLM officials said some operators have limited resources to dedicate to reclamation. As a result, some operators have experienced difficulty in getting bonds for reclamation, which has delayed mine plan reviews. Operator delay requests: Officials in 4 BLM locations and 1 Forest Service location said operator requests to delay the processing of mine plans already under review have been a key challenge, adding 1 month to 1.5 years to the review process. BLM officials said that mine operators have requested delays because demand, and subsequently prices, for the minerals associated with the proposed mine decreased to the point that operators considered it too expensive to operate the mine. Since hardrock minerals play an important role in the U.S. economy, BLM and the Forest Service have to balance the need to protect the environment with the need to make federal lands accessible for mining. These agencies rely on the mine plan review process to balance these competing priorities. BLM and Forest Service officials we interviewed reported experiencing numerous challenges affecting the length of time to complete the mine plan review process and have taken some actions to address certain challenges. 
However, BLM and the Forest Service could take additional actions to address the two most frequently cited challenges: the low quality of mine plans of operations and the limited allocation of resources. Specifically, without taking further actions to improve the quality of mine plan submissions by, for example, developing specific guidance to encourage offices to hold pre-mine plan submittal meetings, BLM and Forest Service offices may be missing opportunities to expedite the review process. In addition, by not fully using their authority to charge fees and, in the case of the Forest Service, by not requesting authority to retain those fees, the agencies may be missing opportunities to bolster their resources and reduce the amount of time it takes to review hardrock mine plans. Finally, because BLM does not have codes that allow it to track newly proposed mines and mine expansions, BLM does not have adequate information to manage and track the length of time to complete the mine plan review process. Without modifying the system to provide such information, BLM is limited in its ability to effectively oversee the extraction of minerals from federal land and manage the mine plan review process. To ensure effective oversight, strengthen internal controls, and address challenges associated with the hardrock mine plan review process, we are making two recommendations to the Secretary of Agriculture and three to the Secretary of the Interior. Specifically, we recommend that the Secretary of Agriculture direct the Chief of the Forest Service to take actions to improve the quality of mine plan submissions by, for example, developing guidance for mine operators and agency field officials that instructs them to hold pre-plan submittal meetings whenever possible; and issue a rule that establishes a fee structure for hardrock mine plan processing activities and request the authority from the Congress to retain any fees it collects. 
In addition, we recommend that the Secretary of the Interior direct the Director of BLM to take actions to improve the quality of mine plan submissions by, for example, developing guidance for mine operators and agency field officials that instructs them to hold pre-plan submittal meetings whenever possible; issue a rule that assesses fees associated with reviewing hardrock mine plans that involve conducting environmental assessments; and create new codes in its LR2000 database distinguishing between different types of mine plans to help track the length of time to complete the mine plan review process. We provided a draft of this report to the Departments of Agriculture and the Interior for review and comment. The Forest Service (responding on behalf of Agriculture) generally agreed with the findings in the report and indicated that our recommendations are consistent with efforts it has underway or plans to incorporate. (See app. II for the comment letter from the Department of Agriculture.) Interior generally agreed with the findings and concurred with two of the recommendations on taking actions to improve the quality of mine plan submissions and creating new codes in LR2000. Interior partially concurred with the third recommendation on issuing a rule. (See app. III for the comment letter from Interior.) Specifically, we recommended that BLM issue a rule to assess fees associated with reviewing hardrock mine plans that involve conducting environmental assessments. In its response, BLM stated that it agrees that additional funds for field staff would generally be helpful and that it will undertake a review of options to address its resource challenges, including rulemaking and potential legislation. We applaud BLM’s commitment to review a range of options to address resource challenges that may have led to increased permitting times. 
We continue to believe, however, that assessing fees for the review of mine plans that involve conducting environmental assessments is one step that BLM should take to address such challenges. As we noted in the report, BLM has had this option under consideration since 2005, when it established fees for reviewing mine plans that involve conducting an environmental impact statement, and the agency has an obligation to establish fees for all services for which it has cost recovery authority. Interior also provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretaries of Agriculture and the Interior, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. This report examines (1) the number of mine plans the Bureau of Land Management (BLM) and the Forest Service approved from fiscal years 2010 through 2014, the time it took these agencies to complete the mine plan review process, and the extent to which these agencies track this process; and (2) the challenges, if any, that have affected the length of time for BLM and the Forest Service to complete the review process, and the actions, if any, these agencies have taken to address these challenges. 
To determine the number of mine plans that were approved from fiscal years 2010 through 2014, we examined data from BLM’s Legacy Rehost 2000 (LR2000) system and the Forest Service’s Locatable Minerals database—automated information systems the agencies use to track key dates and milestones in the mine plan review process. We conducted interviews with BLM and Forest Service officials familiar with these data systems to learn how these data are generated and maintained. Based on our analysis of these data, and comparisons to other publicly available information from federal agencies, we determined that these data from these databases were not sufficiently reliable to measure the time it took these agencies to complete the mine plan review process. Consequently, we worked with agency officials to collect data from BLM field offices and Forest Service ranger districts to develop a list of mine plans approved from fiscal years 2010 through 2014. To ensure that we reviewed data on comparable projects, we requested data on mine plans that were 5 acres in size or larger and were plans for new mines or mine expansions. We obtained detailed information on dates and milestones associated with these mine plans from BLM and Forest Service district and field officials. We asked BLM and Forest Service officials from the agencies’ Washington, D.C., offices to verify the accuracy and completeness of these data. We also compared these data to other publicly available sources, such as published National Environmental Policy Act documents. Based on this review, we determined that these data were sufficiently reliable for our purposes of determining the time frames for completing the mine plan review process. Using these data, we summarized descriptive information about the mine plans, such as mine location, number of acres of land disturbed, and the commodity the operator intended to mine. 
We also calculated and summarized the elapsed days between key milestones in the mine plan review process and the number of approved mine plans that had not begun operations. We also analyzed the extent to which the data systems used by BLM and the Forest Service reflected practices consistent with federal standards for internal control for tracking and recording events and transactions. To examine any challenges that have affected the length of time for BLM and the Forest Service to review the mine plans, and any actions officials have taken to address these challenges, we identified a list of challenges based on interviews with agency officials, industry representatives, nongovernmental organizations, an academic institution, and a review of nine studies and reports issued from 1997 through 2014 on the mine plan review process and its associated challenges. We identified these studies and reports with assistance from mining associations, industry consultants, and federal agencies. We then categorized and refined this list into 13 key challenges. From our list of mine plans approved from fiscal years 2010 through 2014, we selected 19 BLM and 4 Forest Service locations for additional interviews to ascertain the extent to which these challenges affected the time it took to review mine plans of operations. The Forest Service locations included the only 2 that were part of our list of mine plans approved from fiscal years 2010 through 2014 and 2 additional locations where officials had conducted mine plan reviews that were particularly difficult or complex, according to a senior Forest Service official. We selected locations in each of the 12 western states where hardrock mining occurs. At least one mine plan was reviewed in each of these states in this time frame. We also selected locations to ensure that the mine plans reviewed by the agency officials varied in the length of time it took for the officials to complete their review. 
Based on these criteria, we selected 23 BLM and Forest Service locations for additional interviews, as shown in figure 5. Because we selected a nonprobability sample of BLM and Forest Service locations, our findings on challenges that have affected the length of time to review mine plans are not generalizable to all BLM offices and Forest Service ranger districts. The officials we spoke with during these reviews had worked on approximately 74 percent of the mine plans that were approved from fiscal years 2010 through 2014. In each of these interviews, we used a standard set of questions that we developed to discuss this list of challenges with officials who review mine plans. We asked these officials to indicate whether they had experienced each of the challenges and, if so, whether the challenge affected the length of time necessary to complete the mine plan review process and the approximate length of time each challenge added to the process. In addition, we asked whether they experienced other challenges not already identified, as well as the actions they had taken to address these challenges. We then compiled and analyzed the information from these interviews and compared this information to applicable laws and regulations, federal standards for internal control, and agency handbooks and guidance to determine what ways, if any, these challenges could be further addressed. We conducted this performance audit from July 2014 to January 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Anne-Marie Fennell, (202) 512-3841 or [email protected]. 
In addition to the individual above, Elizabeth Erdmann (Assistant Director), Casey L. Brown, Antoinette Capaccio, Keesha Egebrecht, Cindy Gilbert, Armetha Liles, Marcus D. Lee, Alison D. O’Neill, and Heather Salinas made significant contributions to this report. Hazardous Waste: Agencies Should Take Steps to Improve Information on USDA’s and Interior’s Potentially Contaminated Sites. GAO-15-35. Washington, D.C.: January 16, 2015. Coal Leasing: BLM Could Enhance Appraisal Process, More Explicitly Consider Coal Exports, and Provide More Public Information. GAO-14-140. Washington, D.C.: December 18, 2013. Mineral Resources: Mineral Volume, Value, and Revenue. GAO-13-45R. Washington, D.C.: November 15, 2012. Uranium Mining: Opportunities Exist to Improve Oversight of Financial Assurances. GAO-12-544. Washington, D.C.: May 17, 2012. Phosphate Mining: Oversight Has Strengthened, but Financial Assurances and Coordination Still Need Improvement. GAO-12-505. Washington, D.C.: May 4, 2012. Hardrock Mining: BLM Needs to Revise Its Systems for Assessing the Adequacy of Financial Assurances. GAO-12-189R. Washington, D.C.: December 12, 2011. Abandoned Mines: Information on the Number of Hardrock Mines, Cost of Cleanup, and Value of Financial Assurances. GAO-11-834T. Washington, D.C.: July 14, 2011. Hardrock Mining: Information on State Royalties and the Number of Abandoned Mine Sites and Hazards. GAO-09-854T. Washington, D.C.: July 14, 2009.
The Mining Law of 1872 encouraged development of the West by opening up federal land to exploration, extraction, and development of hardrock minerals such as gold, silver, and copper. Because mining creates the potential for serious health, safety, and environmental hazards, BLM and the Forest Service have processes for reviewing mine plans submitted by operators to help prevent and mitigate these hazards. A mine plan details the proposed mine's operations, such as the methods for mining and reclaiming the site once operations have concluded. GAO was asked to assess the mine plan review process. This report examines (1) the number of mine plans BLM and the Forest Service approved from fiscal years 2010 through 2014, among other things, and (2) challenges that have affected the length of time for BLM and the Forest Service to complete the review process, as well as actions these agencies have taken to address these challenges. GAO obtained and analyzed mine plan review data from fiscal years 2010 through 2014, and interviewed agency officials in 23 offices, representing the 12 western states where hardrock mining occurs. The results are not generalizable to all locations conducting mine plan reviews. From fiscal years 2010 through 2014, the Department of the Interior's Bureau of Land Management (BLM) and the Department of Agriculture's Forest Service approved 68 mine plans of operation. The length of time it took the agencies to approve the mine plans ranged from about 1 month to over 11 years, and averaged approximately 2 years. Of the 68 approved mine plans, 13 had not begun operations as of November 2015. Agency officials attribute this to difficulties mine operators may face, such as obtaining other required federal and state permits. BLM and Forest Service officials GAO interviewed said they experienced 13 key challenges that affected the length of time to review hardrock mine plans. 
The two most frequently cited were (1) the low quality of information operators provided in their mine plans and (2) the agencies' limited allocation of resources for their hardrock mining programs. To address the low quality of information in mine plans, some BLM and Forest Service officials held pre-mine plan submittal meetings with operators. However, officials do not always do so because BLM does not have specific guidance on how to implement these meetings, and Forest Service does not have any guidance instructing them to do so. Federal standards for internal control state that management should ensure there are adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant impact on the agency achieving its goals. Without taking further actions to improve the quality of mine plan submissions, BLM and the Forest Service may be missing opportunities to help expedite the review process. To address the limited allocation of resources, BLM and Forest Service officials are leveraging existing resources by collaborating with other agencies, among other actions, but neither agency has fully used its authority to collect fees for conducting mine plan reviews as authorized by law. In addition, Forest Service is not authorized to retain these fees, as BLM is, but has not proposed the legislative changes that would allow it to retain fees, as is suggested by Office of Management and Budget guidance. BLM officials said the agency has not prioritized cost recovery for certain types of environmental analyses, and Forest Service officials were unaware of these authorities. By not using these authorities, BLM and Forest Service may be missing opportunities to expedite the mine plan review process. GAO recommends, among other things, that the agencies take actions to improve the quality of mine plan submissions and seek additional recovery of the costs associated with conducting mine plan reviews. 
The agencies generally concurred with these recommendations.
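The review-time figures reported above (approval times ranging from about 1 month to over 11 years, averaging approximately 2 years across 68 plans) come down to simple date arithmetic on the milestone data GAO collected. A minimal Python sketch, using hypothetical plan names and dates rather than GAO's actual dataset:

```python
from datetime import date
from statistics import mean

# Hypothetical submission and approval milestones for three plans; GAO's
# actual dataset covered 68 plans approved in fiscal years 2010-2014.
plans = [
    {"plan": "Plan A", "submitted": date(2010, 3, 1), "approved": date(2012, 5, 15)},
    {"plan": "Plan B", "submitted": date(2011, 7, 20), "approved": date(2011, 9, 1)},
    {"plan": "Plan C", "submitted": date(2009, 1, 5), "approved": date(2014, 11, 30)},
]

def elapsed_days(plan):
    """Days elapsed between the submission and approval milestones."""
    return (plan["approved"] - plan["submitted"]).days

days = [elapsed_days(p) for p in plans]
print(f"range: {min(days)} to {max(days)} days; average: {mean(days):.0f} days")
```

Subtracting two `date` objects yields a `timedelta`, so leap years and month lengths are handled automatically.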
The Department of State has overall responsibility for ensuring that all proposed international agreements are fully consistent with U.S. foreign policy objectives. The Department negotiates and administers government-level S&T agreements, often referred to as “umbrella” or “framework” agreements, between the U.S. government and governments of foreign countries. The Department also delegates authority to other U.S. agencies for them to negotiate and administer government-level agreements with foreign governments in mission-specific areas, such as energy and space. Government-level agreements generally provide the protocol that multiple agencies can use to share scientific data and equipment, to exchange researchers and conduct collaborative projects, and to protect intellectual property rights. In addition, research agencies negotiate and administer agency-level agreements with their counterpart agencies in foreign governments and with international organizations to conduct international cooperative research, provide technical support, or share data and/or equipment. Agencies have the flexibility to determine the number of agreements in which they participate and to choose whether these agency-level agreements will be related to or not related to a government-level agreement. The Department of State; the Offices of Science and Technology Policy and the United States Trade Representative, within the Executive Office of the President; the Department of Commerce; and other relevant agencies review many of the proposed agreements that are legally binding, as described in the Department of State’s Circular 175. The review—known as the Circular 175 process—is designed to ensure interagency coordination; consistent treatment of issues such as access to foreign facilities, information, and expertise; and appropriate consideration of the foreign policy implications of specific agreements. 
While these agreements can be an indicator of national interest in cooperating on research and development, they are generally diplomatic agreements that have no associated budget authority. The U.S. government maintains these agreements to support and encourage international cooperation in science and technology. However, the government does not have a system for linking international S&T agreements with actual spending on cooperative research and development. According to a study by the Rand Corporation, the U.S. government spent more than $3 billion in fiscal year 1995 on research and development projects involving international cooperation that may or may not have been associated with specific international S&T agreements. In addition, the study states that government agencies spent as much as $1.5 billion in other activities that were not research and development but that constituted scientific or technical activities that involved significant international cooperation. In this report, we categorize the international S&T agreements into four types: (1) government-level bilateral agreements between the U.S. government and the government of another country, (2) agency-level bilateral agreements between a U.S. agency and a research agency of a foreign country that are related to a government-level agreement and provide additional details that define how each agency will cooperate, (3) agency-level bilateral agreements between a U.S. agency and a research agency of a foreign country that are not related to a government-level agreement, and (4) agency-level multilateral agreements between a U.S. agency and research agencies of an international organization and/or of two or more foreign countries. Figure 1 illustrates the types of S&T agreements. During fiscal year 1997, the seven agencies we reviewed participated in 575 international S&T agreements. 
The number of agreements varied by agency, with the Department of Energy participating in 257 (or 45 percent) of the 575 agreements. Fifty-seven countries participated in bilateral agreements, while 8 international organizations and 10 groups of organizations and/or countries participated in multilateral agreements. Two-thirds of the agreements were agency-level bilateral agreements. Most of these were not related to government-level bilateral agreements. To be related to one of the government-level agreements, an agency-level agreement must specifically state that it is related to a government-level agreement. As figure 2 shows, 225 of the 575 agreements were agency-level bilateral agreements that did not refer to a government-level agreement, while 156 agency-level bilateral agreements did reference a government-level agreement. The 140 multilateral agreements did not have corresponding government-level multilateral agreements. App. I provides additional details on the number of agreements by type. Agency officials had different viewpoints on the relative advantages and disadvantages of developing agency-level bilateral agreements that relate to government-level agreements. At the National Institute of Standards and Technology, over 80 percent of the agency-level bilateral S&T agreements refer to existing government-level agreements. Program officials at the National Institute of Standards and Technology said that they believe that it is easier to negotiate an agency-level agreement that is related to a government-level agreement because intellectual property rights issues have already been resolved in the government-level agreement. Department of State officials agreed. 
Office of Science and Technology Policy officials added that having agency-level agreements related to government-level agreements provides it and the Department of State with some degree of oversight to ensure that agency programs are consistent with nonproliferation, trade, and other national security interests. At the Department of Energy, on the other hand, 40 percent of the Department’s agency-level bilateral agreements are not related to a government-level S&T agreement. According to Department of Energy officials, having agency-level agreements that are related to government-level agreements under certain conditions can impose an administrative burden. Government-level agreements with some countries may require numerous meetings and reports to monitor the status of projects and actions. According to these officials, agency-level agreements related to such government-level agreements would also require similar meetings and reports. These meetings and reports can increase the cost and decrease the time available for actual research or project implementation. The distribution of the 575 agreements varied widely among the seven agencies we reviewed. As figure 3 shows, the number of agreements varied from 26 for the National Science Foundation to 257 for the Department of Energy. The seven agencies that we reviewed have bilateral agreements with 57 countries from almost every region of the world and multilateral agreements with 8 international organizations and 10 groups of organizations and/or countries. Figure 4 summarizes the distribution of bilateral agreements among major regions of the world. For example, in North America, the United States has a total of 34 bilateral agreements with two countries—Canada and Mexico. App. II provides specific data on the number of bilateral agreements by agency and by country. 
As figure 4 shows, 301 (69 percent) of the bilateral agreements are with Asian and European countries; Middle Eastern countries have the fewest agreements. Agreements with Japan, Russia, and China together account for 146 (34 percent) of the 435 bilateral agreements. Japan has the most agreements with an individual agency—28 with the Department of Energy. Almost half of the 57 countries participating in bilateral agreements are involved in both government-level and agency-level agreements. Figure 5 summarizes the number of countries that participate in different types of agreements. As shown in figure 5, U.S. agencies have signed international S&T agreements with agencies in 23 foreign countries that do not participate in government-level S&T agreements. For example, the Department of Energy and the National Aeronautics and Space Administration have a number of agreements with agencies in France and Australia. The United States has not signed a government-level S&T agreement with either country. Officials at the Office of Science and Technology Policy and the Department of State indicated that, with some countries, there may not be sufficient interest by enough agencies to warrant a government-level agreement. Figure 5 also shows that U.S. agencies have not developed agreements with seven countries that have signed government-level S&T agreements. According to officials at the U.S. agencies, their agencies do not participate in agreements with some countries because the countries are not conducting research that meets their agencies' mission needs. The officials said that State Department officials use joint S&T agreements as one of several tools to improve foreign relations and to demonstrate diplomatic support for a country. However, these officials said that while they recognize that diplomacy and improved foreign relations may be valid reasons for signing broad S&T agreements, individual U.S. 
agencies will not sign agreements with other countries unless the agreements address agency research missions. In addition, National Institutes of Health and National Science Foundation officials said that agencies can informally collaborate on research projects and in other research-related activities without an international S&T agreement. U.S. agencies have also signed a total of 140 international S&T agreements with international organizations such as the International Energy Agency and the European Space Agency and groups of organizations and/or countries. Figure 6 summarizes the number of multilateral agreements by these organizations and groups. For example, U.S. agencies have 16 agreements with the European Space Agency. See appendix III for details on the agencies, organizations, and countries participating in the multilateral agreements. Figure 6 shows that 97 (about 70 percent) of the 140 multilateral agreements are with the International Energy Agency. The International Energy Agency represents the U.S. and 23 countries with common scientific interests and priorities. According to Department of Energy officials, the International Energy Agency acts as a broker for the Department of Energy whenever two or more member countries participate in an agreement. However, the participating countries may vary for each agreement, depending in part on the subject of the agreement and the countries' interests. For example, an agreement on coal research involves the United States, Australia, Canada, and 10 other countries, while another agreement on advanced fuel cell research involves the United States, Japan, Korea, and 12 other countries. More than 90 percent of the international S&T agreements active in fiscal year 1997 resulted in research projects or other research-related activities. 
For the agreements that did not have such results, agencies cited two reasons: funding problems of one or both parties that developed after the agreements were signed and changes in research priorities. Figure 7 shows the percentage of agency agreements that have resulted in research projects or other research-related activities since the agreements were started or last renewed. The percentage of agency agreements resulting in projects or other activities during this time ranged from 61 percent at the National Institutes of Health to 98 percent at the National Aeronautics and Space Administration. In total, 93 percent of the agency agreements, 506 in all, resulted in projects or other research-related activities such as consultations among scientists and exchanges of data and/or personnel. About 7 percent resulted in no activities. For this report, we define a research project as a set of coherent activities designed to achieve a common purpose by a specific date. We define other research-related activities as meetings, consultations, and exchanges of data and/or personnel. Agreements resulted in research projects more often—about 82 percent of the time—than in other research-related activities. See appendix IV for additional details on the results of each agency’s agreements. We did not include data on the number of research projects or other research-related activities associated with the 33 government-level agreements negotiated by the Department of State because these government-level agreements generally have associated agency-level agreements. As previously noted, U.S. agencies have developed agency-level agreements with all but seven countries that have government-level agreements. For three of these countries, four U.S. agencies have started projects under government-level agreements. These projects are funded from joint matching funds provided by the U.S. government and the participating countries to encourage international collaboration. 
For the remaining four countries, agencies have neither signed an agency-level agreement nor started a joint project with the country under a government-level agreement. A variety of research projects are conducted under international agreements at the U.S. agencies. For example, the National Aeronautics and Space Administration’s projects conducted under international agreements include a project to develop crew return and transfer vehicles for the International Space Station and to launch satellites to conduct research projects. The National Science Foundation’s projects include joint work in ocean drilling, and the National Institutes of Health sponsors projects to investigate potentially dangerous infectious diseases. Table 1 provides additional examples of projects and activities resulting from the agreements. Agencies’ officials told us that some agreements did not result in projects because the participating agencies of either country changed their S&T priorities or were unable to fund projects after negotiating an agreement. For example, National Institutes of Health officials said that an agreement signed late in fiscal year 1997 with Chile has not resulted in the intended projects or other activities because the Chilean science agency has not yet been able to provide the expected funding. However, the officials anticipate that projects may result from this agreement in the future. National Science Foundation officials told us that an agreement with Indonesia has not resulted in activities because of administrative problems that researchers have encountered in dealing with the country. 
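The results figures reported in this section can be cross-checked arithmetically. A minimal sketch, using only the report's own totals; note that the 542 denominator is our inference from excluding the 33 State-negotiated government-level agreements from the 575 total:

```python
# Illustrative consistency check of the reported results figures.
total_agreements = 575
excluded_government_level = 33   # State-negotiated agreements excluded from the results analysis
with_results = 506               # agreements that produced projects or other activities

agency_agreements = total_agreements - excluded_government_level
share = 100 * with_results / agency_agreements

print(agency_agreements)   # 542 agreements considered
print(round(share))        # about 93 percent, matching the figure cited above
```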
We provided a draft of this report to the Department of State; the Department of Energy; the Department of Commerce, which includes the National Institute of Standards and Technology and the National Oceanic and Atmospheric Administration; the National Science Foundation; the National Aeronautics and Space Administration; the National Institutes of Health; and the Office of Science and Technology Policy for review and comment. We obtained comments from each of these agencies. Generally, the agencies agreed that the report accurately describes their international S&T agreements and related activities. Some of the agencies suggested technical changes to help ensure an accurate description of their international S&T agreements. In addition, the Department of State suggested changes that would clarify its role and authority. We incorporated these suggestions in our report. To determine the number and type of international S&T agreements active during fiscal year 1997, we met with officials at the Department of State’s Bureau of Oceans and International Environmental and Scientific Affairs and at selected agencies. Department of State officials provided us with data on some government-level agreements. However, detailed data on individual agencies’ agreements had to be obtained from representatives from each agency’s international S&T office. To respond to our request for information, these officials generally collected data from various units within the agency on agreements that were active during fiscal year 1997 and provided the data to us electronically. We analyzed the data to identify the number and type of agreements and the foreign participants. To determine the number of agreements that resulted in projects or other actions and the reasons some agreements have not produced these results, we obtained information from the six U.S. research agencies. 
The Department of State does not generally fund research projects under the broad government-level S&T agreements that it administers. In addition, we reviewed and discussed legislation with the Department of State and other agencies that was relevant to international S&T agreements. We also reviewed and discussed with the six research agencies their policies and procedures on international S&T agreements and obtained pertinent documents and reports that discussed their international activities and agreements. In general, we relied on the data the agencies provided us and did not independently verify its accuracy. However, we reviewed early drafts of the data that the agencies prepared for us and followed up with the agencies to clarify and resolve inconsistencies in all data that the agencies provided. Our review was performed from August 1998 through April 1999 in accordance with generally accepted government auditing standards. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this report. At that time, we will send copies to the Honorable William M. Daley, Secretary of Commerce; the Honorable Bill Richardson, Secretary of Energy; the Honorable Donna E. Shalala, Secretary of Health and Human Services; the Honorable Madeleine K. Albright, Secretary of State; the Honorable D. James Baker, Under Secretary, National Oceanic and Atmospheric Administration; Raymond G. Kammer, Director, National Institute of Standards and Technology; Dr. Harold E. Varmus, Director, National Institutes of Health; Daniel S. Goldin, Administrator, National Aeronautics and Space Administration; Dr. Rita R. Colwell, Director, National Science Foundation; and Dr. Neal Lane, Director, Office of Science and Technology Policy. We will also make copies available to others on request. Please contact me at (202) 512-3841 if you or your staff have any questions. 
Major contributors to this report are listed in appendix V; Mindi Weisenbloom is among them. The appendix tables, not reproduced here, break the agreements out by type: bilateral (related to a government-level agreement), bilateral (not related to a government-level agreement), and multilateral. The table notes state the following. The number of S&T agreements for NASA may be understated: agency officials provided data on agreements that were approved during fiscal years 1995 through 1997 because they did not know how many active agreements the agency had in fiscal year 1997. In addition to the broad S&T agreements, the Department of State has negotiated government-level diplomatic notes with Canada, the United Kingdom, and Germany that address intellectual property rights. Some countries participate with the United States in joint funds established through government-level agreements or other arrangements to support international cooperation in research and development. The State Department has a joint government-level agreement with the Czech and Slovak Republics; for this report, we have counted it as one agreement with the Czech Republic. Croatia, the Former Yugoslav Republic of Macedonia, and Slovenia have started joint projects with the United States under government-level S&T agreements. 
Pursuant to a congressional request, GAO provided information on the U.S. government's international science and technology (S&T) agreements that support and encourage international cooperation in research and development, focusing on the: (1) number of international S&T agreements active during fiscal year (FY) 1997; and (2) number of these agreements that resulted in research projects or other activities. GAO noted that: (1) during FY 1997, the 7 agencies GAO reviewed participated in 575 international science and technology agreements with 57 countries, 8 international organizations, and 10 groups of organizations or countries; (2) 54 of the agreements were between the U.S. government and the government of another country; (3) the remaining 521 agreements were signed by representatives of a U.S. agency and representatives of an agency of a foreign government(s) or international organization; (4) more than 90 percent of the international science and technology agreements resulted in research projects or other research-related activities, such as consultations among scientists and exchanges of data and personnel; (5) the percentage of agency agreements that resulted in projects and other activities ranged, by agency, from 61 percent at the National Institutes of Health to 98 percent at the National Aeronautics and Space Administration; and (6) agencies' officials told GAO that changes in either country's science priorities or inability to fund projects after negotiating an agreement are frequently the reasons some agreements do not result in research projects.
Dental care is a key component of the health care services provided by the military to active duty servicemembers, and a key benefit provided to those who are eligible to enroll for health care coverage through TRICARE. DHA’s TRICARE Dental Care Office administers and oversees the TRICARE dental care programs. The ADDP supplements the dental services available to active duty servicemembers through DTFs with the goal of maintaining readiness for deployment. Active duty servicemembers are required to have a dental examination annually, during which their dental readiness is assessed. Dental readiness is a prerequisite for deployment and must be maintained among servicemembers who have been deployed. Active duty servicemembers may obtain necessary dental care at no cost to them at a DTF or through the ADDP. The majority of active duty servicemembers’ dental care is provided in DTFs, which are staffed with general dentists and, in some cases, dental specialists. If a necessary service cannot be provided in a timely way through an accessible DTF, the servicemember may obtain dental care from a private dentist through the ADDP. To facilitate reasonable access to care, the ADDP includes a network of dental care providers throughout the United States and its territories. ADDP is not an insurance program, and active duty servicemembers do not pay premiums for their dental care, do not share in the costs of the care, and do not face any annual or lifetime maximums on the cost of that care. Prior to establishing the ADDP, DOD paid the full cost of necessary dental care provided to active duty servicemembers who were referred to private dentists, and the prices for such privately provided dental services were uncontrolled. By negotiating standard prices for dental services with a private insurance carrier, the ADDP was intended to contain the costs of providing necessary care to active duty servicemembers. 
The TDP is a dental insurance program that is available to dependents of active duty servicemembers, and to members of the National Guard and Reserve and their dependents. Enrollees are responsible for cost-sharing premiums, certain procedure fees, and costs above annual and lifetime maximums. Enrollees' share of premiums varies with the status of the enrollee. For example, DOD pays: 100 percent of the premiums of survivors of active duty servicemembers who died while on active duty and of eligible dependents of certain Reserve members; up to 60 percent of the premiums of the dependents of active duty servicemembers and of certain Reserve members; and 0 percent of the premiums of certain Reserve members who are not on active duty and their dependents. The TRDP is a dental insurance program that is available to retired uniformed service members and their dependents. DOD does not contribute to paying the costs of this program. Enrollees are responsible for the full premium, any cost-sharing fees, and costs above the annual and lifetime maximums. For information about the contracts for these programs, see table 1. The acquisition process for DHA's dental services contracts includes three main phases, each of which is governed by federal and department-level requirements. The phases are (1) acquisition planning, (2) RFP, and (3) award. (See fig. 1.) Acquisition Planning. Federal regulations require agencies to perform acquisition planning activities for all contracts to ensure that the government meets its needs in the most effective, economical, and timely manner possible. In the acquisition planning phase, DHA officials are to develop a strategy and plan to define and fulfill contract requirements in a timely manner and at a reasonable cost. Federal regulations also require that acquisition planning include market research, which can involve the development and use of requests for information (RFI). 
An internal working group consisting of a dental program manager, contracting officer, contracting officer's representatives, and a requirements specialist reviews information gathered during the acquisition planning process to determine contract requirements, according to DHA officials. RFP. In the RFP phase, DHA officials issue the RFP and receive proposals. Award. In the award phase, DHA officials are responsible for evaluating the proposals and awarding the contract to the offeror presenting the best value to the government based on a combination of technical, cost, and performance-based factors. After the contract is awarded, the contracting officer's representative is responsible for the day-to-day monitoring of contractor activities to ensure that the services are delivered in accordance with the contract's performance standards. Each dental service contract includes quality assurance standards for provider access, claims processing, and customer service (telephone coverage and correspondence timeliness) against which the contractor's performance is assessed. Contractors are required to meet these standards. Although DHA officials use a variety of methods to monitor contractors' performance, the primary method of monitoring performance is through monthly reports submitted by each contractor to DHA. To develop requirements for each of its current dental services contracts, DHA officials analyzed market research, data from contractors' past performance, legislation, independent cost estimates, and other information. DHA officials used this information to align the contracts' requirements with contract goals to deliver high quality dental services in a cost effective manner, and to facilitate access to care. Market Research. As part of its development of contract requirements, DHA officials gathered information through market research and analyzed it to determine the capabilities within the dental services market to satisfy the agency's needs. 
DHA’s market research included soliciting information from current and potential dental services contractors. To do this, DHA officials issued RFIs and draft RFPs for comment. These documents included questions related to potential benefit changes—such as how the offeror would implement a specific benefit—and potential data requirements—such as how the offeror would submit required data to DHA. In addition to RFIs and draft RFPs, DHA’s market research activities included one-on-one meetings with dental services contractors. DHA officials used information from these market research activities to revise contract requirements. For example, according to DHA officials, feedback from contractors indicated that DOD’s contract requirements related to information security were costing dental contractors (and DOD) a substantial amount of money. Partly as a result of contractors’ feedback, DHA determined that it would be more economical for contractors to comply with the information security standards used in the commercial sector, according to these officials. In all three new contracts, DHA officials therefore required contractors to comply with commercial information security requirements instead of those developed by DOD. DHA officials also used market research to determine the technical feasibility of potential contract requirements. For example, after encountering delays in treatment preauthorization decisions due to poor quality radiographs (commonly known as x-rays), the RFI that DHA issued for the ADDP contract included questions to determine the feasibility of the electronic submission of radiographs. According to DHA officials, dental technology has progressed to allow for easy electronic submission of this data, resulting in better radiograph quality. 
Partly as a result of information collected through the RFI, DHA officials incorporated a requirement into the new ADDP contract for the contractor to submit radiographs electronically when requesting pretreatment authorization from DHA. This requirement was intended to increase the quality of the diagnostic information and thus DHA's efficiency in making preauthorization decisions. Performance Monitoring. DHA officials analyzed information about contractors' past performance, including contractors' monthly reports and claims payment data, to assess and revise contract requirements for future contractors. DHA uses a variety of methods to monitor performance, primarily relying on its review of monthly reports submitted by each contractor to DHA, which reflect how well the contractor is performing against the performance standards. DHA officials use this information to assess and revise requirements for each future dental services contract. For example, according to DHA officials, before issuing the RFP for the current TRDP contract, DHA officials' review of the then current TRDP contractor's performance against the existing contract's network access standard indicated that the contractor consistently exceeded the standard. As a result, DHA raised the network access standard in the new TRDP contract from 90 to 99 percent, thus requiring that 99 percent of enrollees living within the United States, the District of Columbia, Puerto Rico, Guam, and the U.S. Virgin Islands have access to a network general dentist within a specified distance of their primary residence. DHA officials also used other performance monitoring information, such as claims payment data submitted by contractors, when developing contract requirements. 
For example, DHA officials' review of the ADDP contractor's claims payment data confirmed reports they had received from DTFs that some servicemembers were being treated twice for the same dental problem, at added cost to DHA, because the initial treatment was performed by a general dentist rather than a dental specialist and was not successful. Partly as a result of their review of this information, DHA officials incorporated a new requirement into the new ADDP contract that 90 percent of all DTF-referred endodontic procedures (such as root canals) or oral surgeries be completed by an endodontist or oral surgeon, respectively. Legal Requirements. DHA officials reviewed laws relevant to each dental services contract to identify changes required by statute. DHA officials used the information to determine whether any changes to the benefit or eligibility structures of the contracts would be required. For example, DHA added a survivor benefit to the TDP contract as a result of a legislative change. In addition, the Transition Assistance Management Program, which provides 180 days of premium-free transitional health care benefits after regular TRICARE benefits end, was added to the ADDP contract as a result of a legislative change. Independent Cost Estimates. DHA officials reviewed independent cost estimates for new benefit requirements they were considering to assess cost efficiency. Specifically, prior to incorporating new benefit requirements into the TRDP and TDP contracts, DHA obtained and reviewed cost estimates from a private consulting firm to determine the impact of these benefits on enrollees' monthly premiums. For example, DHA requested cost estimates for increasing the TDP contract's maximum lifetime orthodontic benefit requirement from $1,500 to $1,750 and from $1,500 to $2,000. The estimates indicated that monthly premiums would increase 65 cents and $1.30, respectively. 
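The two consultant estimates quoted above are proportional to the size of the benefit increase. A minimal sketch follows, assuming (hypothetically) that the premium impact scales linearly with the added lifetime maximum; the linearity assumption is ours, not the consultant's stated method:

```python
# Hypothetical linear model of the consultant's premium-impact estimates.
# The $0.65-per-$250 rate is derived from the figures quoted above.
BASE_MAX = 1500                 # current lifetime orthodontic maximum ($)
RATE_PER_DOLLAR = 0.65 / 250    # estimated monthly premium increase per added dollar of maximum

def premium_increase(new_max):
    """Estimated monthly premium increase ($) for raising the maximum to new_max."""
    return round((new_max - BASE_MAX) * RATE_PER_DOLLAR, 2)

print(f"{premium_increase(1750):.2f}")   # 0.65, matching the first estimate
print(f"{premium_increase(2000):.2f}")   # 1.30, matching the second estimate
```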
Based in part on the cost estimate, DHA increased the benefit requirement in the final TDP contract so that the contractor must provide coverage for benefits up to $1,750. According to DHA officials, they do not incorporate benefits if doing so would result in large increases in monthly premiums. Other Sources of Information. Other sources of information DHA officials reviewed prior to determining whether to add, change, or eliminate requirements from their dental service contracts included the following: Lessons learned from previous procurements for other health services. We previously reported that DHA incorporated lessons learned into the RFP for the dental services contracts as a result of challenges to DHA's contract award decisions for certain managed care support contracts. Specifically, in drafting the RFP for the TDP contract, officials more clearly defined how DHA officials planned to assess the evaluation factors when awarding the contract. Current dental services contract requirements. DHA officials told us that they review current contract requirements and solicit feedback on them from stakeholders, including officials from various branches of the military and organizations representing beneficiaries, before each new solicitation. For example, DHA officials conducted a forum with military services officials through which they identified potential new requirements for the ADDP contract, including a referral tracking system that would indicate authorized care that was not completed and thereby allow military commanders ready access to information about servicemembers' dental readiness. As a result of this and other information they reviewed, the new ADDP contract required the contractor to develop a referral tracking system and train DHA staff on its use. Dental best practices and changes in the professional practice of dentistry. DHA officials review industry best practices and changes in the practice of dentistry to determine new contract requirements. 
For example, when developing new benefits for the current TDP contract, DHA officials researched common dental insurance benefits and best practices and solicited feedback from the American Dental Association's Council on Government Affairs. In addition, DHA officials told us that they used their knowledge of changes in the field of dentistry, including dentistry's increased use of digital technology, to help them identify the electronic submission of radiographs as a potential solution to the previously discussed problem of poor radiograph quality. DHA uses separate contracts for different beneficiary groups, in part, because the programs that serve them are funded differently. For example, the TRDP contract is separate from the ADDP and the TDP contracts because the government does not contribute any funds for the TRDP, but does contribute funds for the ADDP and TDP. Other factors that contributed to DHA's decision to use separate contracts for its different beneficiary groups included differences between programs, such as differences in their purposes, covered dental services, and network access standards. To provide assurance that government funds are not expended for the TRDP, the administrative costs associated with the TRDP must be kept separate from the administrative costs associated with government-funded programs. DHA officials told us that they discussed this issue with potential contractors, who said that they would have to operate the programs separately. DHA officials determined that there would be minimal cost savings or efficiencies from combining contracts under these circumstances. 
Because the TRDP is not directly supported by DOD funds, DHA is exploring the possibility of shifting the option for military retirees to purchase dental insurance, which is currently provided through the TRDP, to the Federal Employees Dental and Vision Insurance Program (FEDVIP), which is administered by the Office of Personnel Management (OPM). Unlike the TRDP, the FEDVIP allows enrollees to select from among several plan options. DHA and OPM are in the preliminary stages of determining the viability of this plan. DHA officials said that this option has both advantages and disadvantages. The primary advantages would include: lowering the workload of staff within the TRICARE Dental Care Office, thereby allowing them to devote more of their time to administering and overseeing the remaining two programs; and allowing retirees greater flexibility to choose among insurance plans that differ in their premiums and coverage options. The primary disadvantages of shifting the option to purchase dental insurance to the FEDVIP would include: the loss, for retirees, of the increased ease of use that results from similarities that have been built into the various TRICARE programs, such as similarities across programs in educational materials; potentially higher premiums if the enrollee selects a plan with more extensive dental benefits; and potential resistance to the change among military retirees. DHA officials told us that it is too soon to determine whether it would be possible to shift the TRDP to the FEDVIP. If it is found to be a viable option, legislative action would be necessary for OPM to open the FEDVIP to military retirees and for DOD to terminate the TRDP. Among other factors, differences in how the ADDP and TDP programs are funded also influenced DHA's decision to use separate contracts for these programs. 
For example, DOD pays all costs for necessary care provided to active duty servicemembers through the ADDP upon receipt of invoices for individually priced services. In contrast, the TDP is an insurance program: DOD and TDP beneficiaries share in the costs of premiums, which are paid to the contractor; and the contractor is at risk for payment to providers. Thus, unlike the ADDP contractor, the TDP contractor bears the risk of loss if total costs through the program are greater than predicted. (If total costs are lower than predicted, the contractor would earn a larger profit than expected.) DHA officials told us that they consulted with potential contractors to identify advantages and disadvantages of merging the ADDP and TDP contracts and they concluded that the disadvantages of combining these two contracts, which are largely due to the differences in funding, outweighed the potential advantages. The potential advantages of combining the contracts that DHA officials and contractors identified included: enhanced continuity of care when individuals switch from reserve to active duty status; fewer instances in which two contractors would need to work together to reconcile payments when an error has been made about whether someone is on active or reserve duty, because a single contractor would be responsible for both programs; greater leverage in fee negotiations because the pool of potential enrollees would be larger; and slight gains in the efficiency of contract administration, particularly for monitoring contractors' performance, thereby allowing TRICARE Dental Care Office staff more time to devote to their other responsibilities. 
The potential disadvantages of combining the contracts that DHA officials and contractors identified included: a reduction in competition if carriers do not want to participate in one or the other program, do not want to manage two different programs simultaneously under the same contract, or do not want to undertake a contract of the resultant size; greater difficulty in selecting the best contractor to award the contract, as one offer may be a better match for the ADDP requirements and another offer a better match for the TDP requirements; the potential for confusion among beneficiaries, dentists, and contractor staff because of differences between the programs (such as different benefits and payment requirements); and the potential for a compromise on quality if the contractor is not able to meet the requirements of both programs simultaneously and well. In the past, there was an interval during which a single contractor held both the ADDP and TDP contracts, and DHA officials told us that there were problems with this arrangement, including obstacles to care; operational challenges; and confusion among beneficiaries, the contractor, and the military DTFs. They said that these problems negatively affected both the delivery of dental care and the reputation of the military health system. Having just awarded the ADDP contract, DHA officials stated that they are not exploring options for combining these contracts. They noted that the government would have to either terminate the ADDP contract early or sole-source the TDP contract to extend it until the ADDP contract expires. They also stated that having to re-solicit a contract would be inefficient. We requested comments on a draft of this product from DOD. In its written comments, reproduced in appendix I, DOD concurred with the findings of the report and stated that the review provided a critical examination of DOD’s contracting initiatives supporting the ADDP, TDP, and TRDP. 
DOD also provided a technical comment, which was incorporated. We are sending copies of this report to appropriate congressional committees, the Secretary of Defense, the Assistant Secretary of Defense (Health Affairs), and other interested parties. The report also will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, key contributors to this report were Kristi Peterson, Assistant Director; Kristen Joan Anderson; Jacquelyn Hamilton; Jennel Lockley; and Drew Long.
DOD offers comprehensive health care coverage to millions of beneficiaries through TRICARE, a system of health care that DOD purchases from private insurers to supplement the health care that DOD provides through its military and dental treatment facilities (DTF). Purchased dental services are provided through separate programs for different groups of beneficiaries: (1) the ADDP provides dental care to active duty servicemembers who do not have ready access to a DTF; (2) the TDP provides dental insurance to eligible dependents of active duty servicemembers, and to National Guard and Reserve members and their dependents; and (3) the TRDP provides dental insurance to retired uniformed service members and their dependents. DHA is responsible for awarding, administering, and overseeing contracts with private insurers for these programs. Senate Report 112-173, which accompanied the Senate Committee on Armed Services' version of the National Defense Authorization Act for fiscal year 2013, mandated that GAO review DOD's private sector care contracts, including its contracts for dental services. GAO examined (1) how DHA developed the requirements for its current dental services contracts and (2) the reasons for DHA's use of separate contracts for different beneficiary groups. GAO reviewed relevant laws, regulations, and DOD contracts and acquisition planning documents. GAO also interviewed DHA officials and other stakeholders. GAO made no recommendations. To develop requirements for its current dental services contracts, officials from the Department of Defense's (DOD) Defense Health Agency (DHA) analyzed market research, data from contractors' past performance, legislation, independent cost estimates, and other information. DHA officials used this information to align the contracts' requirements with contract goals to deliver high quality dental services in a cost effective manner and to facilitate access to care. 
Market research: DHA officials gathered information through market research and analyzed it to determine the capabilities within the dental services market to satisfy the agency's needs. Performance monitoring: DHA officials analyzed information about contractors' past performance, including claims payment data, to assess and revise contract requirements. Legal requirements: DHA officials reviewed laws relevant to each dental services contract to identify changes required by statute. Independent cost estimates: DHA officials reviewed cost estimates for new benefit requirements they were considering for the TRICARE Retiree Dental Program (TRDP) and TRICARE Dental Program (TDP) contracts to assess cost efficiency. Other sources of information: DHA officials reviewed lessons learned from previous procurements, current dental services contract requirements, and dental best practices and changes in the professional practice of dentistry. DHA uses separate contracts for different beneficiary groups in part because the programs that serve them are funded differently. The TRDP contract is separate from the TRICARE Active Duty Dental Program (ADDP) and the TDP contracts because the government does not contribute any funds for the TRDP, but does contribute funds for the ADDP and TDP. To provide assurance that government funds are not expended for the TRDP, contractors said they would have to operate the programs separately. As a result, DHA officials determined that there would be minimal cost savings from combining contracts. Differences in how the ADDP and TDP programs are funded also influenced DHA's decision to use separate contracts for these programs. DOD pays all costs for necessary care provided to active duty servicemembers through the ADDP. In contrast, the TDP is an insurance program: DOD and TDP beneficiaries share in the costs of premiums, which are paid to the contractor; the contractor is at risk for payment to providers. 
DHA officials concluded that the disadvantages of combining these two contracts outweighed the potential advantages. Other factors that contributed to DHA's decision to use separate contracts for different beneficiary groups included differences in program purposes, dental services, and network access standards. In comments on a draft of this report, DOD agreed with its findings and provided a technical comment, which was incorporated.
An adequate supply of health care professionals is necessary to ensure access to needed health care services. HRSA estimated that there were approximately 780,000 physicians and 261,000 physician assistants and advanced practice registered nurses engaged in patient care in 2010. Part of maintaining an adequate health care workforce involves projecting the future supply of health care professionals and comparing that supply to the expected demand for health care services to determine whether there will be enough providers to meet the demand. Such projections can provide advance warning of shortages or surpluses so that health care workforce policies, such as funding for health care training programs, can be adjusted accordingly. In its 2008 physician workforce report, HRSA noted that due to the long time needed to train physicians and to make changes to the medical education infrastructure, policymakers and others need to have information on the adequacy of the physician workforce at least 10 years in advance. We have also previously reported that producing supply and demand projections on a regular basis is important so that estimates can be updated as circumstances change. Health care workforce projections typically measure the supply of health care professionals and the demand for services in a base year and predict how each will change in the future given expected changes in the factors that affect supply and demand. On the supply side, “stock and flow” models are commonly used; these models start with the current number of health care professionals, add new entrants to the workforce, such as students who complete their medical training, and subtract providers who are expected to leave the workforce, such as those who retire. Factors influencing supply include the capacity of educational programs to train new health care professionals, the number of patients that health care professionals are able to care for, and attrition rates. 
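The "stock and flow" approach described above can be sketched in a few lines of code. The starting supply, number of new entrants, and attrition rate below are hypothetical round numbers chosen for illustration, not HRSA's actual model inputs:

```python
# Illustrative stock-and-flow supply projection (hypothetical inputs,
# not HRSA's Physician Supply Model).
def project_supply(base_supply, entrants_per_year, attrition_rate, years):
    supply = base_supply
    trajectory = []
    for _ in range(years):
        # Add new entrants (e.g., newly trained physicians) and subtract
        # those expected to leave the workforce (retirement, disability, death).
        supply = supply + entrants_per_year - supply * attrition_rate
        trajectory.append(round(supply))
    return trajectory

# e.g., 780,000 physicians, 25,000 new entrants per year, 3 percent attrition
print(project_supply(780_000, 25_000, 0.03, 5))
```

A fuller model would disaggregate the workforce by specialty, age, and gender, and convert head counts to FTEs using average hours worked, as HRSA's PSM does.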
On the demand side, a utilization-based approach is often used, which measures the current utilization of health care services and projects that pattern of utilization forward, making adjustments as the population receiving services changes over time. Factors affecting demand include economic conditions, population growth, and changing population demographics such as aging or an increase in insurance coverage. At the federal level, HRSA is responsible for monitoring the supply of and demand for health care professionals and disseminating workforce data and analysis to inform policymakers and the public about workforce needs and priorities. To meet this responsibility, HRSA conducts and contracts for health care workforce research to document and project shortages and to examine trends that influence the supply and distribution of health care professionals, as well as the demand for their services. In 2008, HRSA issued a physician workforce report containing national supply and demand projections for physicians through 2020, which was based on the agency’s physician workforce models: the Physician Supply Model (PSM) and Physician Requirements Model (PRM). Using these models, HRSA projected a shortfall of approximately 49,000 FTE physicians by 2020. The PSM is a “stock and flow” model, which projected the future supply of physicians by taking the number of physicians from a base year (2000), adding new entrants, and subtracting physicians lost through retirement, disability, or death. The PSM projected both active supply (the number of individual physicians) and the effective supply (the number of FTE physicians accounting for the number of hours worked). The number of FTEs was determined by the average number of hours worked for physicians in each specialty by gender and age group. 
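The utilization-based approach described above can likewise be illustrated with a toy calculation. All populations, visit rates, and the visits-per-FTE capacity figure below are hypothetical, used only to show how base-year utilization is carried forward as demographics shift:

```python
# Illustrative utilization-based demand projection (hypothetical figures,
# not HRSA's Physician Requirements Model). Total demand is base-year
# utilization per person applied to the projected population, converted
# to FTE providers via an assumed annual visit capacity per FTE.
def project_demand(populations, visits_per_person, visits_per_fte):
    total_visits = sum(pop * rate
                       for pop, rate in zip(populations, visits_per_person))
    return total_visits / visits_per_fte

# Two age groups (under 65 and 65-plus), the older group using more care.
base = project_demand([270e6, 40e6], [3.0, 7.0], 5000)
aged = project_demand([265e6, 55e6], [3.0, 7.0], 5000)  # aging population
print(round(base), round(aged))
```

Holding per-person visit rates fixed while shifting population toward the older group raises projected FTE demand, which is the mechanism by which aging drives demand in such models.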
The PRM is a utilization-based model, which projected the demand for physicians starting with the utilization of health care services in 2000 by age, sex, geographic location, and insurance coverage type (see fig. 1). The PRM assumed that supply and demand were in equilibrium in 2000, that is, that there were enough physicians to meet the demand for health care services. The PRM’s baseline projection also assumed that patterns of utilization would not change, although HRSA created some alternative scenarios showing how utilization might change (and therefore affect demand) because of factors such as economic growth. HRSA also included a scenario that accounted for the effect of nonphysician providers, such as nurse practitioners and physician assistants, who may offset the demand for physicians by providing services that otherwise would have been provided by physicians. HRSA and others have noted that a drawback of the utilization-based approach, which carries forward current utilization patterns, is that when calculating the number of physicians needed, any current imbalances in the system, such as populations that may be underserved, or any overutilization of health care services are also carried forward. HRSA’s 2008 physician workforce report predates PPACA, which was enacted in 2010. PPACA contains provisions that have the potential to affect both health care workforce supply and demand, which increases the uncertainty of health care workforce projections (see table 1). Several health care workforce researchers have published estimates of the effects of PPACA insurance coverage expansions on workforce supply and demand. These studies found varying estimates for the number of additional primary care providers required to meet the needs of the newly insured population, ranging from 4,300 to 8,000 providers. The variations in these projections are the result of differences in methodologies and assumptions used in modeling. 
AAMC also increased its overall projection of physician shortages for 2025 by 6,200 FTE physicians, on the basis of expected increases in health care demand as a result of greater rates of insurance coverage under PPACA, among other factors. PPACA provides for the establishment of new delivery models such as accountable care organizations (ACO) and patient- centered medical homes (PCMH). ACO models consist of integrated groups of providers who coordinate care for a defined patient population in an effort to improve quality, reduce costs, and share in any savings. The PCMH model is a way of organizing and delivering primary care that emphasizes comprehensive, coordinated, accessible, and quality care built on strong patient-provider relationships. Such models also encourage shifting care provision to nonphysician providers, potentially decreasing the need for additional physicians. Some researchers have stated that they expect new delivery models, such as ACOs, will have a significant and lasting effect on the broader health care marketplace, though research shows conflicting results as to how these new delivery models will affect the supply of and demand for health care professionals. PPACA also mandated the establishment of the National Health Care Workforce Commission and required HHS to establish the National Center for Health Workforce Analysis (NCHWA) to collect and analyze data on the health care workforce and evaluate workforce adequacy. The National Health Care Workforce Commission was charged with conducting analyses of health care supply and demand and submitting annual reports to Congress that included recommendations. However, this commission has not received appropriations and therefore has not met since it was appointed. In 2010, HRSA established NCHWA within its Bureau of Health Professions. NCHWA is responsible for developing and disseminating accurate and timely data and research on the health care workforce, among other things. 
In 2012, NCHWA produced a timeline for updating HRSA’s workforce projections, as we recommended. Since its last published report on physician supply and demand in 2008, HRSA has initiated work to produce new workforce models and reports, but has not published any new reports containing national workforce projections. Specifically, HRSA has missed one of its publication goals for new workforce projections and has created a revised timeline that postpones future publications. Given that HRSA’s 2008 report was based on 2000 data, the most recent projections available from HRSA to Congress, researchers, and the general public to inform health care workforce policy decisions are based on data that are more than a decade old. From 2008 to 2012, HRSA awarded five contracts to three research organizations to update or create new workforce projection models, generate new national workforce projections, and produce reports. (See table 2 for a summary of the contracts and their status.) As of July 2013, HRSA had received three reports resulting from these contracts, and two more reports were under development. Contractor A delivered the first report, which includes projections for the primary care workforce to 2020, in July 2010, but HRSA was still reviewing and revising the draft as of July 2013. HRSA officials said that this primary care report has required extensive consultation with other HHS components to ensure that the methods used were consistent with other ongoing workforce-related work within the department. In addition, officials said that significant revisions were required to incorporate the effects of PPACA. Contractor B delivered the second report, which updated HRSA’s 2008 physician workforce projections using more recent data, in February 2011. 
However, according to HRSA officials, the agency decided not to publish this report because it did not incorporate nonphysician providers, which they have since determined should be accounted for when assessing the adequacy of the health care workforce. Officials also said, however, that research conducted under this contract regarding the health care workforce effects of PPACA was incorporated into HRSA’s later projection models. The third report, the clinician specialty report, which projects the supply of and demand for health care professionals by specialty through 2025, was delivered in November 2012 and is still under HRSA’s review. HRSA has missed one of its timeline goals for finalizing its review and publishing new reports containing national projections and has created a new timeline that postpones publication dates for this and two other health care workforce reports. Although HRSA’s original timeline stated that the clinician specialty report would be published in December 2012, HRSA’s revised timeline states that this report is expected to be published in the summer of 2014. The revised timeline also included new publication dates for HRSA’s report on the primary care workforce and for reports based on its new microsimulation models. (Table 3 shows HRSA’s original and revised timelines for publication.) HRSA attributed the delay in publishing the clinician specialty report to data challenges and modeling limitations. For example, HRSA officials cited limited research and data on the effects of new health care delivery models being funded and tested in response to PPACA. HRSA officials told us that new models such as ACOs and PCMHs have not yet been studied adequately to know whether they will increase or decrease the demand for health care professionals. It may be several years before relevant data are available. 
According to a health care workforce researcher we interviewed, there is going to be an inevitable lag in obtaining data given that some delivery system models, such as ACOs, are still being set up, and time will be needed to collect and analyze data and publish any findings. In addition, other researchers have pointed out that workforce-relevant data are not being systematically collected from new models being supported and tested in response to PPACA. HRSA officials also have cited challenges due to limited research on nonphysician providers. For example, more research is needed to determine how much nonphysician providers offset the demand for physicians across different specialties. In our review of HRSA’s clinician specialty models, we observed that for some specialties, the addition of nonphysician providers has the potential to turn projected shortages into surpluses. Another challenge HRSA officials said they need to address stems from an inherent limitation of utilization-based models, namely, that they project forward the utilization patterns of the past and therefore do not adequately account for rapid changes in the health care system. HRSA officials said that this modeling limitation has caused surpluses and shortages that do not reflect anticipated workforce trends and require time to analyze. For example, officials explained that when a provider specialty is in shortage, utilization is by definition low. If investments are made to increase the supply of the specialty in shortage, then the model carries forward the past low utilization and incorporates increased provider supply, which consequently projects a surplus for the future because there will be more providers than were utilized in the past. HRSA officials said that the agency does not have a standard written work plan or set of procedures for accomplishing the tasks necessary to prepare a report for publication after final reports are delivered from contractors. 
Officials said that although there are general policies that guide review, the specific steps of the review process and which internal and external officials participate are determined on a case-by-case basis. In addition, officials said that it is common for milestone dates to change depending on the complexity of the issues raised during review. According to HRSA officials, the dates included in their timeline were based solely on when they expected to receive models and reports from contractors and did not account for the continued analytical work involved in reviewing delivered products. For example, according to HRSA officials, the original 2013 goal dates for publishing reports based on the new microsimulation models had to be postponed until 2014 because they will require additional time to review after they are received. Without standard procedures, agency officials may not be able to accurately predict how long products will take to review or to monitor their progress through the review process to ensure they are completed in a timely manner. However, HRSA officials have stated that once the microsimulation models are completed, these models will offer the ability to more easily update projections as new data become available and should result in more routine and frequent reporting. The federal government has made significant investments in health care professional training programs to help ensure that there is a sufficient supply of health care professionals to meet the nation’s health care needs. Health care workforce projections play a critical role in providing information on future shortages or surpluses of health care professionals so that policies can be adjusted, including targeting health care training funds to the areas of greatest need. 
We recommended in 2006 that HRSA, as the federal agency designated to monitor the supply of and demand for health care professionals, develop a strategy and establish time frames to more regularly update and publish national workforce projections for the health professions. While HRSA created a timeline for publishing new projection reports in 2012, the agency has since revised its timeline to postpone publication of two other health care workforce reports after failing to meet its December 2012 publication goal for a clinician specialty report projecting the supply of and demand for health care professionals through 2025. Other reports that have been delivered by contractors since HRSA published its last report in 2008 have either been set aside or are still being reviewed. In the case of the primary care workforce report containing projections to 2020, review has been ongoing for 3 years. If this report were published in 2013, it would project only 7 years into the future. HRSA itself has stated that physician workforce projections should be completed at least 10 years in advance to provide enough time for policy interventions to influence the size and composition of the workforce. In the absence of published projections, policymakers are denied the opportunity to use timely information from HRSA to inform their decisions on where to direct billions of dollars in training funds. It is also important to update projections on a regular basis so that changing circumstances, such as the enactment of PPACA or the growth in nonphysician providers, can be incorporated. Currently, the most recent projections available from HRSA are based on patterns of utilization and care delivery in 2000, predating PPACA by a decade. 
HRSA is now making larger financial investments in new workforce projection models, but in the absence of standard written processes specifying how the reports resulting from these models will be reviewed, HRSA may be hindered in its ability to monitor the development of these reports and ensure that they are published in keeping with its revised timeline. We recommend that the Administrator of HRSA take the following three actions: Expedite the review of the report containing national projections to 2020 for the primary care workforce to ensure it is published in the fall of 2013 in accordance with HRSA’s revised timeline. Create standard written procedures for completing the tasks necessary to review and publish workforce projection reports delivered from contractors; such procedures may include a list of necessary review steps, estimates of how long each step should take to complete, and designated internal and external reviewers. Develop tools for monitoring the progress of projection reports through the review process to ensure that HRSA’s timeline goals for publication are met. We provided a draft of this report to HHS for review. HHS’s comments are reprinted in appendix I. HHS also provided technical comments, which we incorporated as appropriate. In its comments, HHS agreed with our recommendations and described actions that the department is taking to implement them. In response to our first recommendation to expedite the review and publication of a report containing primary care workforce projections, HHS said that it expects to release the report on schedule in the fall of 2013. Regarding our second recommendation to develop standard written procedures for report review, HHS said that HRSA has developed a framework for report development based on project management principles that is being made available electronically to all HRSA employees. 
According to HHS, this framework facilitates planning and provides guidance throughout the report development process. Concerning our third recommendation to develop tools for monitoring the progress of reports through the review process, HHS said that HRSA has created a computer-based tool capable of generating estimated time ranges for completing each step in the report development and review process, which it anticipates will allow for better oversight of report timelines. In addition to these agencywide efforts, HHS said that HRSA’s Bureau of Health Professions (BHPr) is in the process of developing a review process specifically for proposed workforce studies and contracts that will emphasize more comprehensive review in the early stages of development with the aim of reducing the time needed for final report review. In addition to addressing our recommendations, HHS commented that our draft report did not discuss a number of other workforce-related activities undertaken by NCHWA, such as data collection efforts and the production of reports that do not include national projections. For example, in 2012 NCHWA fielded a survey to collect nationally representative data on nurse practitioners. While we agree with HHS that such activities are important, they are not a substitute for regularly producing updated national projections. We did not include information in this report on other HRSA reports not containing national projections, or HRSA's data collection efforts, because the scope of our review was limited to national projections of the supply of and demand for physicians, physician assistants, and APRNs. As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after its issuance date. At that time, we will send copies of this report to the Administrator of HRSA and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff members have any questions, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix II. In addition to the contact named above, Martin T. Gahart, Assistant Director; Hannah Locke; Elise Pressma; and Jennifer Whitworth made key contributions to this report.
For over a decade, government, academic, and health professional organizations have projected national shortages of health care professionals, which could adversely affect patients' access to care. However, there is little consensus about the nature and extent of future shortages, partly because of the complexity of creating projections and uncertainty about future health care system changes. Up-to-date workforce estimates are essential given the significant federal investment in health care training programs. Within HHS, HRSA is responsible for monitoring health care workforce adequacy; to do this, HRSA conducts and contracts for workforce studies. GAO was asked to provide information about health care workforce projections. This report examines the actions HRSA has taken to project the future supply of and demand for physicians, physician assistants, and advanced practice registered nurses since publishing its 2008 report. GAO reviewed HRSA's contract documentation, select delivered products, and timeline goals for publication. GAO also interviewed HRSA officials, workforce researchers, and provider organizations. Since 2008, the Health Resources and Services Administration (HRSA) within the Department of Health and Human Services (HHS) has awarded five contracts to research organizations to update national workforce projections, but HRSA has failed to publish any new reports containing projections. As a result, the most recent projections from HRSA available to Congress and others to inform health care workforce policy decisions are from the agency's 2008 report, which is based on data that are more than a decade old. While HRSA created a timeline for publishing new workforce projection reports in 2012, the agency missed its goal to publish a clinician specialty report by December 2012 projecting the supply of and demand for health care professionals through 2025. 
HRSA officials attributed the delay in publishing this report to data challenges and modeling limitations. HRSA has also revised its timeline to postpone publication of two other health care workforce reports, as shown in the table below. HRSA officials said that the agency does not have standard written procedures for preparing a report for publication after final reports are delivered from contractors, which may impede its ability to accurately predict how long products will take to review and monitor their progress through the review process. GAO recommends that the Administrator of HRSA expedite the review and publication of HRSA’s report on national projections for the primary care workforce, create standard written procedures for report review, and develop tools to monitor report review to ensure timeline goals for publication are met. HHS agreed with GAO’s recommendations and provided technical comments.
Historically, electric utilities operated as regulated monopolies and were thus required to provide electricity service to all customers within their power service areas in exchange for exclusive service territories. Two key laws—the Public Utilities Regulatory Policies Act of 1978 (PURPA) and the Energy Policy Act of 1992—have resulted in an increasingly competitive wholesale electricity market. PURPA authorized operation of electric power-generating entities that were exempt from many federal regulations. Called “independent power producers” (IPPs), these entities typically use new technologies, such as natural gas-fired generation units, to produce power. The Energy Policy Act of 1992 required that a utility make its transmission lines accessible to other utilities (called “open transmission access”). This open access has enabled wholesale customers to obtain electricity from a variety of competing suppliers, even if that power must be transmitted over lines owned by another utility—referred to as the “wheeling” of power. This ability to wheel power has resulted in increasing wholesale competition in the electricity industry across the United States, with competition becoming intense in some areas. As a result, wholesale electricity rates have decreased in many parts of the country over the last several years, which has affected, to varying degrees, the PMAs, TVA, and RUS borrowers. On a retail basis, the traditional regulated utility monopoly still exists in most states. However, issues relating to retail open access are being addressed on a state-by-state basis and in the Congress, and end-use customer choice is expected to result. Electricity generation, transmission, and distribution in the United States involve several government entities, including the following:
- RUS, an entity within the Department of Agriculture (USDA), provides direct or guaranteed loans primarily to rural electric cooperatives that market power on a wholesale and retail basis.
- Federal PMAs within the Department of Energy (DOE) market wholesale power generated primarily at federal water projects. The U.S. Army Corps of Engineers (Corps) in the Department of Defense and the Bureau of Reclamation (Bureau) in the Department of the Interior both operate multipurpose water projects, many of which generate electric power. Other purposes of these multipurpose projects include flood control, navigation, irrigation, and recreation. The Corps and the Bureau allocate power-related costs and some irrigation and other nonpower costs for repayment through the PMAs’ power revenues. The Corps and the Bureau are referred to as the operating agencies.
- TVA, a multipurpose, independent government corporation, generates and transmits electricity, primarily on a wholesale basis, to distributors.
To some extent, these entities interact with each other in the electricity market. For example, the PMAs sell power to some rural electric cooperatives financed by RUS, and Southeastern sells power to TVA. TVA also sells power to rural electric cooperatives. In aggregate terms, federal power generation represents about 10 percent, and rural electric cooperative generation represents about 4 percent, of all generating capability in the United States. In this report, we focus our discussions on the PMAs rather than the operating agencies. This is because the PMAs are responsible for marketing power produced at federal facilities and setting rates to recover the federal government’s costs associated with the power production. For a more detailed discussion of competition in the electricity market and the federal entities involved, see appendix I in volume 2 of this report. For fiscal year 1996, we estimate that the federal government incurred $2.5 billion in net costs, including about $982 million in RUS loan write-offs, from the electricity-related activities of RUS and the PMAs.
We estimate that cumulative net costs for fiscal years 1992 through 1996 were about $8.6 billion in constant 1996 dollars. Currently, the revenues earned by these entities do not cover the full cost of their operations. As a result, the federal government, rather than RUS borrowers and PMA ratepayers, bears these costs. TVA generally recovers all power-related costs from its ratepayers. To define the full cost to the PMAs and TVA of generating, transmitting, and/or marketing federal power and to RUS of providing loans and loan guarantees to its electricity borrowers, we referred to Office of Management and Budget (OMB) Circular A-25, User Fees; industry practice; and federal accounting standards. Applying the definitions used in these contexts, the full cost of generating, transmitting, and marketing power or providing loans and loan guarantees would include all direct and indirect costs incurred by RUS, the PMAs, TVA, and other entities directly involved in supporting RUS, PMA, and TVA operations. We estimated cumulative net costs for fiscal years 1992 through 1996 because information for these years was readily available. These cumulative net cost calculations, as well as those for fiscal year 1996, were intended to measure the net cost to the federal government, on an accrual basis, of the electricity-related activities of RUS, the PMAs, and TVA. It is important to note that RUS, the PMAs, and TVA were generally following applicable laws and regulations regarding recovery of costs. Table 1 summarizes the net costs by type for each entity for fiscal year 1996 and cumulatively for the 5 years ending with fiscal year 1996 (in constant 1996 dollars). Each of the listed costs is discussed in detail following the table. A net financing cost to the federal government exists in the RUS electric program because the annual interest income received from RUS borrowers is substantially less than the federal government’s annual interest expense on funds provided to the borrowers.
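The net financing cost described above is simple arithmetic: the interest Treasury incurs to fund the loans minus the interest income it receives from borrowers. A minimal sketch of that calculation; the two input figures below are hypothetical, chosen only so that the difference matches the report's $874 million fiscal year 1996 estimate:

```python
def net_financing_cost(treasury_interest_expense, borrower_interest_income):
    """Net financing cost borne by the federal government: Treasury's
    annual interest expense on funds it raised for the program, minus
    annual interest income received from program borrowers."""
    return treasury_interest_expense - borrower_interest_income

# Hypothetical components for illustration only (not the report's actual figures);
# the shortfall equals the report's $874 million estimate for RUS in fiscal year 1996.
shortfall = net_financing_cost(1_500_000_000, 626_000_000)
```

The same subtraction underlies the net financing cost estimates for the PMAs' appropriated debt discussed later in this section.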
Interest income is affected by favorable rates and terms given to some borrowers and also by financially troubled RUS borrowers that have missed scheduled loan payments. According to RUS reports, about $10.5 billion is owed by 13 financially stressed wholesale producers that we refer to as Generation and Transmission Cooperatives (G&T) borrowers. We estimate that the net financing cost (interest expense minus interest income) to the federal government for the RUS electric program for fiscal year 1996 was about $874 million. Cumulatively over the last 5 years, we estimate that the net financing costs totaled about $3.8 billion (in constant 1996 dollars). Financially stressed borrowers’ failure to make scheduled payments has had a significant impact on the federal government’s interest income. For example, one G&T borrower, Cajun Electric, has not been required to make interest payments on its $4.2 billion debt since filing for bankruptcy in December 1994. In addition, Cajun made total principal payments of only about $19 million from December 1994 through the end of fiscal year 1996. Based on Cajun’s contractual interest rate of about 8.6 percent, the federal government has forgone interest income of about $30 million per month, or about $1 million per day, since December 1994. In the meantime, the federal government continues to incur interest expense on financing related to this borrower. A detailed discussion of the net financing costs related to RUS is presented in appendix V of volume 2. RUS has recently written off, under Department of Justice (DOJ) authority, a substantial dollar amount of loans to rural electric cooperatives. The most significant loan write-offs are related to two G&T borrowers. In fiscal year 1996, about $982 million of one G&T borrower’s loans was written off and forgiven because the G&T was unable to sell its electricity at a price sufficient to service its RUS loans due to an investment in an uneconomical nuclear plant. 
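The forgone-interest figures cited above for Cajun Electric follow directly from simple interest on the outstanding balance; a sketch reproducing that arithmetic from the two numbers in the text:

```python
# Cajun Electric figures from the text: $4.2 billion of debt at a
# contractual interest rate of about 8.6 percent.
debt = 4.2e9
annual_rate = 0.086

forgone_per_year = debt * annual_rate      # about $361 million per year
forgone_per_month = forgone_per_year / 12  # about $30 million per month
forgone_per_day = forgone_per_year / 365   # about $1 million per day
```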
In the early part of fiscal year 1997, loans to another G&T borrower were written off and forgiven for a loss of about $502 million because the borrower was unable to recover costs for a coal-fired generating plant built to satisfy anticipated demand that did not materialize. The total amount of write-offs during fiscal years 1992 through 1996 was about $1.05 billion (in constant 1996 dollars)—with $0.5 billion of additional write-offs in the early part of fiscal year 1997. The federal government also incurs costs for the electricity-related portion of RUS’ appropriation for administrative expenses and for RUS employee pension and postretirement health benefits. In addition, attorneys at DOJ spend substantial amounts of time litigating on behalf of RUS during loan restructuring or bankruptcy proceedings. These estimated net costs amounted to $1 million for benefits and $22 million in other charges for fiscal year 1996 and $3 million and $112 million, respectively, for fiscal years 1992 through 1996 (in constant 1996 dollars). These other net costs are discussed in appendix IV of volume 2 to this report. The net financing cost for the PMAs results primarily from appropriated debt provided by the federal government at low interest rates with favorable repayment terms. Appropriated debt carries a fixed interest rate with no ability for Treasury to call the debt. Although PMAs are generally required to pay off highest interest rate debt first, they cannot refinance the debt. Thus, Treasury bears the risk of increases in interest rates and PMAs, to some degree, bear the risk of decreases in interest rates. The interest rates on outstanding PMA appropriated debt are substantially below the rates Treasury incurs to provide funding to the PMAs and other federal programs. Thus, interest income earned by Treasury on the appropriated debt is less than Treasury’s interest expense, which it incurs to finance this debt. 
The PMAs have accumulated substantial amounts of appropriated debt at low interest rates primarily because, in accordance with applicable guidance, they repay high interest rate debt first and because PMA appropriated debt incurred prior to 1983 was generally at below-market interest rates in effect at the time. We estimate that the net financing cost for the three PMAs’ appropriated debt for fiscal year 1996 was $208 million and for BPA, $377 million. Cumulatively, for fiscal years 1992 through 1996, we estimate that the net financing cost in constant 1996 dollars has been over $1.1 billion for the three PMAs and nearly $2 billion for BPA. Table 2 shows the differences in the interest rates paid by the PMAs, Treasury’s cost of funds, and the components of our estimates. As a result of legislation passed in 1996, BPA’s appropriated debt was restructured from $6.85 billion, with an average interest rate of 3.5 percent, to $4.29 billion, with an average interest rate of 7.1 percent. According to BPA’s 1996 final rate proposal, the restructuring “is intended to permanently eliminate subsidy criticisms directed at the relatively low interest rates assigned to historic Federal Columbia River Power System (FCRPS) appropriations.” The legislation required that the present value of the new principal balance equal the present value of the principal and interest payments that would have been made if restructuring had not occurred, plus an additional $100 million. The legislation also required that the interest rate applicable to the new principal balance (including the additional $100 million) be set to approximate the prevailing interest rate on Treasury debt of comparable maturity issued at the time of the restructuring. The dates at which the segments of appropriated debt become due are not changed by the legislation. As was the case before the restructuring, the due dates extend through the year 2046 and average about 26 years remaining. 
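The restructuring condition described above is a present-value equivalence: the new, smaller principal at a near-market rate must have the same present value as the old scheduled payments, plus $100 million. A minimal sketch of the discounting involved, with a made-up cash flow rather than BPA's actual repayment schedule:

```python
def present_value(cash_flows, discount_rate):
    """Discount a list of year-end cash flows back to today."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Made-up example: a single $110 payment due in one year, discounted at
# 10 percent, is worth $100 today. The 1996 legislation applied this logic
# to BPA's scheduled principal and interest payments, discounting at
# prevailing Treasury rates of comparable maturity.
pv = present_value([110.0], 0.10)  # 100.0
```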
Because the restructuring was not effective for fiscal year 1996, this transaction did not change the $377 million estimated net financing cost on BPA appropriated debt for fiscal year 1996. In the future, with the exception of the $100 million, if BPA repays appropriated debt at maturity, the net present value of future financing costs to the federal government will also remain unchanged. BPA also had $2.5 billion of medium- and long-term debt held by Treasury in the form of BPA bonds. Interest rates on this debt are set based on debt with similar terms issued by U.S. government corporations. This debt matures in fiscal years 1997 through 2034, with $346.2 million maturing by the year 2000. Based on our review of the terms of this debt, we believe there is no net cost to the federal government. The federal government incurs a portion of the cost for Civil Service Retirement System (CSRS) pensions and substantially all of the cost for postretirement health benefits for current PMA and operating agency employees. For fiscal year 1996, we estimate that the net cost to the federal government of providing these benefits was about $16 million for the three PMAs and almost $21 million for BPA. Cumulatively, for fiscal years 1992 through 1996, we estimate that the net cost in constant 1996 dollars was $82 million for the three PMAs and $110 million for BPA. Recovery of the full annual cost of pension and postretirement health benefits is planned by Southeastern, Southwestern, and Western starting in fiscal year 1998. BPA plans to begin recovering some of these costs in 1998, with full recovery planned beginning in 2002. Consistent with current policies and law, the PMAs do not plan to recover pre-fiscal year 1998 net costs. We found that all of the PMAs had incurred costs and/or had costs allocated to them by the operating agencies for projects that were completed, under construction, or cancelled, for which the full costs were not being recovered. 
In some cases, this was because the power-generating projects had never operated as designed. In accordance with DOE guidance, the PMAs set rates that exclude the costs of nonoperational parts of power projects, including capitalized interest. For example, at the Russell Project, partially on line since 1985, litigation over excessive fish kills has kept four of the eight turbines from becoming operational. As a result, over one-half of the project’s construction costs—about $500 million—have been excluded from Southeastern’s rates. The net costs relating to these construction projects for fiscal year 1996 represent capitalized or unpaid interest incurred in that year. We estimate that for fiscal year 1996, the net cost to the federal government for the projects we identified is $30 million for the three PMAs and $0.2 million for BPA. Cumulatively, from fiscal years 1992 through 1996, we estimate that the net cost in constant 1996 dollars is about $138 million for the three PMAs and $1.2 million for BPA. The PMAs have stated that in most of these instances, including Russell, these net costs will be recovered in future years. The PMAs incur a number of other net costs including environmental mitigation, irrigation, deferred payments, and interest expense on store supplies totaling approximately $157 million cumulatively for fiscal years 1992 through 1996 in constant 1996 dollars. A net recovery totaling approximately $69 million existed for fiscal year 1996 resulting from Western’s repayments of interest and O&M expenses which had been deferred in prior years. These other net costs are discussed in appendix IV of volume 2 of this report. Unlike the PMAs’ appropriated debt, TVA’s appropriated debt has terms that provide Treasury full reimbursement for its related financing costs. Substantially all of TVA’s appropriated debt was incurred prior to the 1959 self-financing amendments to the TVA Act. 
The Tennessee Valley Authority Act of 1933, as amended, requires TVA to make fixed annual payments of principal to Treasury and pay interest at an annually calculated Treasury interest rate on the outstanding balance. In accordance with the TVA Act, the interest rate is the average rate Treasury pays on its total marketable public obligations—6.87 percent for fiscal year 1996. The terms of this debt include resetting of the interest rate annually, which is a short-term debt feature, and a principal repayment term of over 50 years, which is characteristic of long-term debt. Consequently, we believe that the terms of this debt, including the use of Treasury’s total average interest rate for all debt, result in no net cost to the federal government. As of September 30, 1996, TVA also had $3.2 billion of long-term debt that was held by the Federal Financing Bank (FFB). This debt matures at various dates from fiscal years 2003 through 2016 and bears interest rates ranging from about 8.5 percent to 11.7 percent. Because the interest rate on TVA’s FFB debt is based on the rate Treasury pays plus a one-eighth of 1 percent administrative fee, we believe there was no net financing cost to the federal government for this debt in fiscal years 1992 through 1996. Recently, TVA asked the FFB to allow it to repay this debt before its maturity dates. However, TVA was not willing to incur the prepayment premiums required under the terms of the existing loan contracts with FFB. In 1995, the Congressional Budget Office (CBO) was asked to review proposed legislation that would have authorized TVA to prepay $3.2 billion in loans made by the FFB without paying the prepayment premiums. CBO estimated that enacting such legislation in 1996 would have increased federal outlays by about $120 million per year through 2002, with declining amounts thereafter until the last notes matured in the year 2016. The estimated cost reflects the net effect of the refinancing on both Treasury and TVA.
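Because the FFB lending rate equals Treasury's own borrowing rate plus a one-eighth of 1 percent administrative fee, the interest income on the loan covers Treasury's financing cost with a small margin. A sketch of that spread; the $3.2 billion principal and the fee come from the text, while the Treasury rate is an assumed, illustrative value:

```python
principal = 3.2e9                    # TVA's FFB debt, from the text
treasury_rate = 0.065                # assumed Treasury cost of funds (illustrative)
ffb_rate = treasury_rate + 0.00125   # Treasury rate plus one-eighth of 1 percent fee

# The spread over Treasury's own financing cost is just the administrative fee,
# so the government earns rather than loses on this debt.
annual_fee_income = principal * (ffb_rate - treasury_rate)  # $4 million per year
```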
This proposed legislation was never introduced. TVA has its own pension and postretirement health benefit plans, which are funded through TVA’s electricity rate charges. TVA’s postretirement health plan covers all TVA employees, while its pension plan covers all employees except for a small number covered by federal plans. As of September 30, 1996, TVA had about 163 staff employed in its power program who were part of the federal government’s pension plans. As with most other federal agencies, TVA does not currently reimburse the federal government for the full cost of the benefits of employees covered by the CSRS. We estimate that the net cost to the federal government for these benefits was about $0.7 million in fiscal year 1996 and about $4 million for fiscal years 1992 through 1996 in constant 1996 dollars. The federal government has financial exposure stemming from its over $84 billion of direct and indirect financial involvement in the electricity-related activities of RUS, the PMAs, and TVA. Comparatively high debt and fixed costs resulting from factors such as investments in uneconomical construction projects have left federal electricity-related entities vulnerable, in varying degrees, and result in a risk of future losses to the federal government. The federal government’s risk of future losses is directly related to the ability of the RUS borrowers, the PMAs, and TVA to set their rates in a competitive and/or regulated market at a level sufficient to recover all of their costs. The federal government faces financial exposure because of direct and indirect financial involvement in the electricity-related activities of RUS, the PMAs, and TVA. As of September 30, 1996, the federal government had over $53 billion of primarily direct lending to RUS borrowers, the PMAs, and TVA and appropriated debt owed by the PMAs and TVA.
The federal government would incur a future loss on this direct involvement to the extent that RUS borrowers, the PMAs, or TVA failed to make payments on federal debt. As of September 30, 1996, the federal government also had indirect financial involvement of over $31 billion—primarily TVA bonds and BPA’s nonfederal debt. Although the TVA bonds and BPA’s nonfederal debt are not explicitly guaranteed by the federal government, the financial community generally views them as having an implicit federal guarantee. For this indirect involvement, the federal government would incur future losses if it incurred unreimbursed costs as a result of actions it took to prevent default or breach of contract by the federal entity on nonfederal debt. Table 3 shows the federal government’s direct and indirect financial involvement in RUS, the three PMAs, BPA, and TVA. In assessing risk to the federal government, we used the criteria for contingencies from Statement of Federal Financial Accounting Standards (SFFAS) No. 5, Accounting for Liabilities of the Federal Government. According to SFFAS No. 5, “A contingency is an existing condition, situation, or set of circumstances involving uncertainty as to possible gain or loss to an entity. The uncertainty will ultimately be resolved when one or more future events occur or fail to occur.” When a loss contingency exists, the likelihood that the future event or events will confirm the loss or the incurrence of a liability can range from probable to remote:
- Probable: The future confirming event or events are more likely than not to occur.
- Reasonably possible: The chance of the future confirming event or events occurring is more than remote but less than probable.
- Remote: The chance of the future event or events occurring is slight.
We assessed risk of loss for RUS, which is essentially a lending operation, based on a review of the loan portfolio, an assessment of the production costs of key borrowers relative to their respective markets, and consideration of state regulatory actions. For the three PMAs, BPA, and TVA, we considered the cost of electricity production and rates, key financial ratios, generating mix, competitive environment, management actions, and legislative and other factors. The risk factors we used to assess risk of loss to the federal government from its electricity-related activities are consistent with those used by the bond rating services to assess credit risk for nonfederal utilities. In a competitive market for a relatively homogeneous product like electricity, being among the lowest cost producers is generally the most important factor in determining competitive position. As discussed below, average revenue per kilowatthour (kWh) is a reasonable indicator of power production costs. Thus, because RUS borrowers and the PMAs are subject to some wholesale competition, one of the key factors we looked at in assessing the risk described in this section of the report was these entities’ average revenue per kWh for wholesale sales compared to nonfederal utilities. The average revenue per kilowatthour for wholesale sales (sales for resale) is referred to in this report as average revenue per kWh. This average is calculated by dividing total revenue from the sale of wholesale electricity by the total wholesale kWhs sold. Because the PMAs, publicly-owned generating utilities (POGs), and rural electric cooperatives generally recover costs through rates with no profit, average revenue per kWh should reflect the PMAs’, POGs’, and rural electric cooperatives’ power production costs. For investor-owned utilities (IOUs), average revenue per kWh should reflect power production cost plus the regulated rate of return. 
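The average-revenue measure defined above is a simple quotient; a sketch of the calculation with hypothetical sales figures:

```python
def avg_revenue_per_kwh(total_wholesale_revenue, total_wholesale_kwh):
    """Average revenue per kWh: total revenue from wholesale electricity
    (sales for resale) divided by total wholesale kWhs sold."""
    return total_wholesale_revenue / total_wholesale_kwh

# Hypothetical utility: $40 million of wholesale revenue on 2 billion kWh sold.
rate = avg_revenue_per_kwh(40e6, 2e9)  # 0.02 dollars, i.e., 2 cents per kWh
```

For cost-based sellers such as the PMAs, POGs, and cooperatives, this quotient tracks production cost directly; for IOUs it also includes the regulated rate of return, as the text explains.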
Given that a large portion—an average of 79 percent over the last 5 years—of IOU rate of return (net income) is paid out in common stock dividends, which is a financing cost, average revenue per kWh also approximates power production costs for IOUs. From fiscal year 1996 through July 31, 1997, RUS wrote off about $1.5 billion in electricity loans. As of September 30, 1996, $10.5 billion of the $32.3 billion total electricity portfolio relates to loans to G&Ts that are in bankruptcy or otherwise financially stressed. The total principal outstanding on G&T loans is approximately $22.5 billion, or about 70 percent of the RUS electric loan portfolio. Distribution borrowers make up the remaining 30 percent of the electric loan portfolio. At the time of our review, there were 55 G&T borrowers and 782 distribution borrowers. Our review focused on the G&T loans since they make up the majority of the portfolio in terms of dollars and generally pose the greatest risk of loss to the federal government. It is probable that the federal government will continue to incur substantial losses on the loans to financially stressed G&T borrowers. It is also probable that additional future losses will be incurred on loans to G&T borrowers that are not currently troubled but will become financially stressed due to high production costs and competitive and/or regulatory pressures. Under DOJ authority, RUS has recently written off a substantial dollar amount of loans to rural electric cooperatives. The most significant write-offs related to G&T loans. In fiscal year 1996, one G&T made a lump sum payment of $237 million to RUS in exchange for RUS writing off and forgiving the remaining $982 million of its RUS loan balance. This borrower’s financial problems stemmed from its participation in a nuclear plant construction project that experienced lengthy delays as well as severe cost escalation.
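The fiscal year 1996 settlement above implies a recovery rate that can be computed directly, assuming the $982 million forgiven was the balance remaining after the $237 million payment:

```python
lump_sum_payment = 237e6  # paid to RUS, from the text
amount_forgiven = 982e6   # remaining balance written off, from the text

# Assumes the forgiven amount was what remained after the lump sum payment.
balance_before_settlement = lump_sum_payment + amount_forgiven  # $1.219 billion
recovery_rate = lump_sum_payment / balance_before_settlement    # about 19 percent
```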
When construction of the plant began in 1976, its total cost was projected to be $430 million. However, according to the Congressional Research Service, the accrued expenditures by 1988 were $3.9 billion as measured in nominal terms (1988 dollars). These cost increases are due primarily to changes in Nuclear Regulatory Commission (NRC) health and safety regulations after the Three Mile Island accident. The remaining increases are generally due to inflation over time and capitalization of interest during the delays. In the early part of fiscal year 1997, another G&T borrower made a lump sum payment of approximately $238.5 million in exchange for forgiveness of its remaining $502 million loan balance. The G&T and its six distribution cooperatives borrowed the $238.5 million from a private lender, the National Rural Utilities Cooperative Finance Corporation. The G&T had originally borrowed from RUS to build a two-unit coal-fired generating plant and to finance a coal mine that would supply fuel for the generating plant. The plant was built in anticipation of industrial development from the emerging shale oil industry. However, the growth in demand did not materialize and there was no market for the power. Although the borrower had its debt restructured in 1989, it still experienced financial difficulties due to a depressed power market. RUS and DOJ decided that the best way to resolve the matter was to accept a partial lump sum payment on the debt rather than force the borrower into bankruptcy. The total amount of debt written off for the entire RUS electricity loan portfolio between fiscal years 1992 and 1996 was about $1.05 billion (in constant 1996 dollars)—with $0.5 billion in additional write-offs in the early part of fiscal year 1997. It is probable that RUS will have additional loan write-offs and therefore that the federal government will incur further losses in the short term from borrowers that RUS management has identified as financially stressed. 
According to RUS reports, about $10.5 billion of the $22.5 billion in G&T debt is owed by 13 financially stressed G&T borrowers. Of these, 4 borrowers with about $7 billion in outstanding debt are in bankruptcy. The remaining 9 borrowers have investments in uneconomical generating plants and/or have formally requested financial assistance in the form of debt forgiveness from RUS. According to RUS officials, these plant investments became uneconomical because of cost overruns, continuing changes in regulations, and soaring interest rates. These investments resulted in high levels of debt and debt-servicing requirements, making power produced from these plants expensive. Since cooperatives are nonprofit organizations, little or no profit is built into their rate structure, which helps keep electricity rates as low as possible. However, the lack of retained profits generally means the cooperatives have little or no cash reserves to draw upon. Thus, when cash flow is insufficient to service debt, cooperatives must raise electricity rates and/or cut other costs enough to service debt obligations, or default on government loans. This was the scenario for the previously discussed write-offs in fiscal year 1996 and through July 31, 1997. Additional write-offs are expected to occur. For example, according to RUS officials, the agency may write off as much as $3 billion of the total $4.2 billion debt owed by Cajun Electric, a RUS borrower that has been in bankruptcy since December 1994. Cajun Electric filed for bankruptcy protection after the Louisiana Public Service Commission disapproved a requested rate increase and instead lowered rates to a level that reduced the amount of revenues available to Cajun to make annual debt service payments. 
Several factors contributed to Cajun’s heavy debt, including its investment in a nuclear facility which experienced construction cost overruns and its excess electricity generation capacity resulting from overestimation of the demand for electricity in Louisiana during the 1980s. In addition to the loans to financially stressed borrowers, RUS has loans outstanding to G&T borrowers that are currently considered viable by RUS but may become stressed in the future due to high production costs and competitive or regulatory pressures. We believe it is probable that the federal government will incur losses eventually on some of these G&T loans. We believe the future viability of these G&T loans will be determined based in part on the RUS cooperatives’ ability to be competitive in a deregulated market. To assess the ability of RUS cooperatives to withstand competitive pressures, we focused on the average revenue per kWh of 33 of the 55 G&T borrowers with about $11.7 billion of loans outstanding as of September 30, 1996. We excluded 9 G&Ts that only transmit electricity and the 13 financially stressed borrowers. Our analysis shows that for 27 of the 33 G&T borrowers, average revenue per kWh was higher in their respective regions than IOUs, and 17 of the 33 were higher than POGs. Additionally, as shown in figure 1, in 1995, RUS cooperatives’ average revenue per kWh was higher than IOUs in all of the eight primary North American Electric Reliability Council (NERC) regions in which the cooperatives operate. The relatively high average production costs indicate that the majority of G&Ts may have difficulty competing in a deregulated market. RUS officials told us that several borrowers have already asked RUS to renegotiate or write off their debt because they do not expect to be competitive due to high costs. However, RUS officials stated that they will not write off debt solely to make borrowers more competitive. 
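The comparison described above—each G&T's average revenue per kWh against nonfederal utilities in its region—amounts to a simple screening rule. A sketch with invented cooperative names, regions, and rates; only the screening logic mirrors the analysis described in the text:

```python
def flag_higher_cost(coops, iou_avg_by_region):
    """Return names of cooperatives whose average revenue per kWh exceeds
    the IOU average in their NERC region (hypothetical screening rule)."""
    return sorted(name for name, (region, rev_per_kwh) in coops.items()
                  if rev_per_kwh > iou_avg_by_region[region])

# Invented data: dollars per kWh by cooperative, and regional IOU averages.
coops = {"CoopA": ("SERC", 0.052), "CoopB": ("SPP", 0.038), "CoopC": ("SERC", 0.041)}
iou_avg = {"SERC": 0.045, "SPP": 0.041}
flagged = flag_higher_cost(coops, iou_avg)  # ["CoopA"]
```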
As with the financially stressed borrowers, some of the G&T borrowers currently considered viable have high debt costs because of investments in uneconomical plants. In addition, according to RUS officials, two unique factors cause cost disparity between the G&Ts and IOUs. One factor is the sparser customer density per mile for cooperatives and the corresponding high cost of providing service to the rural areas. A second factor has been the general inability to refinance higher cost FFB debt when lower interest rates have prevailed. However, RUS officials said that recent legislative changes that enable cooperatives to refinance FFB debt with a penalty may help align G&T interest rates with those of the IOUs. In the short term, G&Ts will likely be shielded from competition because of the all-requirements wholesale power contracts between each G&T and its member distribution cooperatives. With rare exceptions, long-term contracts obligate the distribution cooperatives to purchase all of their respective power needs from the G&T. In fact, RUS requires the terms of the contracts to be at least as long as the G&T loan repayment period. However, wholesale power contracts have been challenged recently in the courts by several distribution cooperatives because of the obligation to purchase expensive G&T power. According to RUS officials, one bankrupt G&T’s member cooperatives are currently challenging their wholesale power contracts in court in order to obtain less expensive power. RUS officials believe that the long-term contracts will come under increased scrutiny and potential renegotiation or court challenges as other sources of less expensive power become available. Wholesale rates under these contracts are currently set by a G&T’s board of directors with approval from RUS. In states whose commissions regulate cooperatives, the cooperative must file a request with the commission for a rate increase or decrease.
Several of the currently bankrupt borrowers were denied requests for rate increases from state commissions. However, RUS officials indicated they do not expect G&Ts to pursue rate increases as a means to recover their costs because of the recognition of declining rates in a competitive environment. RUS officials also acknowledge that borrowers with high costs are likely to request debt forgiveness as a means to reduce costs in order to be competitive in the future. As discussed above, denials of requested rate increases by state commissions culminated in several G&Ts filing for bankruptcy. Eighteen of the RUS G&T borrowers operate in states where regulatory commissions must approve rate increases. These commissions may deny a request for a rate increase if they believe such an increase will have a negative impact on the region. According to RUS officials, some commissions have denied a rate increase to cover the costs of projects that the commission had previously approved for construction. Therefore, G&Ts with high costs may be likely candidates to default on their RUS loans, even without direct competitive pressures. At September 30, 1996, the three PMAs had $5.4 billion of appropriated debt, and Western had an additional $1.6 billion of irrigation debt and about $165 million of nonfederal debt. The three PMAs market power that is substantially lower in cost than nonfederal utilities and thus, in the current operating environment, are competitively sound overall. However, all three PMAs have one or a few projects or rate-setting systems with problems that, taken as a whole, make risk of some loss to the federal government probable. As shown in figure 2, Southeastern, Southwestern, and Western have production costs that average more than 40 percent below IOUs and POGs in the primary NERC regions in which the PMAs operate. The three PMAs are low-cost marketers of power for several key reasons. 
First, the three PMAs market power produced primarily at hydropower dams built 30 to 60 years ago and run primarily by the operating agencies. These hydropower dams are currently a low cost energy source compared to coal and nuclear fuels, which are the primary energy sources used by other utilities. Another key advantage for the three PMAs is that as federal agencies, they generally do not pay taxes. In contrast, IOUs do pay taxes. According to the Energy Information Administration (EIA), IOUs paid taxes averaging about 14 percent of operating revenues in 1995. POGs, as publicly owned utilities, typically do not pay income taxes because they are units of state or local governments. However, many POGs do make payments in lieu of taxes to local governments. A study of 670 public distribution utilities showed that the POGs’ median net payments and contributions as a percent of electric operating revenue were 5.8 percent. Finally, as previously mentioned, the three PMAs did not recover nearly $185 million of costs in fiscal year 1996 associated with producing and marketing federal power. If Congress were to require the three PMAs to begin recovery of the net costs described earlier, or if competition drives electricity prices down significantly, the three PMAs’ competitive position could deteriorate. Because the three PMAs market power at prices that are substantially below those of other utilities, they generally have had little difficulty in selling all of the power that they produce. However, as discussed in detail in appendix VII of volume 2, each of the three PMAs has one or a few projects or rate-setting systems with problems that, taken as a whole, make the risk of some future losses to the federal government probable. In aggregate, these problem projects and rate-setting systems represent about $1.4 billion, or 19 percent of the federal government’s financial involvement in the three PMAs. 
BPA had over $17 billion of debt and over $766 million of interest expense as of and for the year ended September 30, 1996. These high fixed costs limited BPA’s flexibility to lower rates and contributed significantly to BPA’s loss of customers in recent years. However, as a result of existing customer contracts, a memorandum of agreement (MOA) limiting fish mitigation costs, and currently large financial reserves, we believe that the risk of any significant loss to the federal government from BPA is remote through fiscal year 2001. After 2001, expiration of customer contracts, significant risks from market uncertainties, BPA’s high fixed costs, and substantial upward pressure on operating expenses increase the risk of loss to the federal government. Despite a number of factors that mitigate this risk, we believe it is reasonably possible that the federal government will incur losses from BPA after fiscal year 2001. This risk will begin to decline after 2012, all else being equal, if BPA pays off its nonfederal debt as scheduled. In addition, one small project that serves BPA represents a probable loss to the federal government (see appendix VIII of volume 2). Three key factors have stabilized the government’s risk of loss relative to BPA through fiscal year 2001 and, in our view, make the risk remote for this time frame. First, in 1995-96, BPA signed contracts with its customers for the purchase of a substantial amount of power through fiscal year 2001. BPA projects that firm power sales to these customers will secure $1.14 billion annually through fiscal year 2001, or 63 percent of each year’s total projected power revenues. Second, BPA management entered into a MOA with various federal agencies that has limited its fish mitigation costs through fiscal year 2001. This agreement also created a contingency fund of $325 million consisting of past BPA nonpower fish mitigation expenditures. 
Finally, BPA has had strong water years in 1996 and so far in 1997 and estimates that it will have financial reserves of about $400 million at the end of fiscal year 1997. In addition, the $325 million fish cost contingency fund is available under specified circumstances. After fiscal year 2001, BPA faces the expiration of customer contracts, significant market uncertainties, high fixed costs, and significant upward pressure on operating expenses. Nearly all of BPA’s power contracts with customers expire at the end of fiscal year 2001. If these customers can find power cheaper than BPA can offer, they might opt to leave BPA. One of the key market uncertainties that will determine whether cheaper power will be available is the future production cost of gas-fired generation plants. This generation source has become increasingly competitive due to low natural gas prices and improving gas turbine technology. Natural gas prices in the Pacific Northwest are low due to several factors, including a large supply coming from Canada. Also, recent technology advances have improved the efficiency of gas turbines by more than 50 percent. According to BPA, natural gas-generated power has driven down the price of wholesale electricity and resulted in customers leaving or obtaining some of their power at rates well below BPA’s current rate. According to BPA, a surplus of power on the west coast is also driving down the price of wholesale power. Because utilities are still able to pass on fixed costs to captive retail customers, surplus wholesale power is being sold on a marginal cost basis. According to BPA, other utilities and power marketers are offering wholesale power at as low as 1.5 cents per kWh, which is lower than BPA’s price for sales of comparable products of 2.14 cents per kWh. It is uncertain whether surplus power and low cost natural gas generation will continue to drive down wholesale power prices after fiscal year 2001. 
It is also uncertain what impact retail open access will have on BPA’s competitive position. Retail open access—which would provide retail consumers freedom to choose among suppliers—could result in BPA’s wholesale customers being uncertain about the size of their own future power needs. These power needs will be directly impacted by retail customers being able to choose their supplier. BPA’s customers may be hesitant to sign long-term contracts to purchase power from BPA to the extent they face uncertainty about future power needs. However, even without long-term contracts, BPA is likely to remain a major supplier. Most states and the Congress are considering various proposals regarding the approach to retail open access. BPA’s substantial fixed costs will continue to inhibit its flexibility to lower its rates and meet competitive pressures. For example, 32 percent of BPA’s revenue went to pay financing costs in fiscal year 1996—substantially more than the nationwide average of 14 percent for IOUs and 18 percent for POGs. BPA will continue to face high fixed costs after fiscal year 2001 relating to its $17 billion of debt. BPA will also face upward pressure on its operating expenses after fiscal year 2001. The most significant of these operating expenses is fish mitigation. It is uncertain whether an agreement similar to the current MOA will be possible after expiration of the present one. Without this agreement, BPA is at risk of escalating costs after fiscal year 2001 if additional funds for fish measures beyond those planned at this time are needed. BPA also faces new or additional costs after 2001. First, it plans to implement a phased-in approach to recover the full cost of pension and postretirement health benefits in fiscal year 1998 but will defer full recovery until fiscal year 2002, when $55 million will be due. To completely recover obligations for fiscal years 1998 through 2001, an additional $35 million will be due in fiscal year 2003. 
Other new or additional costs that will be incurred after fiscal year 2001 include $806 million of irrigation debt payments and $396 million in payments to the Confederated Tribes of the Colville Reservation for their share of Grand Coulee Dam revenues. These costs, which are discussed further in appendix VIII of volume 2, will be paid out over several decades. Several factors mitigate the federal government’s risk of future losses relative to BPA. These factors include certain inherent cost advantages, management actions to reduce operating costs, and an extensive transmission system. We believe that these factors reduce the risk of loss to the federal government after 2001, but that the risk of loss is still reasonably possible. Additionally, BPA is scheduled to have nearly all of its nonfederal debt paid off by 2019, with a substantial decrease in debt service beginning in 2013. If BPA is able to make these payments as scheduled, all else being equal, its fixed financing costs would be more in line with those of its competitors. This would reduce the risk to the federal government. As shown in figure 3, BPA’s 1995 average revenue per kWh was more than 15 percent lower than that of IOUs and POGs in the primary NERC region (Western Systems Coordinating Council) in which BPA operates. As previously mentioned, BPA is facing significant competition today. However, BPA believes that its average production costs are lower than those of others in the Pacific Northwest, as shown in figure 3. If the supply of surplus power dries up and gas generation costs rise, as BPA believes will happen, BPA’s low average production costs should improve its long-term competitive position. This long-term position will be further improved after 2012, if BPA repays its nonfederal debt as scheduled. BPA has comparatively low average production costs because of certain inherent cost advantages over nonfederal utilities. 
As previously mentioned, BPA did not recover nearly $400 million of costs associated with producing and marketing federal power. In addition, the hydroelectric plants that generate the power marketed by all the PMAs have cost advantages over coal and nuclear generating plants, which generate over 81 percent of the electricity in the United States. BPA’s hydroelectric plants, which were built decades ago, had relatively low construction costs compared to the newer construction of nonfederal utilities. Another key advantage for BPA is that, like the other three PMAs, it generally does not pay taxes. Furthermore, interest income to bondholders from BPA’s nonfederal debt is exempt from federal personal income tax and some state income taxes. BPA management has taken significant steps in the last several years to react to the intense wholesale electricity competition in the Pacific Northwest. According to BPA, it reduced its staff from about 3,755 in March 1994 to 3,160 by the end of fiscal year 1996. An additional reduction to 2,755 is planned by fiscal year 1999. In addition, over the last several years, BPA has refinanced much of its Treasury bond and nonfederal debt to keep its interest expense as low as possible. According to BPA, these staffing and other cost savings will reduce planned expenses by an average of $600 million per year during fiscal years 1997 through 2001 and allow for a 13 percent rate decrease for those years. BPA also has an extensive transmission system that comprises about 75 percent of the bulk power transmission capacity in the Pacific Northwest. BPA has advised us that if it is unable to sell its power at a level that recovers all costs, it might be able to use this transmission system to help recover stranded costs. This could involve allocating stranded generation costs, in whole or in part, to transmission charges. 
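BPA's fixed-cost burden discussed earlier (financing costs consuming 32 percent of revenue in fiscal year 1996, versus averages of 14 percent for IOUs and 18 percent for POGs) can be illustrated with simple arithmetic. The sketch below, in Python, uses the percentages and the $766 million fiscal year 1996 interest expense cited in this report; treating interest expense as a proxy for total financing costs, and back-solving a revenue figure from the reported ratio, are our simplifying assumptions, not reported numbers.

```python
# Illustrative only: the revenue figure is implied (interest / ratio), not reported,
# and interest expense is used here as a proxy for total financing costs.
bpa_interest_expense = 766e6   # BPA FY 1996 interest expense (from this report)
bpa_financing_ratio = 0.32     # financing costs as a share of revenue (from this report)
iou_avg_ratio = 0.14           # nationwide IOU average (from this report)
pog_avg_ratio = 0.18           # nationwide POG average (from this report)

# Revenue implied by the reported ratio -- an assumption for illustration.
implied_revenue = bpa_interest_expense / bpa_financing_ratio

# How many times BPA's financing burden exceeds the IOU and POG averages.
excess_vs_iou = bpa_financing_ratio / iou_avg_ratio
excess_vs_pog = bpa_financing_ratio / pog_avg_ratio

print(f"Implied revenue: ${implied_revenue / 1e9:.2f} billion")
print(f"BPA financing burden vs IOUs: {excess_vs_iou:.1f}x; vs POGs: {excess_vs_pog:.1f}x")
```

On these assumptions, BPA carries roughly twice the financing burden of an average IOU, which is the flexibility constraint the text describes.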
At September 30, 1996, TVA had $27.9 billion of debt and $6.3 billion of deferred assets, which leaves TVA with far more financing costs and deferred assets than its potential competitors. However, we believe that as long as TVA remains in a protected position similar to a traditional regulated utility monopoly, the risk of loss to the federal government is remote. If this position changes and TVA is required to compete when wholesale prices are expected to be falling, its high level of fixed costs and deferred assets compared to neighboring utilities increase the risk that the federal government would incur future losses. Despite a number of factors that mitigate this risk, it is reasonably possible under this scenario that the federal government would incur future losses related to TVA. TVA has two key items that protect it from competition and result in TVA operating like a traditional regulated utility monopoly in its service area. First, contracts with TVA’s distributors (except for Bristol, Virginia) automatically renew each year and require that at least 10 or 15 years’ notice be given before they can switch to another power company. Second, TVA is exempt from the wheeling provisions of the Energy Policy Act of 1992. This exemption generally prevents other utilities from using TVA’s transmission system to sell power to customers inside TVA’s service area. TVA’s regulated monopoly-type position enables it to set its rates at whatever price is necessary to recover its costs. However, TVA has chosen to defer costs related to its substantial nuclear investment to future years rather than currently including them among the costs being recovered from ratepayers. As a result, TVA had accumulated about $28 billion of debt as of September 30, 1996, which resulted in over $2 billion of interest expense in fiscal year 1996. 
By not recovering these costs from ratepayers in prior years and using the cash to pay off debt, TVA has developed a high level of fixed costs and deferred assets, which will leave it vulnerable to future competition if it loses its protections. This is similar to the situation BPA faced when its high fixed costs limited its flexibility to meet competitive challenges when electricity prices fell sharply in the Pacific Northwest. However, unlike TVA, BPA has no deferred nuclear assets. Our analysis shows that for fiscal year 1996, TVA’s ratio of financing costs to revenue was more than twice the average for 11 neighboring utilities, and its ratio of fixed financing costs to revenue was almost five times higher. These two ratios clearly show that because of high financing costs, TVA does not have the same level of flexibility as neighboring IOUs to lower prices to meet price competition. Additionally, as TVA’s debt matures, the portion that is not repaid will likely need to be refinanced, thus exposing TVA to the risk of rising interest rates and even higher financing costs. However, if interest rates decline, TVA’s financing costs would decrease. TVA has deferred $6.3 billion in costs associated with its Bellefonte units 1 and 2 and Watts Bar unit 2, which are currently in “mothballed” status. TVA is treating these assets similar to construction work-in-progress, with the costs not being recovered from ratepayers. In aggregate, TVA has spent over $26 billion on nuclear plants, which were primarily debt financed. Most of these costs have not yet been recovered from ratepayers. Other utilities have been preparing for competition by writing down their uneconomical assets at a much faster rate than TVA. As a result, these utilities have been recovering costs at a much greater pace than TVA and thus will have greater financial flexibility in the future. 
To demonstrate the magnitude of TVA’s deferral of costs, we compared TVA’s rate of depreciation and its cost deferral to those of neighboring utilities. First, TVA’s ratio of accumulated depreciation and amortization to gross property, plant, and equipment (PP&E) was about 18 percent as of September 30, 1996, compared to about 36 percent for the 11 neighboring utilities. This ratio shows that, in percentage terms, other utilities have already recovered twice as much of their capital investments as TVA. Second, TVA’s deferred assets as of September 30, 1996, were nearly 20 percent of its gross PP&E, compared to about 3 percent for the IOUs. This ratio clearly shows that TVA’s deferral of $6.3 billion of costs is unique and out of line with neighboring utilities. TVA’s ability to recover its substantial capital costs in a competitive environment is uncertain. TVA’s vulnerability to wholesale competition without protections was recently demonstrated when one of its customers, the Bristol Virginia Utilities Board, announced that it will leave the TVA system for Cinergy, Inc., in January 1998. Cinergy offered firm wholesale power at 2.59 cents per kWh for 7 years, 40 percent lower than TVA’s comparable wholesale rate of 4.3 cents per kWh. Bristol, which is on the border of TVA’s service area, was able to purchase this power because it had given TVA written notice of its intent to cancel its power contract and had received a unique exemption in the Energy Policy Act of 1992, which allows other utilities to transmit (wheel) electricity to Bristol over TVA’s power lines. While we recognize that Cinergy may have offered this power to Bristol at a price representing its marginal costs, TVA could face this type of competitive situation regularly if it were to lose its protections from competition. Several factors mitigate the government’s risk of future loss relative to TVA. 
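The two ratios used in the comparison above can be reproduced with simple arithmetic. In the Python sketch below, the percentages and the $6.3 billion of deferred assets come from this report; the gross PP&E figure is back-solved from the reported 20 percent ratio and is therefore an implied number, our assumption rather than a reported amount.

```python
# Illustrative sketch of the two ratios compared in the text. TVA's deferred
# assets ($6.3 billion) are reported; gross PP&E is back-solved from the
# reported ~20 percent ratio and is therefore an assumption, not a reported figure.
tva_deferred_assets = 6.3e9
tva_deferred_ratio = 0.20                    # deferred assets / gross PP&E (reported)
tva_gross_ppe = tva_deferred_assets / tva_deferred_ratio   # implied, ~$31.5 billion

tva_depreciation_ratio = 0.18                # accumulated D&A / gross PP&E (reported)
neighbors_depreciation_ratio = 0.36          # 11 neighboring utilities (reported)
neighbors_deferred_ratio = 0.03              # IOU deferred assets / gross PP&E (reported)

# Neighbors have recovered twice as large a share of capital investment...
recovery_gap = neighbors_depreciation_ratio / tva_depreciation_ratio
# ...while TVA's deferrals are several times larger relative to its asset base.
deferral_gap = tva_deferred_ratio / neighbors_deferred_ratio

print(f"Implied TVA gross PP&E: ${tva_gross_ppe / 1e9:.1f} billion")
print(f"Capital recovery gap: {recovery_gap:.1f}x; deferral gap: {deferral_gap:.1f}x")
```

The roughly 2-to-1 recovery gap and 6-to-1 deferral gap are what the text means by TVA's deferral being "out of line with neighboring utilities."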
These factors include certain inherent cost advantages; management actions to increase revenue, cut operating expenses, and reduce debt; and an extensive transmission system. We believe these factors reduce the risk of loss to the federal government, but the risk of loss is still reasonably possible. TVA has several inherent cost advantages because it is a federal government corporation. First, TVA’s debt receives the highest possible rating from the bond rating services. According to these services, TVA’s creditworthiness is based primarily on its links to the federal government rather than on the criteria applied to a stand-alone corporation. As a result, the private lending market has provided TVA with access to billions of dollars of financing at favorable interest rates. One of the major bond rating services believes, and we concur, that without the links to the federal government, TVA would have a lower bond rating and higher cost of funds. Additionally, interest income for TVA’s bondholders is generally exempt from state income taxes, which further lowers TVA’s cost of funds. TVA is exempt from paying income taxes, unlike its neighboring IOUs. Therefore TVA, as a nonprofit entity, does not have to generate the net income that an IOU would need to cover income taxes and provide for an expected rate of return. However, the TVA Act requires TVA to make payments in lieu of taxes to state and local governments where power operations are conducted. The base amount TVA is required to pay is 5 percent of gross revenues from the sale of power to other than federal agencies during the preceding year. This amounted to about $256 million in fiscal year 1996. In addition, according to TVA, its distributors are required to pay various state and local taxes that amounted to about $125 million, or about 2 percent of the total fiscal year 1995 operating revenues of TVA and the distributors. According to EIA, IOUs pay about 14 percent of gross revenues for taxes. 
Another cost advantage is that TVA generates significantly more hydroelectric power than other utilities in the region and purchases hydropower from Southeastern at less than 1 cent per kWh. TVA’s hydropower dams generate about 11 percent of TVA’s power with a relatively low capital investment of about $1.3 billion; on the average, other utilities nationwide generate only about 6 percent of their electricity with hydropower. TVA management has taken significant steps to reduce its expenses. According to TVA, it reduced its staff from about 34,000 in 1988 to about 16,000 in 1996 and plans further reductions in 1997. In addition, TVA has refinanced its debt to keep its interest expense as low as possible. The completion of TVA’s Watts Bar 1 and restarting of its Browns Ferry 3 nuclear power units—a major reason for TVA’s increasing debt in recent years—is another important step. According to TVA, it has internally capped its debt at about $28 billion and plans to finance its future capital expenditures from operations. These plans and actions are consistent with those of IOUs in preparation for competition. On July 22, 1997, TVA released a 10-year business plan that identifies actions it plans to take to position its power operations to meet the challenges from the coming restructured marketplace. This plan calls for TVA to (1) increase power rates enough to increase annual revenues by about 5.5 percent ($325 million), (2) limit annual capital expenditures to $595 million, (3) reduce debt by about 50 percent from $27.9 billion as of September 30, 1996, to $13.8 billion by fiscal year 2007, and (4) reduce its total cost of power by about 16 percent by fiscal year 2007. To the extent TVA is able to use the cash generated from increasing rates, reducing expenses, and capping future capital expenditures to pay down debt, the risk of loss to the federal government will be reduced. 
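The headline figures in TVA's July 1997 ten-year plan can be cross-checked with simple arithmetic. The sketch below uses only numbers cited in this report; the straight-line paydown schedule and the fiscal year 1997-2007 span are our simplifying assumptions, since the plan as described here does not specify an annual schedule.

```python
# Rough consistency check of TVA's ten-year plan figures as reported in the text.
# The straight-line paydown and the 11-year span are simplifying assumptions.
debt_1996 = 27.9e9            # debt at September 30, 1996 (reported)
debt_target_2007 = 13.8e9     # planned debt by FY 2007 (reported)
plan_years = 11               # FY 1997 through FY 2007 inclusive (assumed span)

reduction = debt_1996 - debt_target_2007
reduction_share = reduction / debt_1996        # ~50 percent, as the plan states
avg_annual_paydown = reduction / plan_years    # assumed straight-line schedule

revenue_increase = 325e6                       # planned annual revenue increase (reported)
revenue_increase_share = 0.055                 # stated as about 5.5 percent (reported)
implied_base_revenue = revenue_increase / revenue_increase_share  # implied, ~$5.9 billion

print(f"Debt reduction: ${reduction / 1e9:.1f} billion ({reduction_share:.0%})")
print(f"Assumed average annual paydown: ${avg_annual_paydown / 1e9:.2f} billion")
print(f"Implied annual power revenues: ${implied_base_revenue / 1e9:.1f} billion")
```

The implied paydown of well over $1 billion per year shows why the plan depends on the rate increase, the expense reductions, and the capital spending cap operating together.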
In addition to the above planned actions, the plan calls for TVA to change the length of the wholesale power contracts with its distributors from a rolling 10-year term to a rolling 5-year term beginning 5 years after the amendment. However, reducing the length of the wholesale contracts with its distributors could increase the risk of loss to the federal government. A final mitigating factor is TVA’s extensive transmission system, which covers nearly 100 percent of the transmission service available in its service area. If TVA is exposed to competition and is unable to sell its power at a level that recovers all costs, it may be able to use its transmission system to recover some stranded costs. As agreed with your offices, we did not estimate the forgone revenue for federal, state, or local governments resulting from the tax-exempt status of the RUS borrowers, the PMAs, or TVA; estimate the forgone revenue for federal and state governments resulting from tax-exempt debt instruments issued by TVA or related to Western or BPA’s nonfederal debt; assess the reasonableness of the methodologies used by the operating agencies to allocate power-related costs to the PMAs for recovery; or quantify the amount of potential future losses to the federal government. A detailed discussion of our objectives, scope, and methodology, including additional items not included in the scope of our review, is contained in appendix II of volume 2 to this report. Appendix II also includes detailed explanations of the calculations of various estimates used in the report and the criteria we used to assess cost recovery and the likelihood that the federal government will incur future losses relating to RUS, the PMAs, and TVA. When appropriate, we used audited numbers from RUS, RUS’ borrowers, PMA, and TVA fiscal years 1996, 1995, and earlier financial statements included in their annual reports. 
We conducted our review from January 1997 through July 1997 in accordance with generally accepted government auditing standards. We received written comments on a draft of this report from USDA, the three PMAs, BPA, and TVA. These comments are discussed in the following section and are reprinted in volume 2, appendixes X through XIII. We also received technical oral comments from the Corps of Engineers and the Bureau of Reclamation. We evaluated their comments and incorporated changes, where appropriate, into volumes 1 and 2 of our final report. The comments from USDA, the three PMAs, BPA, and TVA generally focused on our analysis of net financing costs and the federal government’s risk of future financial losses related to the electricity-related activities of these entities. All of these entities generally disagreed with our estimates of their net financing costs. In addition, they disagreed with our assessment of the federal government’s risk of future financial losses related to their electricity-related activities. USDA, the three PMAs, and BPA took issue, for varying reasons, with our estimate of net financing costs. USDA disagreed with our use of the portfolio methodology in estimating net financing costs on RUS outstanding federal debt. It noted that our analysis resulted in larger estimates of net financing costs to the federal government than the estimates obtained in USDA’s application of the credit reform methodology that were discussed in our April 1997 report. As we stated in our current report, the majority of outstanding RUS electricity loans and guarantees, approximately 90 percent, were made prior to 1991 and therefore are not required to be reported under credit reform. Additionally, because the USDA Inspector General deemed the RUS credit reform estimates unreliable, we chose to use actual costs incurred rather than any credit reform estimates for our analysis. The three PMAs and BPA disagreed with our estimate of the net financing costs. 
While the comments regarding the net financing cost estimates were not consistent and in one instance contradictory, two broad common issues were raised: (1) disagreement with our use of the portfolio methodology for estimating the net financing costs to the federal government for appropriated debt, including the use of the weighted average interest rate on outstanding long-term Treasury bonds, and (2) the assertion that the PMAs’ appropriated debt is analogous to a mortgage loan. The three PMAs stated that they believe the use of the portfolio methodology assumes that both the PMA interest rate and Treasury’s cost of funds are variable, so the cost difference on any individual investment varies from year to year. They stated that this is equivalent to assuming that the PMA appropriated debt should be refinanced annually. The three PMAs stated that comparing the interest rates assigned to PMA financings to Treasury rates in the years the financings were provided (loan-by-loan methodology) would be a more accurate way of determining the net financing cost. BPA also suggested a loan-by-loan approach, stating that determining the cost to Treasury of providing BPA’s financing should be done “on the basis of an assessment of each loan incrementally, as a commercial lender would do.” Finally, the three PMAs and BPA disagreed with our using the interest rate on Treasury’s outstanding bond portfolio to estimate net financing costs on outstanding appropriated debt. As discussed in appendix II of volume 2 of this report, we defined the net financing cost to the federal government as the difference between Treasury’s borrowing cost or interest expense and the interest income received from RUS borrowers, the PMAs, and TVA. Our basic methodology was to determine whether the federal government received a return sufficient to cover its borrowing costs and, if not, to estimate the net financing cost. 
RUS, the PMAs, and TVA had several forms of federal debt outstanding at September 30, 1996. Each of these forms of federal debt had different terms and thus required us to apply variations of our basic methodology in assessing whether there was a net financing cost and, if so, estimating the amount. For the PMAs’ appropriated debt, the portfolio methodology best captures the combined impact of the four distinct aspects of the net financing cost that we identified: (1) the difference between the PMAs’ borrowing rate and Treasury’s borrowing rate for securities of similar maturity at the time the appropriation was made (interest rate spread), (2) the PMAs’ ability to repay the highest interest rate debt first (prepayment option), (3) the interest rate risk arising from Treasury’s general inability to refinance or prepay outstanding debt in times of falling interest rates (Treasury borrowing practices), and (4) the difference in the maturities of the three PMAs’ and BPA’s appropriated debt and Treasury’s bonds (maturity differential). The loan-by-loan methodology suggested by the three PMAs and BPA is limited in that it captures only that portion of the net financing cost arising from the interest rate spread and not the other three aspects of that cost. We noted in appendix II of volume 2 of this report that as a comparison to our portfolio analyses, we did perform loan-by-loan assessments to estimate the net financing cost to the federal government for one of the three PMAs—Southwestern—as well as for BPA and RUS. In our loan-by-loan analyses, we attempted to match the PMAs’ appropriated debt and RUS federal debt with Treasury borrowing. In these analyses, we assumed that to provide financing for up to 50 years for a PMA project and 40 years for RUS debt, Treasury had to borrow an equivalent amount via the sale of long-term bonds. 
Because Treasury does not generally borrow for more than 30-year terms, in the loan-by-loan analyses, we also assumed that Treasury had to refinance each borrowing to extend the financing to the PMAs or RUS borrowers for the remainder of the terms of the debt. Our loan-by-loan analyses resulted in a net financing cost for fiscal year 1996 that was higher than under the portfolio methodology for two of the three entities (BPA and Southwestern) and the same for the third (RUS). For BPA, the net financing cost for fiscal year 1996 was about $445 million under the loan-by-loan analysis (versus $377 million under the portfolio analysis), for Southwestern it was about $54 million (versus $42 million under the portfolio analysis), and for RUS it was about $874 million (the same as under the portfolio analysis). The criticism of our use of the portfolio methodology is also inconsistent with another BPA comment asserting that a portfolio methodology estimate of net financing costs using a lower Treasury interest rate—the rate on Treasury’s entire portfolio of outstanding marketable securities, including short-term securities—would be appropriate. BPA stated that in using only long-term Treasury debt to gauge Treasury’s cost of funds for appropriated debt, our report inflates Treasury’s true cost of funds and, therefore, the net cost to the government of BPA’s operations. BPA stated that a more appropriate measure of Treasury’s cost of carrying this debt is Treasury’s composite rate for all marketable interest-bearing debt, which was about 6.7 percent at the end of fiscal year 1996. The composite interest rate that BPA proposed includes recently issued short-term Treasury bills and some notes with maturities of only several months. Using this composite interest rate that includes short-term securities is inappropriate because it would match short-term Treasury borrowing costs with long-term PMA appropriated debt. 
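The computation at the core of this dispute, as we define it, is the difference between Treasury's interest expense on a comparable amount of debt and the interest income Treasury receives. The minimal Python sketch below illustrates that portfolio-style calculation with hypothetical round-number rates and a hypothetical debt amount; these are not the figures behind the estimates cited above, and the sketch deliberately omits the prepayment-option, Treasury-borrowing-practice, and maturity-differential effects discussed in the text.

```python
# Minimal sketch of the portfolio-style net financing cost calculation described
# in the text: Treasury's interest expense on a comparable amount of debt minus
# the interest income received from the agency. All inputs below are hypothetical.
def net_financing_cost(outstanding_debt, treasury_portfolio_rate, agency_rate):
    """Annual net cost to Treasury of carrying the agency's debt."""
    treasury_interest_paid = outstanding_debt * treasury_portfolio_rate
    agency_interest_received = outstanding_debt * agency_rate
    return treasury_interest_paid - agency_interest_received

# Hypothetical example: $10 billion outstanding, Treasury's bond portfolio
# averaging 7 percent, the agency paying a weighted average of 3.5 percent.
cost = net_financing_cost(10e9, 0.07, 0.035)
print(f"Annual net financing cost: ${cost / 1e6:.0f} million")
```

A loan-by-loan approach would instead pair each financing with the Treasury rate prevailing in the year it was provided, which, as the text notes, captures only the interest rate spread and not the other three aspects of the cost.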
Because Treasury’s bond portfolio includes debt issued over the last several decades, the average interest rate on this portfolio is a reasonable approximation of the federal government’s cost of funds relating to the PMAs’ appropriated debt, which also was incurred over the last several decades. Moreover, the interest rate BPA is to pay on its appropriated debt under the Omnibus Consolidated Rescissions and Appropriations Act of 1996 supports our position that a long-term Treasury rate is the correct rate to use in our portfolio analysis. Under this act, that interest rate is based on long-term Treasury bond interest rates. The three PMAs and BPA also asserted that appropriated debt is analogous to fixed-rate mortgage loans issued by a commercial lender. The three PMAs stated that their concern over our estimate of net financing costs might be best explained by using a mortgage loan example. They stated that a fixed interest rate is assigned to each investment that the PMAs’ customers are required to repay, just as a homeowner receives a fixed rate mortgage from a lender. They further stated that to assert that the PMAs impose a net cost to Treasury in a year in which market interest rates have risen above the interest rates on the PMAs’ appropriated debt is equivalent to saying that the homeowner imposes a net cost on a lender whenever market rates for home loans rise above the homeowner’s fixed mortgage rate. Similarly, BPA stated that a 30-year fixed rate mortgage entered into in a year with low interest rates would not result in a cost to the lender simply because interest rates increased over time. We do not agree that the PMAs’ financing is analogous to a mortgage lending situation for several reasons. First, in a mortgage-type lending arrangement, if the lender wants to remain in business, it establishes a spread between the rate charged the borrower and the rate it must pay for the capital it lends. 
In the case of the PMAs’ appropriated debt, the PMAs do not pay higher interest rates than the interest rates Treasury pays on its bonds. On the contrary, in most instances the rates the PMAs paid on currently outstanding appropriated debt were significantly lower than the rates Treasury paid when the financings were provided. In addition, the PMAs do not pay any transaction fees (for example, points or closing costs) associated with the financings, which homeowners generally pay. The highest interest rate the PMAs are subject to for new financing is based on the rates on long-term Treasury securities issued the previous year, which generally have maturities of 30 years or less even though the repayment periods for the PMAs’ appropriated debt are up to 50 years. No attempt is made to charge a differential or take into account the greater risk of having appropriated debt outstanding for 50 years; in contrast, 30-year mortgages have higher interest rates than 15-year mortgages. Also, the PMAs are able to receive interest rates based on Treasury bonds that are “risk-free.” If the PMAs were required to obtain financing in the private market, without any implicit or explicit federal guarantee, they would likely pay interest rates higher than the risk-free Treasury rate. Furthermore, a mortgage lender typically requires that borrowers repay their loans, including principal and interest payments, on a fixed schedule, while the PMAs are not required to make fixed principal payments. Instead, the PMAs’ appropriated debt is similar to a balloon loan that is due in full at the end of the term—up to 50 years for the PMAs. The PMAs are required to repay debt with the highest interest rate first to minimize interest expense. Since the PMAs’ interest expense is minimized, this requirement minimizes interest income to Treasury and maximizes Treasury’s interest rate risk. 
The PMAs currently have debt outstanding from decades ago at extremely low and outdated interest rates and upon which no principal has been paid. If the PMAs’ appropriated debt had been paid back like a mortgage, their current weighted-average interest rates would be far higher. The result of this type of arrangement, along with Treasury’s general inability to call its outstanding bonds, is that the interest income Treasury receives on the PMAs’ appropriated debt is considerably less than the interest Treasury pays to bondholders on a comparable amount of Treasury debt. We are not aware of any mortgage lender who would be able to remain in business over the long term if it operated similarly. BPA stated that our draft report disregards the fact that the interest charges BPA pays on its appropriated debt were determined years ago using interest rates prevailing at that time. While the interest rates assigned to some Federal Columbia River Power System (FCRPS) appropriations approximated Treasury’s long-term interest rate, BPA’s statement is not factually accurate. Over the last 4 decades, BPA has incurred substantial debt at below-Treasury interest rates, as shown by the following examples: BPA incurred over $250 million in appropriated debt in 1969 at an interest rate of 2.5 percent when Treasury’s long-term bond rate was 6.67 percent; less than 1 percent of this has been repaid. BPA incurred over $250 million in appropriated debt at 2.5 percent in 1975 when Treasury’s long-term bond rate was 7.99 percent; less than 1 percent of this has been repaid. BPA incurred over $399 million in appropriated debt at 3.25 percent in 1982 when Treasury’s long-term bond rate was 12.76 percent; none of this has been repaid. 
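The below-Treasury rates in the examples above translate into a low weighted-average rate on outstanding appropriated debt. A minimal sketch of a weighted-average interest rate calculation follows; the tranches loosely echo the borrowings described above but are treated here as illustrative round numbers, not exact balances.

```python
# Hedged sketch: weighted-average interest rate across tranches of
# outstanding debt. Principal amounts are illustrative approximations.

tranches = [
    (250_000_000, 0.0250),  # illustrative 1969-style borrowing at 2.5%
    (250_000_000, 0.0250),  # illustrative 1975-style borrowing at 2.5%
    (399_000_000, 0.0325),  # illustrative 1982-style borrowing at 3.25%
]

total_principal = sum(principal for principal, _ in tranches)
weighted_rate = sum(principal * rate for principal, rate in tranches) / total_principal
print(f"Weighted-average rate: {weighted_rate:.2%}")
```

Because no principal has been repaid on these old, low-rate tranches, they continue to dominate the weighted average decades later.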
BPA currently has outstanding appropriated debt bearing interest at 2.5 percent that was borrowed as recently as 1992 when Treasury’s long-term bond rate was 7.67 percent, and 3.125 and 3.25 percent debt incurred as recently as 1990 when long-term Treasury bond rates were 8.61 percent. In addition, BPA stated that we were inconsistent in assessing the net financing cost for BPA and TVA appropriated debt because we used a 9.0 percent interest rate to assess BPA’s net financing cost but used a 6.87 percent interest rate to determine the federal government’s net cost of providing financing to TVA. We disagree with BPA’s assessment. Since the terms of BPA’s and TVA’s appropriated debt differ markedly, it is reasonable to reflect this in assessing the net financing cost to the federal government. TVA is generally required to make payments on its outstanding principal balance every year, whereas BPA is required to pay outstanding principal only in the year of maturity. Also, the interest rate on TVA’s appropriated debt is revised annually to reflect the cost to Treasury of providing the financing. In contrast, BPA is allowed to repay appropriated debt with the highest interest rate first and keep appropriated debt with a low interest rate on the books for decades. Since TVA appropriated debt is in effect refinanced annually, it can reasonably be assigned an interest rate based on Treasury’s composite interest rate on all outstanding marketable securities, which includes short-term securities. Moreover, the different terms result in different exposures to interest rate risk. TVA bears interest rate risk in that if Treasury’s interest rates rise, TVA’s interest expense rises. In contrast, once BPA’s interest rates are assigned, they remain the same over the life of the debt. As a result, BPA bears interest rate risk only in the unlikely event that Treasury rates fall below BPA’s weighted-average interest rate of 3.5 percent. 
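The contrast in interest rate risk described above can be sketched with hypothetical numbers: TVA's rate resets annually with Treasury's cost of funds, while BPA's rate is fixed when the debt is assigned. All figures below are hypothetical illustrations, not report data.

```python
# Hedged sketch contrasting the two interest-rate-risk profiles discussed
# above. Principal and the rising rate path are hypothetical.

principal = 1_000_000_000
treasury_rates = [0.05, 0.06, 0.07, 0.08]  # hypothetical rising Treasury rates

# TVA-style debt: the rate is revised annually, so interest expense tracks
# Treasury's cost of funds.
tva_interest = [principal * rate for rate in treasury_rates]

# BPA-style debt: the rate stays fixed (here, 3.5 percent) for the life of
# the debt, so Treasury bears the risk when market rates rise.
bpa_interest = [principal * 0.035 for _ in treasury_rates]

for year, (tva, bpa) in enumerate(zip(tva_interest, bpa_interest), start=1):
    print(f"Year {year}: TVA-style ${tva / 1e6:.0f}M vs fixed-rate ${bpa / 1e6:.0f}M")
```

As the hypothetical rates rise, the annually reset expense rises with them while the fixed-rate expense does not, which is why the borrower bears the rate risk in the first case and the federal government bears it in the second.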
For example, in 1982 (because of high inflation and resultant high interest rates), TVA’s weighted-average interest rate on its appropriated debt was over 12 percent while BPA’s was approximately 3.3 percent. Moreover, TVA’s appropriated debt currently carries a weighted-average interest rate of 6.87 percent, while BPA’s weighted-average rate is 3.5 percent. BPA also stated that we inadequately addressed the restructuring of BPA’s appropriated debt and that we “hint” that BPA has an imbedded interest rate advantage that the Congress has ignored. In making this observation, BPA appears to be suggesting that the restructuring of its appropriated debt has permanently eliminated any net financing cost to the federal government. We do not agree. Under the terms of BPA’s appropriated debt restructuring, more than $2.5 billion of appropriated debt will be written off in exchange for increasing the interest rates on BPA’s revised appropriated debt balance to market interest rates. BPA will also pay an additional $100 million over the remaining terms of the debt. Other than this $100 million, the net cash flow to Treasury is essentially unchanged as a result of the restructuring. We acknowledge that Treasury will receive $100 million more under the restructured repayment plan than under the existing arrangement if BPA pays off the debt when it matures. However, this $100 million is less than one-third of the $377 million net financing cost we estimate that Treasury incurred in 1996 alone. This net negative cash flow to the federal government will continue as long as the appropriated debt and the corresponding Treasury debt are outstanding. TVA suggested that our report include an income item of approximately $100 million in our presentation of “annual costs to the government.” It contends that this amount represents what TVA pays Treasury each year in excess of the government’s current cost of financing TVA’s Federal Financing Bank (FFB) loans. We disagree. 
Treasury’s current interest rate is not an appropriate measure of its cost of financing loans issued in the past. Rather, the interest rates in effect at the time the loans were issued represent Treasury’s cost. Because FFB is charging TVA the long-term borrowing rate of similar Treasury debt at the time the loan was made, the federal government is receiving a return sufficient to cover its borrowing costs. If TVA is permitted to refinance these loans without penalty, the federal government will suffer a significant loss. This loss represents the difference between the interest rate at the time of the borrowing and the interest rate on current debt Treasury could avoid incurring today. In 1995, the Congressional Budget Office (CBO) was asked to review proposed legislation that would have authorized TVA to prepay the $3.2 billion in loans made by FFB without paying the prepayment premiums. CBO estimated that enacting such legislation in 1996 would increase federal outlays by about $120 million per year through 2002 with amounts declining thereafter until the last notes matured in the year 2016. This proposed legislation was never introduced. Several of the entities commented on our use of average revenues per kWh as an indicator of cost competitiveness and risk. In addition, each entity commented on our assessment of the risk of future financial losses. The three PMAs and USDA disagreed with our use of average revenue per kWh to compare utilities’ competitiveness. The three PMAs stated that the use of average revenue per kWh is overly simplistic and may mislead the report’s readers about the magnitude and causes of the difference in costs between the PMAs and other utilities. The three PMAs stated that they do not believe that average revenue per kWh takes into account the differences in the types of power being sold by different utilities. They stated that a more accurate measure would be to compare similar products being offered by different utilities. 
USDA officials stated that many variables not addressed in our analysis could significantly alter any comparison. We believe that average revenue per kWh is a strong indicator of the relative power production costs of the PMAs, TVA, and RUS G&T borrowers compared to IOUs and POGs. For the three PMAs, RUS G&T borrowers, and POGs, average revenue per kWh should equal cost over time because each operates as a nonprofit organization that recovers costs through revenues. This assumes that the entity’s competitive position is such that it can charge sufficiently high rates to recover all costs from customers. For IOUs, average revenue per kWh should represent cost plus the regulated rate of return. Given that a large portion of an IOU’s rate of return (net income) is used to pay common stock dividends, which is a financing cost, average revenue per kWh, while somewhat higher because it includes a profit, is a reasonable approximation of IOUs’ power production costs. In addition, analysts and bond rating agencies commonly use average revenue per kWh in assessing the competitiveness of power rates. We do recognize, however, that using average revenue per kWh as an analytical tool has some limitations. We clearly state in appendix III of volume 2 of this report that the price that any one utility charges another for wholesale energy comprises numerous transaction-specific factors, including fees charged for reserving a portion of capacity, consumption during peak and off-peak periods, and the use of the facilities. In appendix III, we have also clarified our discussion of the current electricity market, in which utilities are generally able to recover their fixed costs from retail customers. Thus, when competing for new wholesale customers, utilities with excess capacity and the ability to recover fixed costs from retail customers are able to sell surplus power at less than full production cost (that is, marginal cost). 
However, despite these limitations, average revenue per kWh is a good indicator of production costs since, over time, utilities must recover all costs to remain in business. The PMAs also stated that because of the variability in output of certain hydropower projects, our use of average revenue per kWh to indicate competitiveness could result in wide variations in a PMA’s competitive position from year to year. To address this concern, in this report and our September 1996 report, we compared the overall average revenue per kWh for the three PMAs, IOUs, and POGs from 1990 through 1995. In each year, the overall average revenue per kWh for each of the three PMAs was lower than that of the IOUs and POGs by at least 40 percent. This 6-year comparison shows that the use of average revenue per kWh does not result in wide fluctuations in assessing the PMAs’ competitiveness from year to year. Each entity commented on our assessment of the risk to the federal government of future financial losses related to that entity. USDA did not agree with our assessment that it is probable that some RUS borrowers who are not currently financially distressed will require loan write-offs in the future. The three PMAs asserted that our assessment of risk of future losses is overstated. BPA stated that our risk assessment did not adequately take into account changes which will occur in the year 2012, when BPA asserts that the price of its wholesale power should be well below market. TVA stated that our assessment of the federal government’s risk of loss is more negative than is warranted. USDA agreed that in the near future, some write-offs of loans related to old investments by borrowers that are currently financially distressed are probable. However, USDA disagreed that it is probable that other borrowers that are not currently financially distressed will also require write-offs of their loans. 
USDA stated that it does not believe that the past history of power plant investment is useful in projecting the future in a new competitive, restructured, unbundled infrastructure. We disagree. Because past investments must be recovered and directly impact current production costs, these investments will be key factors in the ability of RUS G&Ts to compete in a deregulated environment. Our analysis shows that 27 of the 33 G&T borrowers (82 percent) had higher production costs than the IOUs in their regions. For this and other reasons discussed in our report, it is probable that the federal government will eventually incur losses on some of these G&T borrowers. In a May 1995 report, Moody’s Investors Service reported, “In a more competitive environment, a G&T’s production costs relative to those of IOUs will become increasingly important. Competitively priced power resulting from low generation and purchased power costs is essential for co-ops to maintain their place in the electric utility industry of the future.” The three PMAs asserted that our assessment of risk of future losses is overstated and that the risks of future financial losses from four projects (Russell, Truman, Mead-Phoenix, and Washoe) are not “probable.” We disagree. As discussed in our report, each of these projects faces operational and/or financial difficulties. Increasing competition in the electricity industry is expected to lead to falling prices, which will put even more competitive pressure on these projects and could result in financial difficulties at others. For the reasons detailed in our report, these four projects all meet the probable loss criteria if they do not become fully operational (Russell and Truman) or certain proposals to mitigate the risk are not implemented or are not successful (Mead-Phoenix and Washoe). Because the likelihood that all four projects can be successfully turned around is, in our opinion, remote, a probable risk assessment overall is appropriate. 
BPA stated that by concluding that it is “reasonably possible” that the federal government will incur a loss from BPA’s operations after fiscal year 2001, we did not describe the limited, transitional nature of the risk. BPA asserted that the risk is confined to the approximately 10 years after 2001, following which BPA’s costs and the price of its wholesale power should be well below market and the risk to the government “remote.” We agree that, all else being equal, if BPA pays off its nonfederal debt as planned, the federal government’s risk begins to decrease after 2012. After that year, nuclear project debt service costs are expected to decrease from an average of about $570 million (about 29 percent of BPA’s total operating expenses for fiscal year 1996) annually to an average of about $304 million annually for the period from 2013 through 2018. However, the risk of future financial loss to the federal government would not become remote until 2019, when BPA’s scheduled debt service payments drop to less than $3 million and decrease further in the following years. TVA stated that our long-term assessment of the federal government’s risk of loss due to its involvement in TVA is more negative than is warranted. TVA stated that although there are many uncertainties about the future of the utility industry, it believes that the steps it has taken over the past 10 years and future plans to improve TVA’s competitiveness will allow it to be successful in a restructured electric utility marketplace. We disagree. As also discussed in our August 1995 report, if TVA is required to compete when wholesale prices are expected to be falling, its high level of fixed costs and deferred assets compared to neighboring utilities make it reasonably possible that the government would incur future losses. The following facts, among others, support our position. 
At September 30, 1996, TVA had $27.9 billion of debt and $6.3 billion of deferred assets, which leaves TVA with far more financing and deferred costs than its potential competitors. For fiscal year 1996, we found that TVA’s ratio of financing costs to revenue was more than twice the average of 11 neighboring utilities. In addition, TVA’s deferred assets at September 30, 1996, were nearly 20 percent of its gross PP&E, compared to about 3 percent for the IOUs. TVA’s vulnerability to future competition, without protections, was recently demonstrated when one of its customers, the Bristol Virginia Utilities Board, announced that it will leave the TVA system for Cinergy, Inc. beginning on January 1, 1998. Cinergy offered Bristol firm, delivered wholesale power at 2.59 cents per kWh for 7 years—40 percent lower than TVA’s comparable wholesale rate of 4.3 cents per kWh. Through the third quarter of fiscal year 1997, TVA reported a net loss of about $176 million. In May 1997, the Board of a second TVA distributor—Paducah, Kentucky—voted to give TVA its 10-year notice to cancel its power contract. TVA’s five largest distributors, which currently buy about one-third of TVA’s power, have indicated that they plan to negotiate changes to their contracts with TVA. In addition, as discussed in our current report, on July 22, 1997, TVA released a 10-year business plan that identifies actions it plans to take to position its power operations to meet the challenges of the restructured marketplace. TVA’s planned actions support the position we have taken in this and our August 1995 report about the impact TVA’s high level of financing costs and deferred assets will have on its ability to compete in a deregulated marketplace. 
In announcing the 10-year plan, TVA stated that the actions described in the plan were “deemed critical for TVA to provide power at projected market prices of the future.” TVA’s Chief Financial Officer also stated “To remain competitive in the changing electrical-utility market, we must reduce our total cost of power and become more financially flexible to respond quickly to changing customer demands.” We agree with these recent TVA statements. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this report. At that time, we will send copies to appropriate House and Senate committees; the Ranking Minority Members of the House Committee on the Budget and the Subcommittee on Water and Power Resources, House Committee on Resources; interested Members of the Congress; the Secretary of Agriculture; the Secretary of the Interior; the Secretary of Energy; the Secretary of Defense; the Director, Office of Management and Budget; the Chairman of the Board of Directors of the Tennessee Valley Authority; and other interested parties. We will make copies available to others upon request. Please call me at (202) 512-8341 or Gregory Kutz, Associate Director for Governmentwide Audits, at (202) 512-9505 if you or your staffs have any questions. Major contributors to this report are listed in appendix XIV of volume 2.

Linda M. Calbom
Director, Civil Audits
Pursuant to a congressional request, GAO reviewed federal electricity activities, focusing on the: (1) federal government's net recurring cost from the electricity-related activities at the Department of Agriculture's Rural Utilities Service (RUS), the Department of Energy's power marketing administrations (PMA), and the Tennessee Valley Authority (TVA) for fiscal year (FY) 1996 and, where possible, the cumulative net cost for FY 1992 through 1996; and (2) likelihood of future losses beyond the net recurring costs to the federal government from these entities. GAO noted that: (1) the federal government incurs net costs of over a billion dollars annually in supporting the electricity-related activities of RUS and the PMAs; (2) GAO estimates that the net costs to the federal government for FY 1996 totaled about $2.5 billion--$0.4 billion for BPA, $0.2 billion for the three PMAs, and about $1.9 billion for RUS, including about $982 million in RUS loan write-offs; (3) the federal government is exposed to additional future losses beyond the recurring net costs resulting from the government's more than $84 billion in direct and indirect financial involvement in the electricity-related activities of RUS, the PMAs, and TVA as of September 30, 1996; (4) these potential future losses relate to the possibility that RUS borrowers, the PMAs, and TVA would be unable to repay the full $53 billion in debt owed to the federal government or that the federal government would incur unreimbursed costs as a result of actions it took to prevent default or breach of contract on the $31 billion in nonfederal debt; (5) this risk exists because certain RUS borrowers, the PMAs (to varying degrees) and TVA are financially vulnerable primarily as a result of uneconomical construction projects and the accumulation of substantial debt, which have resulted in high fixed costs; (6) the Southeastern, Southwestern, and Western PMAs generally market wholesale power that consistently costs at least 40 
percent less than power sold by nonfederal utilities and are therefore currently competitively sound overall; (7) however, the three PMAs maintain this overall soundness in part because they do not recover all power-related costs; (8) if they were required to recover some or all of these power-related costs, their ability to remain competitive might be impaired and the risk of future financial loss to the federal government increased; (9) also, each has one or a few projects or rate-setting systems with problems that, taken as a whole, make the risk of some loss to the federal government probable; and (10) for TVA, the risk that the federal government will incur losses is remote as long as TVA retains a position similar to a traditional regulated utility monopoly in its service area.
DOD is one of the nation’s largest employers, with more than 1.4 million active-duty personnel (as of March 2011). To fulfill its mission of maintaining national security, DOD must meet its human capital needs by recruiting, retaining, and motivating a large number of qualified individuals, though the requirement for new recruits has declined in the last couple of years (see table 1 for the numbers of accessions and reenlistments from fiscal years 2006 through 2010). The Office of the Under Secretary of Defense for Personnel and Readiness is principally responsible for establishing active-duty compensation policy. In 1962, the Gorham Commission adopted the term “regular military compensation” to be used to compare military and civilian-sector pay. Regular military compensation is defined as the sum of basic pay, allowances for housing and subsistence, and federal tax advantage. In addition to regular military compensation, DOD also uses over 60 authorized special and incentive pays, including various enlistment and selective reenlistment bonuses, to offer incentives to undertake or continue service in a particular specialty or type of duty assignment. According to DOD, special pays are used to selectively address specific force management needs, such as staffing shortfalls in particular occupational areas, hazardous or otherwise less desirable duty assignments, and attainment and retention of valuable skills. In addition, in certain occupational categories, such as technical and professional fields, special pays are used to help ensure pay comparability with civilian sector salaries. OSD believes that these pays offer flexibility to the compensation system not otherwise available through the basic pay table. To provide guidance to the services on managing their enlistment and reenlistment bonus programs, the Office of the Secretary of Defense (OSD) issued DOD Directive 1304.21. 
Under this directive, the Principal Deputy Under Secretary of Defense for Personnel and Readiness is assigned responsibilities including monitoring certain bonus programs carried out by the services. Specifically, the Principal Deputy Under Secretary of Defense for Personnel and Readiness is responsible for establishing (1) criteria for designating military specialties that qualify for these bonuses, (2) criteria for individual members’ eligibility for these bonuses, and (3) reporting and data requirements for the periodic review and evaluation of these bonus programs. The Principal Deputy Under Secretary of Defense for Personnel and Readiness is also responsible for recommending to the Secretary of Defense measures required to attain the most efficient use of resources devoted to these programs. As required by 37 U.S.C. § 1008, at least once every 4 years, the President directs a review of the principles and concepts of the military compensation system. These regular studies are called the Quadrennial Reviews of Military Compensation and typically focus on issues such as achieving flexibility and promoting fairness in compensation. The most recent Quadrennial Review was completed in 2008 and offered a number of recommendations, including simplifying the structure of special and incentive pays. We have completed a body of work on military compensation and enlistment and reenlistment bonuses. For example, in April 2010, we reported on the comparison of military to civilian pay. In a 2009 report, we evaluated the Army’s use of bonuses and determined that the Army did not know whether it was paying more than it needed to pay to get a cost-effective return on investment. In that report, we recommended that the Army build on available analyses to set cost-effective enlistment and reenlistment bonuses in order to avoid making excessive payments. 
As a result of our report, the Army significantly reduced its enlistment and reenlistment bonus program; however, the reductions were not based on specific analysis that determined the cost-effective bonus amount. DOD contracted $1.2 billion in fiscal year 2010 for enlistment and reenlistment bonuses, an amount that was 58 percent less than the $2.8 billion contracted in fiscal year 2008, its peak year. For the services, total contracted bonus amounts peaked in fiscal years 2008 or 2009 and then decreased. (See fig. 1.) Specifically, for fiscal years 2006 through 2009, total contracted amounts for bonuses rose for the Air Force and the Marine Corps and declined thereafter by 16 percent and 64 percent, respectively. For the Army and the Navy, contracted amounts increased through fiscal year 2008 and then declined by 78 percent and 40 percent, respectively. Though the Air Force contracted the least of all the services for bonuses from fiscal years 2006 to 2009, the total contracted amount increased by 254 percent during that period, from $100 million to $352 million. The Air Force attributes this increase, in part, to the reenlistment bonus program being underfunded in fiscal year 2006. In addition, the Air Force believes that the increase was necessary to ensure that its hard-to-fill occupational specialties, such as battlefield airmen, were filled and to accommodate the high operations tempo necessary for the wars in Iraq and Afghanistan. During the same time, the Marine Corps increased the amounts contracted by 398 percent, from $108 million to $540 million. The Marine Corps attributes this increase to the 2007 presidential Grow-the-Force initiative, which required the Marine Corps to increase its number of active-duty personnel by 27,000. The Army also grew as part of the Grow-the-Force initiative; its total contracted amounts increased by 15 percent from fiscal years 2006 to 2008. 
When growing the force, the Army stated that it was not targeting bonuses to hard-to-fill or critical specialties but rather was focused on meeting its overall recruiting mission. As a result, once the Army met 99 percent of its growth in fiscal year 2008, it began to pay fewer bonuses and target them to personnel with specific critical skill sets, such as divers and satellite communication systems operators/maintainers. Between fiscal years 2006 and 2008, the Navy increased its total bonus funds by 13 percent. Navy officials attribute this increase, in part, to the low unemployment rates for years 2007 and 2008 and the need to provide incentives to retain sailors with more options for postmilitary employment. From fiscal years 2006 through 2010, DOD contracted $11 billion for enlistment and reenlistment bonuses (in constant fiscal year 2010 dollars). Of this total, the Army accounted for approximately half, and the Air Force for the least amount, at 9 percent (see fig. 2). During this time, DOD reported that the active components of all four services met or exceeded their numeric goals for enlisted accessions and, with the exception of the Army in fiscal years 2006 through 2008, the active components of the services also met their benchmarks for recruit quality. For retention, the services generally met their goals but not in all years. With the exception of the Army, the services contracted more on their reenlistment bonus programs than on their enlistment bonus programs. Of the $11 billion in contracted bonuses by all the services, $4.5 billion, or 40 percent, was for enlistment bonuses, and $6.6 billion, or 60 percent, was for reenlistment bonuses. Army officials said they were paying high enlistment bonuses to achieve very high accession rates beginning in 2005 because of the negative publicity surrounding the wars in Iraq and Afghanistan, coupled with a strong economy and high employment rates from 2005 to 2008. 
In addition, the Army was to increase its end strength, consistent with the “Grow-the-Force” plan, from approximately 480,000 to approximately 547,000. To meet this goal, the Army also had to retain greater numbers of personnel. Unlike the Army, the Navy, Air Force, and Marine Corps contracted a greater portion of their overall bonus amounts on reenlistment, rather than enlistment, bonuses (see fig. 3). According to the Navy, more is spent on reenlistment bonuses because the cost to replace trained sailors is significant due to long training programs, high attrition rates, and a high demand in the civilian sector for the occupations they are trained in, such as nuclear occupations. Similarly, the Air Force attributed its greater spending on reenlistment bonuses to the competition with the private sector for trained and experienced airmen. The Air Force also stated that the eligible population for reenlistment bonuses is much larger than for enlistment bonuses and the Air Force has a training investment in these experienced servicemembers. According to the Marine Corps, its focus has also been on retaining proven combat leaders, and it has therefore been targeting the majority of its discretionary funding on retention rather than accessions. In addition, the Marine Corps stated that the Marine Corps “sells itself” to potential applicants and therefore needs to offer enlistment bonuses only for certain hard-to-fill occupations. The services also varied in the average amounts of bonuses. From fiscal years 2006 through 2008, the Army’s average per-person enlistment bonuses were higher than those of the other services (see fig. 4). For example, in fiscal year 2008, the Army’s average enlistment bonus was $18,085, while the Air Force’s was only $4,271. However, in fiscal years 2009 and 2010, the Navy’s average per-person enlistment bonus amounts were higher than those of all the other services. 
For example, in fiscal year 2010, the Navy’s average enlistment bonus was $23,957, while the Army’s was $5,969. Navy officials stated that, during this period, the Navy began to give bonuses to fewer personnel but gave those personnel higher bonuses, thus driving the average up. With respect to reenlistment bonuses, the Air Force’s average per-person bonus amount was higher than those of the other services from fiscal years 2006 through 2008. The Army’s average per-person bonus amount was smaller than those of the other services from fiscal years 2006 through 2010, ranging from $13,796 down to $4,392 (see fig. 5). In contrast, for fiscal years 2006 through 2008, the Air Force’s average per-person reenlistment bonus amounts were higher than the other services’, ranging from $32,667 to $36,247. The Marine Corps’ average was the highest of all the services’ in fiscal year 2009, at $36,753, and the Navy’s average was highest in fiscal year 2010, at $32,719. According to Navy officials, the Navy needs to retain highly skilled sailors who perform arduous missions and whose extensive training produces skills that are marketable in private industry. For example, officials commented that the SEALs are the first in line when infiltrating military targets in dangerous environments, and their skills have been sought by private contractors; as a result, their bonuses tend to be higher. Navy officials also said that the length and cost of training nuclear personnel make the cost of training a replacement sailor greater than the bonus. The services have processes in place that include the analysis of data on how difficult it is to recruit and retain personnel in particular occupations and the subjective judgment of personnel who are involved in managing these occupations. DOD guidance allows the military departments the flexibility to offer a bonus to any occupation that meets certain criteria, such as being hard to fill or retain, and they may adjust bonuses as market conditions change. 
However, although much research has been conducted on bonuses’ effects on enlistment and retention, DOD does not know whether the services have been paying more than necessary to meet their recruiting and retention goals. Identifying optimal bonus amounts is challenging because such studies must control for the numerous, changing factors that affect individuals’ recruiting and retention decisions, such as the unemployment rate, the deployment rate resulting from overseas operations, and the changing public perceptions of the war. The services’ processes for determining which occupations should be offered enlistment or reenlistment bonuses include the use of models. While the services use different models, they generally incorporate factors such as data on occupations that have historically received bonuses, attrition and retention rates for these occupations, and the current population for each occupation. Models for determining eligibility for enlistment bonuses include data on occupational fill rates and available training slots for particular occupations. Models for determining reenlistment bonuses include data on the retention rates of and projected future shortages in particular occupations. In addition to using models, the services seek stakeholder input on their bonus program plans. Stakeholders include personnel managers who have experience with the occupations being discussed and can contribute information that cannot be provided by the models, such as whether servicemembers in a particular occupation are experiencing unusual difficulties. Stakeholder input is provided differently across the services but is consistently used to make adjustments to data provided by the models. For example, the Army and the Navy consider stakeholder input through formal meetings. Specifically, the Army formally holds Enlisted Incentives Review Boards each quarter that include personnel from the Army Recruiting Command and the Army Human Resources Command. 
During these board meetings, stakeholders discuss which occupations should receive a bonus and whether these bonuses are appropriately set, and they come to a consensus on how much each bonus should be during the next quarter. The Navy, in addition to a monthly review of the bonus program, formally convenes a working group three to four times per year for reenlistment bonuses, at which the personnel managers responsible for monitoring and managing the retention health of occupations present opinions and analysis as to whether the recommended bonus amounts are set appropriately or need adjustment. In contrast, the Marine Corps and Air Force use a less formal approach to stakeholder input. For example, to obtain input on their projected enlistment bonus award plans, Marine Corps and Air Force bonus program managers seek input from their recruiting and human resources personnel, who provide their perspectives on projected future shortages. All services stressed that, regardless of whether bonus levels are produced by models or stakeholder input, in the end, bonus amounts must be adjusted to fit into the services’ fiscal budgets. OSD guidance allows the military departments flexibility to offer bonuses to occupations that they are having difficulty filling. OSD guidance to the services on administering their bonus programs states that the intent of bonuses is to influence personnel inventories in situations in which less costly methods have proven inadequate or impractical. The guidance also states that the military skills selected for the award of bonuses must be essential to the accomplishment of defense missions. Additionally, the guidance sets forth some general criteria to use when identifying bonus-eligible occupations. 
For enlistment bonuses, the Secretaries of the military departments are to consider, among other things, the attainment of total accession objectives, priority of the skill, year group and pay grade shortages, and length and cost of training. For reenlistment bonuses, the Secretaries of the military department concerned are to consider, among other things, critical personnel shortages, retention in relation to objectives, high training cost, and arduousness or unattractiveness of the occupation. These general criteria provided by OSD allow each Secretary of a military department to determine what occupations should be considered essential and therefore eligible for bonuses. Because the criteria OSD lists in its guidance are broadly defined and because the Secretaries of the military departments are purposely given the flexibility to adjust which occupations they believe need to be offered bonuses as conditions change, the departments are given the authority to award bonuses to any occupation under certain conditions. That is, all service occupations could be considered essential to the accomplishment of defense missions if the department is experiencing difficulty filling them. Service officials told us that this flexibility allows the departments to adjust bonuses quickly as market conditions change. An Army official explained that, for example, in some cases an occupation such as cook may need a bonus because personnel do not want to be assigned to it. All services regularly monitor the performance of their enlistment and reenlistment bonus programs. With respect to measuring the performance of their enlistment bonus programs, all services said that they continuously monitor their progress in meeting recruiting goals. For example, Army officials told us that they use the quarterly recruiting numbers within each occupational specialty as indicators of the effectiveness of the Army’s enlistment bonus program. 
If they notice that an occupation is lagging behind or that recruiters have been particularly successful in meeting goals for an occupation, the quarterly Enlisted Incentives Review Board provides an opportunity for the Army to move that occupation to a level associated with a higher or lower bonus amount. The Army then continues to monitor its recruiting numbers to gauge whether this change has worked. With respect to measuring the performance of the retention bonus programs, all services monitor their progress in meeting their retention goals. For example, Navy officials said they review the percentage of reenlistment goals achieved for each occupational specialty and use that information to increase or decrease bonus amounts. With both enlistment and reenlistment bonuses, the services take a certain amount of risk when changing bonus amounts, but officials told us that continuous monitoring of the recruiting and retention data allows them to make necessary adjustments. Moreover, officials also told us that they are not willing to take too much of a risk with some critical occupations. For example, Navy officials said that, given the length and cost of training nuclear personnel, the high qualifications that these personnel must have, and the high marketability of their skills in the private sector, the Navy sees bonuses for these occupations as essential. The services have been relying on the analyses of recruiting and retention data to determine whether their bonus programs have produced intended results, but these data alone are not sufficient to help ensure that bonus levels are set at the most cost-effective amounts. Just as for any government program, resources available for bonuses are finite, and increasing bonuses for some groups or occupations must come at the expense of incentives for other groups or occupations. 
Service officials agreed that their existing approach of monitoring the performance of bonus programs by looking at recruiting and retention data does not tell them what specific bonus amounts are most cost-effective and whether their goals could be achieved with a smaller bonus amount or a different, and possibly less costly, combination of incentives. OSD guidance indicates that officials must exercise bonus authorities in a cost-effective manner. According to DOD Directive 1304.21 and DOD Instruction 1304.29, bonuses are intended for specific situations in which less costly methods have proven inadequate or impractical. DOD Directive 1304.21 also states that it is wasteful to use financial incentives when less costly but equally effective actions are available. Further, in its 2006 report, the Defense Advisory Committee on Military Compensation set forth principles for guiding the military compensation system, one of which called on the military compensation system to meet force management objectives in the least costly manner. There is an extensive body of research on bonus effectiveness, but much of it does not assess the cost-effectiveness of specific bonus amounts. Over the years, the services and other organizations have conducted extensive research on the use of cash incentives, some of it dating back to the 1960s and 1970s. This research has generally shown that bonuses have a positive effect on the recruitment and retention of military personnel, even after controlling for a variety of demographic, economic, and other factors. Additionally, a study issued by RAND in 1986 specifically considered the cost-effectiveness of bonuses. RAND analyzed the results of a nationwide experiment to assess the effects of varying enlistment bonus amounts, showing that cash bonuses were extremely effective at channeling high-quality individuals into the traditionally hard-to-fill occupations. 
Furthermore, RAND found that increased bonuses had the effect of both bringing more people into the service and lengthening the terms of their commitment. However, according to DOD and the researchers interviewed, there is no recent work focused on the cost-effectiveness of specific bonus amounts. We cited some of this research in a 1988 report on the advantages and disadvantages of a draft versus an all-volunteer force and, more recently, in a 2009 report on the Army’s use of incentives to increase its end strength. In the 2009 report, which focused on the Army, we determined that the Army did not know whether it was paying more than it needed to pay to get a cost-effective return on investment, and we recommended that the Army build on available analyses to set cost-effective enlistment and reenlistment bonuses in order to avoid making excessive payments. DOD concurred with our recommendation and commissioned RAND to conduct a study to implement it. The study, released in June 2010, found that bonuses were an important and flexible tool in meeting recruiting and retention objectives, particularly for the Army, but did not assess whether bonuses were set too high. According to DOD, a detailed study of bonus amounts was beyond the scope of the RAND study. DOD wanted that study to determine whether bonuses in general were an efficient and effective use of resources for recruiting and retention and how these bonuses compared with other incentives. DOD believes that determining what bonus amounts are optimal is significant and complex enough to warrant its own study and plans to pursue that line of effort when sufficient resources become available; at present, however, it has no immediate plans to do so. 
We recognize that identifying optimal bonus amounts is challenging because such studies must control for the numerous, changing factors that affect individuals’ recruiting and retention decisions, such as the unemployment rate, the deployment rate resulting from overseas operations, and the changing public perceptions of the war. Despite these challenges, research organizations and some of the services have been considering various approaches that could be used for that purpose. Several research organizations have developed specific methodologies for conducting studies on the cost-effectiveness of bonuses. For example, one research organization submitted a proposal to DOD and the Army to develop an econometric model for determining the most cost-effective bonus amounts for different occupations. Another research organization is considering the use of an experiment, in combination with an econometric model, for determining the minimal amounts of bonuses needed to fill different occupations and had informally shared its ideas with DOD. The researchers interviewed considered the costs of such research to be modest and expected the benefits of any potential improvements to the services’ bonus programs resulting from such research to outweigh the costs, particularly given the billions of dollars that the services have spent on bonuses over the years. According to DOD, service officials are interested in this type of research, which would provide them with information needed to more effectively manage limited resources in their bonus programs. In fact, some services have already taken steps toward obtaining this information. For example, the Army has funded an econometric model developed by a research organization to predict the likelihood of applicants’ choosing particular occupational specialties as a function of various factors, including bonuses offered. 
According to an Army official, this model would allow the Army to evaluate alternative cash incentive packages needed to fill specific occupations, thus optimizing its recruiting resources. The Navy uses an econometric model developed 10 years ago by a research organization, which Navy officials told us allows them to predict the extent to which a mix of recruiting resources, including varying bonus amounts, would enable them to meet recruiting goals. Although Navy officials said that this model does not provide information on recruiting outcomes within specific occupations, it helps them determine which bonus amounts would be needed to meet the overall recruiting mission. While efforts to develop ways to assess the cost-effectiveness of bonuses have been made by some research organizations and have generated interest at the individual service level, OSD has not coordinated research in this area. The Principal Deputy Under Secretary of Defense for Personnel and Readiness is responsible for monitoring the bonus programs of the military services and recommending to the Secretary of Defense measures required to attain the most efficient use of resources devoted to the programs. The Office of the Under Secretary of Defense for Personnel and Readiness therefore has a role in monitoring individual service efforts to assess the cost-effectiveness of bonuses, which could be facilitated by information-sharing among the services on this issue. OSD recognizes the importance of having information on the cost-effectiveness of bonuses and using that information to guide the services’ management of their bonus programs. OSD officials stated that they are in constant contact with the services regarding their use of bonuses and facilitate conferences, working groups, and other meetings that allow the services to discuss their incentive programs. 
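The econometric models described above generally predict a recruiting or retention outcome as a function of the bonus offered and market conditions, and such a model can then be inverted to find the smallest bonus that meets a goal. The following sketch illustrates that idea in simplified form; the logistic specification and every coefficient value are illustrative assumptions, not estimates from any service's model or from DOD data.

```python
import math


def retention_probability(bonus_k, unemployment_pct,
                          b0=-1.2, b_bonus=0.08, b_unemp=0.10):
    """Modeled probability that a member reenlists, as a logistic
    function of the bonus offered (in thousands of dollars) and the
    unemployment rate (percent). All coefficients are hypothetical
    placeholders chosen only to illustrate the approach."""
    z = b0 + b_bonus * bonus_k + b_unemp * unemployment_pct
    return 1.0 / (1.0 + math.exp(-z))


def minimal_bonus(target_rate, unemployment_pct,
                  b0=-1.2, b_bonus=0.08, b_unemp=0.10):
    """Smallest bonus (in thousands of dollars) at which the modeled
    retention probability reaches the target rate; 0 if the target is
    already met with no bonus. Obtained by inverting the logistic model."""
    logit = math.log(target_rate / (1.0 - target_rate))
    bonus_k = (logit - b0 - b_unemp * unemployment_pct) / b_bonus
    return max(0.0, bonus_k)
```

With these placeholder coefficients, a higher unemployment rate lowers the bonus needed to reach the same retention target, mirroring the market-condition effects the services described; a real model would estimate the coefficients from historical recruiting and retention data.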
Moreover, the development of statistical models for assessing bonus effectiveness is one of the fiscal year 2012 research priorities for the Accessions Policy office within the Office of the Under Secretary of Defense for Personnel and Readiness. However, to date, OSD has not facilitated the exchange of information among the services on how best to conduct research on the cost-effectiveness of bonuses, what efficiencies could be gained from such efforts, and whether to undertake them jointly. Without such information-sharing, the services may not be able to fully take advantage of existing and emerging methodologies for assessing cost-effectiveness, share lessons learned, and ultimately obtain the critical information needed to know whether they are getting the best return on their bonus investments. DOD has begun to increase its flexibility in managing special and incentive pays, as authorized by the National Defense Authorization Act for Fiscal Year 2008. According to DOD, special and incentive pays are intended to provide the services with flexible compensation dollars that can be used to address specific staffing needs and other force management issues that cannot be efficiently addressed through basic pay increases. However, while DOD has discretionary authority to determine the amount and the recipients of enlistment and reenlistment bonuses based on personnel needs, it did not previously have similar discretion to adjust pays whose amounts and eligibility criteria are specified by law. According to DOD, a significant number of special and incentive pays paid to military personnel have been statutorily prescribed. Of the 15 special and incentive pays we reviewed, 6 are currently entitlement pays; these accounted for $3.9 billion, or 29 percent, of the $13.6 billion expended on the 15 pays from fiscal years 2006 through 2010. 
DOD has not yet exercised its authority to consolidate all 15 of the pays we reviewed and thereby increase its flexibility in managing who receives these pays and how much recipients are paid. Specifically, DOD has not yet consolidated pays in the following categories: Aviation Career Incentive Pay; Career Sea Pay; Submarine Duty Incentive Pay; Hazardous Duty Incentive Pay, which includes Crew Member Flying Duty Pay; and Parachute Duty Pay. The differences in the flexibility DOD has in managing entitlement pays that are currently required by statute, compared with discretionary pays, are illustrated by the two special and incentive pays that the services give to aviation officers: Aviation Career Incentive Pay (ACIP) and Aviation Continuation Pay (ACP). The services have specific statutory guidelines that require certain levels of payment and define the personnel who receive ACIP until this pay is consolidated with other flight pays. If a servicemember meets the aviation criteria outlined in 37 U.S.C. § 301a, he or she is entitled to this special pay on a graded scale that depends on years of flying experience. The payments range from $125 to $840 a month. Officer aviators who meet the statutory criteria are entitled by law to this monthly supplement regardless of individual assignments. In other words, payment does not vary according to type of aircraft, training required, or any other measure the services might use to differentiate aviator assignments. By comparison, ACP is a special pay authority that is used as a retention bonus for officers who have completed their active-duty service obligations, to incentivize them to remain on active duty. Unlike the restrictions currently applicable to administering ACIP, DOD and the services have the discretion to decide who should get ACP and how much to pay, up to the statutory maximum of $25,000 per year. 
The flexibility the services currently have in administering ACP allows them to use the pay differently from year to year according to their needs. For example, over the 5-year period we reviewed, the Marine Corps offered the lowest amounts of ACP, ranging from a minimum of $2,000 to a maximum of $20,000. The Air Force and the Army offered the highest levels of ACP, ranging from $12,000 to $25,000; however, despite having the same range, the two services differed in the average bonus amounts awarded, with averages of $20,000 and $15,000, respectively. Each service also determines which aviation specialties it considers critical and therefore which of its aviators should receive the highest bonus amounts. For example, as DOD reported in its 2010 report to Congress on Aviation Continuation Pay, in the Air Force’s fiscal year 2010 program, the highest amount—$25,000 per year—was offered to pilots who had just completed their undergraduate flying training service commitments and who signed a 5-year agreement. Uncommitted pilots and combat systems officers operating remotely piloted aircraft were offered $15,000 a year for 3-, 4-, or 5-year contracts; air battle managers were offered the same amount for 5-year contracts. By comparison, the Army offered $25,000 per year to Special Operations Aviation Regiment pilots and $12,000 per year to pilots who were Tactical Operations Officers. Each of the services, with the exception of the Army, decreased the number of servicemembers receiving ACP from fiscal years 2006 to 2010 (see table 2). All services decreased their ACP programs in fiscal year 2010, but each service justified the program as necessary. For example, the Army reported that shortages remained in critical military occupational specialties and that incentives were necessary to increase pilot inventories, support present readiness, and enable future transformation. The Air Force stated that the demand for pilots continued to exceed supply. 
Specifically, it required a large eligibility pool of pilots for remotely piloted aircraft, special operations forces pilots, and air operations center and air liaison officer pilots. In The Tenth Quadrennial Review of Military Compensation, DOD identified limited flexibility in managing its special pays as a key weakness in its compensation system. DOD further stated that some statutory pays were rarely reviewed, updated, or discontinued, even when the staffing concerns they were designed to address had abated. In order to prevent special and incentive pays from becoming permanent entitlements paid to servicemembers because of statutory requirements, DOD recommended in this review that the more than 60 special and incentive pays be replaced with 8 broad discretionary special and incentive pay authorities that will allow DOD and the services discretion to determine recipients and amounts. This authority was provided in the National Defense Authorization Act for Fiscal Year 2008 and requires DOD to transition to a consolidated structure over a 10-year period. According to DOD’s consolidation plan, the transition will be complete in fiscal year 2014 (see fig. 6). However, OSD officials stated that some pays will be transitioned sooner. For example, OSD is currently preparing a draft policy for transitioning ACP and ACIP, which is expected to be approved this fiscal year by the Secretary of Defense, 1 year ahead of the originally planned date. The Tenth Quadrennial Review identified three benefits of consolidating the statutory authorities for DOD’s special and incentive pays. 
These benefits include (1) increasing the ability of the services to better target resources to high-priority staffing needs and respond to changing circumstances; (2) decreasing the number of pays, thereby reducing the administrative burden of managing over 60 different pays with different sets of rules and budgets; and (3) increasing performance incentives by allowing the services to link some special and incentive pays to high performance, thereby motivating and rewarding effort and achievement. Under the consolidation, for example, aviator pays will be combined into a single pay authority entitled “Special Aviation Incentive Pay and Bonus Authorities for Officers,” allowing the services to make payments to aviators depending on staffing needs and other force management issues specific to each service. This consolidation could result in many differences in the ways the services administer these pays. For example, certain aviator occupations may no longer receive an incentive, or incentives could vary by specific occupation or years of service. DOD has identified perceived benefits of consolidating special and incentive pays, but it does not have baseline metrics in place to measure the effects of its consolidation effort. As we previously reported, organizations should establish baseline measures to assess progress in reaching stated objectives. DOD’s January 2009 report on the consolidation effort, the latest such report available, stated that DOD had converted only a limited number of pays to the new consolidated pay authority, but the report did not outline how effectiveness would be measured in implementing these pays. OSD officials told us that they plan to revise the relevant DOD instructions giving the services guidelines on how to administer the new programs, but they did not say whether these guidelines would include performance metrics for measuring the effects of the consolidation effort. 
As a result, DOD may not be positioned to monitor the implementation of this consolidation to determine whether it is in fact resulting in greater flexibility and more precise targeting of resources and what impact the consolidation is having on DOD’s budget. From fiscal years 2006 through 2010, the Army’s contracted amounts for bonuses rose more dramatically than the other services’, as the Army increased its force size and deployed vast numbers of servicemembers to Iraq and Afghanistan. Conversely, the Army was able to more dramatically decrease its bonus contract amounts as the economy declined, the unemployment rate rose, and the Army was not trying to grow its overall force. The Army, and the other services to some extent, demonstrated that they can use bonuses flexibly in response to changing market conditions, but they still do not know whether they are paying more than they need to pay to attract and retain enlisted personnel. Also, at present, DOD has no formal method of facilitating discussions among the services on efficiencies to be gained from assessing the cost-effectiveness of their incentive programs. Although determining optimal bonus amounts is challenging, coordination of research efforts to determine the return on investment of DOD’s various programs will become increasingly important as constraints on fiscal resources increase. Moreover, determining optimal bonus amounts will help DOD adjust the amounts for occupations due to changing market conditions. Also, DOD has not yet fully implemented its consolidation authorities, which would give it more flexibility to target its special and incentive pays to those servicemembers it needs most to retain and to discontinue paying some servicemembers these pays when it is no longer necessary to retain them. 
The statutory requirement to consolidate DOD’s more than 60 pays should move DOD toward more flexibility in managing its incentive programs, but it will be critical for DOD to continually monitor its progress toward this goal as it completes the consolidation of its special and incentive pays over the next several years. We recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to take the following two actions: Coordinate with the services on conducting research, as appropriate, to determine optimal bonus amounts. As the consolidation of the special and incentive pay programs is completed over the next 7 years and the instructions directing the services on how to administer the new programs are revised, monitor the implementation of this consolidation to determine whether it is in fact resulting in greater flexibility and more precise targeting of resources and what impact the consolidation is having on DOD’s budget. In written comments on a draft of this report, DOD concurred with both of our recommendations. DOD stated that it would find the line of research we discuss in our first recommendation beneficial and that it has discussed this issue on a number of occasions. DOD also said that it will consider this a priority research project and begin it when funds are available. DOD stated that it also agrees, as we discussed in our second recommendation, with the appropriateness of monitoring the implementation of the consolidated authorities to help ensure that they do result in greater flexibility and more precise targeting of resources. However, DOD pointed out that, while it believes the new authorities will result in more precise targeting of resources, the cost of special and incentive pays could increase or decrease based on market conditions, such as the economy. (DOD’s comments appear in their entirety in app. II.) 
We will send copies of this report to the appropriate congressional committees. We will also send copies to the Secretary of Defense; the Under Secretary of Defense for Personnel and Readiness; the Secretaries of the Army, the Navy, and the Air Force; and the Commandant of the Marine Corps. The report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions on this report, please contact me at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. This review included an analysis of enlistment and reenlistment bonuses for enlisted personnel, as well as special pays for officers and enlisted personnel in the active components of the Army, the Navy, the Marine Corps, and the Air Force. We analyzed data on 15 special and incentive pays across the services for fiscal years 2006 through 2010, which represented the top five expenditures for special and incentive pays each year for each service. We focused on pays that were available to most servicemembers. For this reason, we excluded medical pays. To conduct our work, we analyzed service data on enlistment and reenlistment bonuses, reviewed Department of Defense (DOD) and service regulations related to the use of bonuses and special and incentive pays; interviewed DOD and service officials on the processes and methodological tools in place to identify occupations eligible for bonuses and steps taken to assess the effectiveness of their bonus programs; observed two services’ meetings that are convened to determine which occupations should be eligible for bonuses; interviewed researchers knowledgeable about literature on bonus effectiveness; and reviewed selected studies on this subject. 
We interviewed DOD officials in the Washington, D.C., metropolitan area and conducted a site visit to Millington, Tennessee, to observe the Navy’s Working Group convened to determine which occupations should be eligible for bonuses. In the course of our work, we contacted or visited the organizations and offices listed in table 3. To determine trends in the use of enlistment and reenlistment bonuses, we requested and analyzed service data on enlistment and reenlistment bonuses contracted from fiscal year 2006 through fiscal year 2010. For enlistment bonuses, the services provided data on the amounts contracted for various types of enlistment bonuses that they used for the purpose of attracting individuals into the service, such as bonuses awarded for entering specific occupational specialties, having certain qualifications, or leaving for basic training within a specific amount of time. Some of the bonuses, such as those paid through the Army’s Advantage Fund, were only available in some of the years for which the data were requested. In conducting our analyses of enlistment bonuses, we combined the amounts that the services contracted for all enlistment bonuses in a given fiscal year. For reenlistment bonuses, all services provided data on the amounts contracted in the Selective Reenlistment Bonus (SRB) program, which offers monetary incentives to qualified personnel who reenlist in certain occupations. We assessed the reliability of each service’s enlistment and reenlistment bonus data by obtaining information from the services on their systems’ ability to record, track, and report on these data, as well as the quality control measures in place to ensure that the data are reliable for reporting purposes. We found enlistment and reenlistment data reported by the services to be sufficiently reliable to demonstrate trends in the services’ use of these incentives. 
In order to observe the trends in the use of enlistment and reenlistment bonuses over time, we adjusted the data provided by the services for inflation by using the Consumer Price Index. To evaluate the extent to which the services have processes to designate occupations that require bonuses and whether bonus amounts are optimally set, we reviewed DOD and service regulations pertaining to their processes for designating bonus-eligible occupations. We also interviewed relevant officials from the Office of the Secretary of Defense (OSD) and the services with responsibilities for designating occupations as bonus-eligible on the processes in place to determine which occupations should receive bonuses, including the analytical tools, such as statistical models, used for this purpose. Additionally, we discussed with them how the effectiveness of their bonus programs is measured, requesting any available data to demonstrate the effectiveness of their bonus programs. We also observed two services’ meetings that are convened to determine which occupations should be eligible for bonuses. To determine whether bonus amounts are optimally set, we requested and reviewed the data used by the services to gauge their bonus programs’ effectiveness. All the services indicated that they use accession and retention data for that purpose, and we obtained these data for all the services for fiscal years 2006 through 2010 from OSD. In addition, we contacted officials from the Army Research Institute, the Center for Naval Analyses, the Institute for Defense Analyses, RAND, and the Lewin Group to discuss their past and proposed work on bonus effectiveness. We also reviewed selected studies on bonus effectiveness. 
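The inflation adjustment described above is a standard CPI deflation step. The sketch below illustrates the calculation in Python; the CPI values and bonus amount are hypothetical placeholders, not figures from the report:

```python
# Sketch of adjusting nominal dollar amounts for inflation with the CPI.
# The CPI values and bonus amounts below are hypothetical placeholders,
# not figures from the report.

def to_constant_dollars(nominal, cpi_year, cpi_base):
    """Convert a nominal amount to base-year (constant) dollars."""
    return nominal * (cpi_base / cpi_year)

# Hypothetical annual-average CPI values (base year: 2010).
cpi = {2006: 201.6, 2010: 218.1}

# A hypothetical $100 million contracted in fiscal year 2006,
# expressed in fiscal year 2010 dollars.
adjusted = to_constant_dollars(100.0, cpi[2006], cpi[2010])
print(round(adjusted, 1))  # 108.2
```

Dividing by the contract-year CPI and multiplying by the base-year CPI puts every year's contracted amounts on a common purchasing-power footing, which is what makes the multiyear trend comparison meaningful.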
To determine how much flexibility DOD has in managing selected special and incentive pays, we requested and analyzed service data on the top five special pays (according to overall expended dollar amount by service) for officer and enlisted active-duty personnel from fiscal year 2006 through fiscal year 2010. The list of the top five pays in each of these years varied by service, as shown in table 4. For the purposes of this objective, we excluded enlistment and selective reenlistment bonuses because we addressed them in detail in previous objectives. We also excluded the Critical Skills Retention Bonus for enlisted personnel. In addition, we excluded medical pays for enlisted personnel and officers because we focused on pays that were available to most servicemembers. We assessed the reliability of each service’s special pays data by obtaining information from the services on their systems’ ability to record, track, and report on these data, as well as the quality control measures in place to ensure that the data are reliable for reporting purposes. We found the special pays data reported by the services to be sufficiently reliable for demonstrating trends in the services’ use of these incentives over time. In addition, we interviewed DOD officials on their role in managing special pay programs, the amount of flexibility they have over them, and their ongoing efforts to consolidate these pays. We also requested and reviewed DOD reports and other documents pertaining to special pays and the consolidation effort, such as the 2010 report to Congress on Aviation Continuation Pay and the 2009 report to Congress on the implementation plan for the consolidation of special pays. We conducted this performance audit from September 2010 through June 2011 in accordance with generally accepted government auditing standards. 
These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact above, Lori Atkinson, Assistant Director; Natalya Barden; Darreisha Bates; Timothy Carr; Grace Coleman; K. Nicole Harms; Charles Perdue; Terry Richardson; Beverly Schladt; Amie Steele; Michael Willems; and Jade Winfree made key contributions to this report.
The Senate report to accompany the 2011 Defense authorization bill directed GAO to assess the Department of Defense's (DOD) use of cash incentives to recruit and retain highly qualified individuals for service in the armed forces. This report (1) identifies recent trends in DOD's use of enlistment and reenlistment bonuses, (2) assesses the extent to which the services have processes to determine which occupational specialties require bonuses and whether bonus amounts are optimally set, and (3) determines how much flexibility DOD has in managing selected special and incentive pays for officer and enlisted personnel. GAO analyzed service data on bonuses and special and incentive pays, reviewed relevant guidance and other documentation from DOD and the services, interviewed DOD and service officials, and observed two working groups that were determining bonus amounts. DOD contracted for enlistment and reenlistment bonuses for servicemembers totaling $1.2 billion in fiscal year 2010, down 58 percent from fiscal year 2008. Contracted amounts peaked in the Army and the Navy in fiscal year 2008 and declined thereafter; amounts peaked for the Marine Corps and the Air Force in fiscal year 2009 and then declined. From fiscal years 2006 through 2010, the services contracted a total of $11 billion for bonuses, with the Army accounting for 52 percent; the Navy, 24 percent; the Marine Corps, 16 percent; and the Air Force, 9 percent. About $4.5 billion of the $11 billion was contracted for enlistment bonuses and $6.6 billion for reenlistment bonuses. With the exception of the Army, the amounts the services contracted were higher for reenlistment than for enlistment bonuses during this time period. For example, the Army's average enlistment bonus was higher than that of the other services in fiscal years 2006 through 2008, while the Navy's was highest in fiscal years 2009 and 2010. 
On the other hand, the Army's average reenlistment bonus was smaller than those of the other services during this period. The services have processes that include the analysis of data on how difficult it is to recruit and retain personnel in particular occupations, and they use these processes to adjust bonuses, but they do not know whether they are paying more than they need to for these purposes. DOD guidance allows the departments to offer a bonus for any occupation in which they have difficulty recruiting or retaining personnel, thereby allowing them to adjust their policies to changing market conditions. However, though much research has been conducted on bonuses' effects on enlistment and retention, DOD does not know whether the bonus amounts the services offer are optimal. Efforts to develop ways to assess the cost-effectiveness of bonuses have been made by some research organizations and have generated interest at the individual service level, but there has been no coordinated DOD-wide work to facilitate information-sharing among the services on this issue. Without such information-sharing, the services may not be able to fully take advantage of existing and emerging methodologies for assessing whether they are getting the best return on their bonus investments. DOD has begun to increase its flexibility in managing special and incentive pays while consolidating them into eight categories. GAO reviewed 15 of DOD's more than 60 special and incentive pays and found that DOD spent $13.6 billion on those pays during fiscal years 2006 through 2010. For about 30 percent of that amount, DOD was unable to adjust the number of recipients or the pay amounts based on market conditions, because those pays had not yet been consolidated and were established in legislation. DOD's consolidation of special and incentive pays will allow the services more flexibility in managing them. 
However, at present, DOD has not established metrics that will enable it to determine whether this consolidation is resulting in greater flexibility as it transitions to the new categories by fiscal year 2014. As a result, DOD may not be positioned to measure future progress in meeting the intended goal of the consolidation, which is to give the services more flexibility. GAO recommends that DOD (1) coordinate with the services to facilitate discussions on conducting research, as appropriate, to determine optimal bonus amounts and (2) monitor the implementation of its consolidation of special and incentive pays to determine whether it is resulting in greater flexibility and what impact the consolidation is having on DOD's budget. In commenting on a draft of this report, DOD concurred with both recommendations.
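As a rough arithmetic check on the bonus totals summarized above, the rounded service shares can be multiplied back against the $11 billion total; because the percentages are rounded, they slightly overshoot 100 percent:

```python
# Check the reported service shares of total contracted bonuses against
# the $11 billion total. Dollar amounts are the rounded figures quoted
# in the summary (in billions).
total_billions = 11.0
shares = {"Army": 0.52, "Navy": 0.24, "Marine Corps": 0.16, "Air Force": 0.09}

# Rounded shares sum to slightly more than 100 percent.
print(round(sum(shares.values()), 2))  # 1.01

# Implied contracted amount per service, in billions of dollars.
implied = {s: round(total_billions * share, 2) for s, share in shares.items()}
print(implied["Army"])  # 5.72
```

The 1-point overshoot and the implied per-service amounts are consistent with the summary's rounding (e.g., $4.5 billion plus $6.6 billion sums to $11.1 billion, reported as "$11 billion").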
Puerto Rico, with about 3.6 million residents, is the largest U.S. territory. As a territory, Puerto Rico is subject to congressional authority, although Congress has granted Puerto Rico authority over matters of internal governance. Puerto Rico has held four plebiscites intended to determine its preferred status relationship with the United States. The most recent plebiscite, held in November 2012, asked voters in Puerto Rico two questions: (1) whether Puerto Rico should continue its present form of territorial status, and (2) regardless of how voters answered the first question, which non-territorial status option is preferred—statehood, independence, or a sovereign free associated state. For the first question, about 54 percent of voters indicated that Puerto Rico should not continue its present form of territorial status. For the second question, about 61 percent of voters who chose a non-territorial status option chose statehood. The Consolidated Appropriations Act, 2014, includes $2.5 million in funding for objective, nonpartisan voter education about, and a plebiscite on, options that would resolve Puerto Rico’s future political status. The funds are to be provided to the State Elections Commission of Puerto Rico. Congress generally determines whether Puerto Rico is eligible for federal programs on a case-by-case basis, and defines any different treatment in law. For example, federal programs in Puerto Rico may be subject to certain funding or eligibility restrictions. For some programs, current law applies certain limitations or exceptions to Puerto Rico by name. For other programs, the governing statutes refer to the 50 states or the 50 states and the District of Columbia. Where differences are not mandated by law, federal agencies generally treat Puerto Rico the same as the states. Yet characteristics of federal programs in Puerto Rico may differ from those in the states for other reasons. For example, a study by the U.S. 
President’s Task Force on Puerto Rico’s Status found that governments and organizations in Puerto Rico were not applying for and seeking all available federal funds. It also found that a significant amount of the funds available in Puerto Rico is not spent in a timely manner. In addition, Puerto Rico residents generally are exempt from federal taxes on income from Puerto Rico sources. These differences contribute to Puerto Rico and its residents receiving fewer federal payments, and paying less in federal tax, than residents of the states on a per capita basis, as shown in figure 1. Historically, trends in Puerto Rico’s economy have tended to follow those in the rest of the United States. However, Puerto Rico’s latest economic downturn has been longer and more extreme than the mainland U.S. downturn. Specifically, the U.S. economy entered into a recession in December 2007, which ended in June 2009, according to the Business Cycle Dating Committee of the National Bureau of Economic Research. In contrast, Puerto Rico’s recession began in the fourth quarter of 2006, and the economy contracted every fiscal year from 2007 to 2011. After growth of 0.1 percent in fiscal year 2012, the economy is projected to have contracted in fiscal year 2013 by 0.4 percent, according to the Government Development Bank for Puerto Rico. Recently, Puerto Rico’s government has faced various fiscal challenges, including an imbalance between its general fund revenues and expenditures. In fiscal year 2009, Puerto Rico’s fiscal deficit reached a high of $2.9 billion—based on $7.8 billion in revenues and $10.7 billion in expenditures. Persistent deficits have resulted in an increase in Puerto Rico’s public debt, which represents a much larger share of personal income than in any of the states. In February 2014, Puerto Rico’s general obligation bonds were downgraded to speculative—noninvestment—grade by three ratings agencies. Recently, Puerto Rico has taken steps to improve its fiscal position. 
Beginning in 2007, Puerto Rico began to reduce the size of its government workforce. For example, between 2007 and 2009, government employment declined almost 10 percent. However, as of July 2012, government employment still accounted for a larger share of overall employment in Puerto Rico when compared to the states (although government employment as a share of the population older than 15 in Puerto Rico was similar to that in the states). In 2009, a fiscal stabilization plan was put into effect that reduced government spending and increased tax revenues. In April 2013, Puerto Rico enacted comprehensive reform of its largest public employee retirement system, which is funded primarily with budget appropriations from the government’s general fund. The reform was intended to address the retirement system’s deteriorating solvency. Through measures like these, Puerto Rico has reduced its annual deficits. However, the fiscal year 2013 deficit was approximately $1.3 billion, based on projected expenditures of approximately $10 billion. As the Government Development Bank for Puerto Rico notes in its Financial Information and Operating Data Report from October 2013, Puerto Rico’s ability to continue to reduce its deficit will depend in part on its ability to continue increasing revenues and reducing expenditures, which in turn depends on a number of factors, including improvements in economic conditions. Of the 29 selected federal programs we reviewed, statehood would likely affect 11 programs. For 3 other programs, while the programs themselves would likely not change under statehood, eligibility determinations for these programs could be affected indirectly by changes that could occur to benefits in other programs. Statehood would not likely affect the 15 remaining programs. Ultimately, changes to programs under statehood would depend on decisions by Congress and, to some extent, on decisions by federal agencies. 
For example, Congress could enact legislation that creates or maintains certain exceptions for Puerto Rico. Figure 3 shows whether and how statehood would potentially affect the programs we reviewed. Additional details on programs that statehood would likely affect appear in appendix II. The extent to which federal spending would change for some of the programs that would be affected by Puerto Rico statehood depends on various assumptions, such as which program eligibility options Puerto Rico might select, and the rates at which eligible residents might participate in the programs. For example, for the four largest programs for which federal spending would be likely to change under statehood—Medicare, Medicaid, SNAP, and SSI—and for the ninth largest program for which federal spending would be likely to change—CHIP—we used various assumptions to estimate a range of federal spending. Figure 4 below shows the range of estimated federal spending for these programs based on these assumptions, which are described in detail for each program in appendix II. Figure 4 also shows a Federal Highway Administration estimate for federal spending for Federal-Aid Highways, the fifth largest program for which federal spending would be likely to change under statehood. The estimates were developed for a single year in the past, as if Puerto Rico were treated the same as the states in the year specified for each program. For programs other than Federal-Aid Highways, the estimates are in calendar-year terms because the eligibility and other data used to develop the estimates were in calendar-year terms. The estimate for Federal-Aid Highways is in fiscal-year terms. Actual spending in Puerto Rico, to which we compare the estimates, is in fiscal-year terms because the spending data were reported in fiscal-year terms. All the federal revenue sources we reviewed could be affected if Puerto Rico became a state. 
As with our review of programs, we assumed that if Puerto Rico becomes a state, it would be treated as such for purposes of revenue collection. For example, under statehood, Puerto Rico residents would be subject to federal tax on all their income; currently, they are subject to federal tax only on income from sources outside of Puerto Rico. However, for two revenue sources through which Puerto Rico receives revenue not provided to other states—excise taxes and customs duties—whether or how statehood would result in changes would depend on decisions by Congress. Figure 5 shows how the revenue sources we reviewed potentially would change under statehood. Additional details on the two largest revenue sources that would be affected substantially by statehood—individual and corporate income taxes—appear after figure 5 and in appendix III. The extent to which statehood would affect federal revenue depends on various assumptions. For example, for the two largest revenue sources that would be affected substantially by statehood—individual and corporate income taxes—we used various assumptions to estimate a range of federal revenue. The estimate ranges are based on Puerto Rico being treated the same as the states in either 2009 or 2010, based on the year for which the most recent data were available. An example of how assumptions affect the estimates is illustrated by the estimate range for corporate income tax. That estimate is influenced by assumptions on applicable tax rates for businesses with activities in Puerto Rico, the extent of ownership of Puerto Rico businesses by U.S. corporations, and the extent to which U.S. corporations use prior-year losses from their affiliated Puerto Rico businesses to offset their federal taxable income. For example, the low end of the estimate range shown in figure 6 below is based on lower-bound assumptions for applicable corporate income tax rates, upper-bound assumptions for the extent of U.S. 
ownership of Puerto Rico businesses, and the assumption that U.S. corporations would have used prior-year losses of affiliated Puerto Rico corporations to offset their federal taxable income to the maximum extent. The high end of the estimate range shown in figure 6 is based on the upper-bound assumptions for applicable tax rates, lower-bound assumptions for the extent of U.S. ownership of Puerto Rico businesses, and the assumption that U.S. corporations would not have used any prior-year losses of affiliated Puerto Rico corporations to offset their federal taxable income. The estimates for corporate income tax in figure 6 do not take into account any changes in behavior of businesses with activities in Puerto Rico. For example, according to tax policy experts at the Department of the Treasury and the Joint Committee on Taxation, changes in federal income tax requirements under Puerto Rico statehood are likely to motivate some corporations with substantial amounts of income derived from intangible (and therefore mobile) assets to relocate from Puerto Rico to a lower tax foreign location. The extent to which such corporations might relocate from Puerto Rico is unknown. Consequently, we produced an alternative set of corporate income tax revenue estimates to account for some businesses with activities in Puerto Rico potentially relocating under statehood. Accounting for this assumption, in conjunction with the other assumptions described previously, resulted in an estimated range of corporate income tax revenue of -$0.1 billion to $3.4 billion. Statehood could result in dynamic economic and fiscal changes for Puerto Rico, changes that could ultimately impact the level of federal spending in Puerto Rico, and the revenue collected from residents of, and corporations in, Puerto Rico. However, the precise nature of how such changes would affect federal spending and revenue is uncertain. 
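The range construction described above (pairing the assumptions that push the estimate lowest to form one bound, and those that push it highest for the other) can be sketched as follows; all rates and dollar figures are hypothetical placeholders, and the sketch reduces the report's three assumption dimensions to two:

```python
# Sketch of combining bound assumptions into an estimate range, in the
# spirit of the corporate income tax discussion above. All inputs are
# hypothetical placeholders, not figures from the report.

def revenue(base, rate, loss_offset):
    """Tax revenue on a taxable-income base after prior-year loss offsets."""
    return (base - loss_offset) * rate

base = 20.0  # hypothetical taxable income, in billions of dollars

# Low end: lower-bound tax rate, maximum use of prior-year losses.
low = revenue(base, rate=0.20, loss_offset=5.0)
# High end: upper-bound tax rate, no use of prior-year losses.
high = revenue(base, rate=0.35, loss_offset=0.0)

print(round(low, 2), round(high, 2))  # 3.0 7.0
```

Each assumption is set to the extreme that moves the estimate in the same direction, so the two results bracket the outcomes consistent with the stated bounds rather than representing a best guess.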
Because statehood would cause numerous adjustments important to Puerto Rico’s future, it would require careful consideration by Congress and the residents of Puerto Rico. Consequently, statehood’s aggregate fiscal impact would be influenced greatly by the terms of admission, strategies to promote economic development, and decisions regarding Puerto Rico’s revenue structure. As we have reported in the past, the history of statehood admissions is one of both tradition and flexibility. While Congress has emphasized the traditional principles of democracy, economic capability, and the desire for statehood among the electorate, it has also considered potential states’ unique characteristics, including population size and composition, geographic location, economic development, and historical circumstances when making these decisions. Any decision to transition Puerto Rico to statehood in the future will also involve assessing a complex array of similar factors, in addition to economic and fiscal ones. Some factors that could influence changes in federal spending and in revenue for specific programs or types of tax are discussed in appendix II and appendix III of this report. In this section, we discuss general factors that could influence how Puerto Rico statehood could affect future federal spending and revenue. As previously discussed, Puerto Rico’s economy has largely been in recession since 2006. Likewise, Puerto Rico’s unemployment rate has been relatively high, and its labor force participation rate has been relatively low, compared to those of the states. Statehood—and the resultant changes to spending programs in Puerto Rico, and in tax requirements for Puerto Rico residents and corporations—could have wide-reaching effects on Puerto Rico’s economy and employment. Under statehood, Puerto Rico residents would be eligible for the federal earned income tax credit (EITC)—including refundable payments—which is designed to encourage work. 
Also, in the short term, increased federal transfers—such as through SSI benefits, for which Puerto Rico residents would become eligible under statehood—could stimulate Puerto Rico’s economy. However, some Puerto Rico industry group representatives we interviewed worried that the relatively high rate of government transfer payments in Puerto Rico could discourage work. According to the Federal Reserve Bank of New York, such transfer payments equate to roughly 40 percent of personal income, more than double the share in the states. Likewise, the effect of statehood on Puerto Rico migration—and the corresponding effect of that migration on Puerto Rico’s economy and employment—is uncertain. From 2002 to 2012, Puerto Rico’s population decreased by about 5 percent based on U.S. Census Bureau estimates. Migration has been cited as a possible explanation for Puerto Rico’s relatively low labor force participation rate, particularly if those Puerto Rico residents most interested in participating in the labor force are migrating to the states in search of higher-wage employment, leaving behind residents who have relatively less attachment to the labor force. From 2002 to 2012, population in the states increased by 9.1 percent, and decreased in only two states—Rhode Island and Michigan (by 1.5 percent in both states)—based on U.S. Census Bureau estimates. Under statehood, local businesses also could incur higher costs because of additional tax liabilities. As previously discussed, Puerto Rico has run persistent fiscal deficits in recent years, which have increased Puerto Rico’s public debt. As a result, Puerto Rico government-issued debt represents a much larger share of personal income than in any of the states. 
Recently, Puerto Rico has taken steps to improve its fiscal position, including reducing the size of its government workforce and reforming its primary public employee retirement system. However, in February 2014, Puerto Rico’s general obligation bonds were downgraded to speculative—noninvestment—grade by three ratings agencies, in part because of concerns about Puerto Rico’s fiscal position. One factor that may have facilitated Puerto Rico’s ability to issue debt is that the interest on most bonds issued by Puerto Rico’s government, its political subdivisions, and its public corporations generally is not subject to income tax at the federal, state, or local levels. Under statehood, if Puerto Rico were treated like the states, its government-issued debt would no longer enjoy this so-called triple exemption, as income accruing to residents of other states would become taxable at the state and/or local levels. The loss of triple-exempt bond status could result in reduced demand for Puerto Rico’s debt. (The $152 million estimate is subject to a high level of statistical imprecision, with a margin of error of plus or minus 14.5 percent of the estimate itself.) In addition, if Puerto Rico sought to maintain current overall tax burdens for individuals and corporations, it would need to lower its tax rates, which could reduce tax revenue. Under statehood, certain federal programs in Puerto Rico could change substantially if Puerto Rico were treated the same as the states. Likewise, Puerto Rico residents and corporations operating in Puerto Rico would become subject to significant changes in their tax requirements under statehood. Prior bills on Puerto Rico’s status that Congress has considered have included provisions providing for a transition period or plan. Under one approach, if Puerto Rico were to become a state, federal funding would increase incrementally until parity with other states was reached, and federal income tax requirements would be phased in. 
If Congress granted statehood to Puerto Rico, it could decide to establish a similar transition period. In turn, the characteristics and length of time of such a transition period could affect federal spending and revenue during—and beyond—that period. We provided draft sections of this report to the relevant federal program agencies, the Department of the Treasury, and IRS. We also shared a draft of the report with officials from the Government of Puerto Rico and the Resident Commissioner from Puerto Rico (Puerto Rico’s congressionally authorized representative in Washington, D.C.). In total, we sent draft report sections to 16 federal agencies. Six agencies had no comments on their draft report sections. Ten agencies provided technical comments, which we incorporated as appropriate. We also received technical and written comments from the Governor of Puerto Rico and the Resident Commissioner from Puerto Rico. Technical comments were incorporated as appropriate; the written comments are reproduced as appendix V (Governor) and appendix VI (Resident Commissioner) to this report. In his written comments, the Governor of Puerto Rico noted that if we had considered two factors omitted from our estimate of individual income tax revenue under statehood, estimated revenue would have been higher. First, the Governor noted that different federal filing thresholds and tax rates, compared to those for Puerto Rico, would have resulted in more individuals subject to tax and an increased amount of federal taxes paid by individuals. Our individual income tax revenue estimates take these differences into account, as they are based on the federal filing thresholds and federal tax rates. That is to say, we were able to determine which Puerto Rico residents who filed a Puerto Rico tax return for 2010 would have met federal filing thresholds and what tax rates would have applied to their taxable income, if at all. 
The Governor also noted that Puerto Rico does not tax Social Security benefits, which may be taxable at the federal level. In the individual income tax section of appendix III to this report, we note that because Social Security benefits are not included on Puerto Rico tax returns, our estimates do not take taxable Social Security benefits into account, and as a result our estimates could understate individual income tax revenue. In response to our estimate for corporate income tax revenue, the Governor noted that to counter the effect of increased taxes on Puerto Rico businesses upon the imposition of federal taxes, our draft report suggested that Puerto Rico would reduce its corporate tax rate to 3.8 percent to be on par with the average corporate tax rate in the states (state taxes are deductible against corporate income for federal tax purposes). He noted that this assumption is unrealistic given Puerto Rico’s current level of corporate tax rates and Puerto Rico’s current fiscal situation. We based our modeling of corporate income tax revenue under statehood on the assumption that Puerto Rico would lower its corporate tax rates to be more in line with those in the states. However, we used the average effective rate in the states, which is different from a simple marginal rate. Based on this comment, we conducted a sensitivity analysis to determine how our estimates would change if we assumed that the effective rate of Puerto Rico’s corporate income tax under statehood would have been twice as high as the average effective state rate (an effective rate of 7.6 percent). We found that the estimate ranges would not have changed substantially using this alternative assumption. Finally, the Governor noted that the characterization in the draft report of the percent of votes received by statehood in the 2012 plebiscite is inaccurate, and that the report should explain further the structure and outcomes of the plebiscite. 
In response to this comment, we provided additional detail on the number of voters and blank votes for both questions from the plebiscite. Assessing the structure of the plebiscite is outside the scope of this report. In his written comments, the Resident Commissioner for Puerto Rico summarized the central findings of the draft report. He also pointed out some of the uncertainties and limitations inherent in developing estimates for how federal spending and revenue would change if Puerto Rico became a state, which we recognize in the report. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the heads of the relevant agencies for the programs and revenue sources in this report, Puerto Rico’s governor, the Resident Commissioner for Puerto Rico, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-6520 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. The objectives of this report are to evaluate (1) potential changes to selected federal programs and related changes in federal spending, and (2) potential changes in selected sources of federal revenue, should Puerto Rico become a state. We also describe factors under statehood that could influence changes in federal spending and revenue. To evaluate potential changes in selected federal programs under Puerto Rico statehood, we selected programs to review based on three criteria. 
Programs that generally provide funds directly to states and territories, or residents and institutions in the states and territories. Based on this criterion, we excluded certain types of federal spending from our review. Specifically, we excluded spending on the military; international aid and affairs; interest on the national debt; and administrative, operational, procurement, or capital acquisition expenses at federal agencies, including federal employee salaries and retirement compensation. Programs with net outlays of at least $5 billion. We used the Office of Management and Budget (OMB) public budget database from fiscal years 2010 and 2011 to select programs to review. We identified budget accounts with at least $5 billion, and then reviewed the programs within each account to identify those with outlays of at least $5 billion. We defined a program as an organized set of activities with the same objective(s) and funded by the federal government. We focused on programs that are expected to have an ongoing impact on the federal budget, meaning we did not consider temporary funding, such as that provided under the American Recovery and Reinvestment Act of 2009. To further support whether the programs met our selection criteria, we compared programs with total federal outlays of at least $5 billion (from the OMB database) to those described in the Catalog of Federal Domestic Assistance and the Appendix to the Budget of the U.S. Government. Through this process, we identified 27 programs to review. Programs for which federal spending in Puerto Rico differed significantly from spending in a set of comparable states. To identify programs with less than $5 billion in net outlays that may be subject to relatively large spending changes under statehood—such as those providing little or no funding to Puerto Rico—we reviewed federal program spending by state from the Census Bureau’s Consolidated Federal Funds Report for fiscal year 2010. 
We selected five states most similar to Puerto Rico in terms of population and median household income, and identified programs for which the difference in average federal spending between these states and Puerto Rico was at least $100 million. Through this process, we identified two additional programs to review: the Public Housing Operating Fund and the Public Housing Capital Fund. The 29 programs we selected to review accounted for about 86 percent of spending in fiscal year 2010 on federal programs that generally provide funds directly to states and territories, or to residents and institutions in the states and territories. We asked the 12 federal agencies that administer the 29 programs we selected to review our selection methodology and confirm that inclusion of each program was appropriate based on our criteria. The agencies provided additional information and documentation, when necessary. In one instance, a selected program comprised a subset of a larger budget account, and the agency overseeing the program was unable to provide a net outlay figure. To consistently report federal spending across programs, we report obligation amounts for the 29 programs we selected for our review, as obligation amounts were available for all selected programs (see figure 3 earlier in this report). For each selected program, we reviewed federal laws and regulations to determine whether and how statehood might affect funding or other requirements for Puerto Rico. We based our analyses on the assumption that, if it is granted statehood, eventually Puerto Rico would be treated the same as the states. For some programs, current law applies certain limitations or exceptions to Puerto Rico by name. For other programs, the governing statutes refer to the 50 states or the 50 states and the District of Columbia.
For these programs, we assumed that, if Puerto Rico became a state, it would be treated the same as any existing state, either because Congress would amend the statutory limitations and exceptions or they would otherwise not apply. We did not evaluate whether Puerto Rico would be required to be treated the same as the states in the context of any specific program. We confirmed with the relevant agencies whether and how statehood would affect funding or other requirements for Puerto Rico. To evaluate potential changes in federal spending related to changes to federal programs, we evaluated the five largest programs that would be likely to change under statehood. We developed estimate ranges of the potential changes in federal spending for the four largest programs that would be likely to change under statehood: Medicare, Medicaid, the Supplemental Nutrition Assistance Program (SNAP), and Supplemental Security Income (SSI). The Federal Highway Administration developed an estimate on our behalf of potential changes in federal spending for the fifth largest program that would be likely to change under statehood—Federal-Aid Highways. We also developed an estimate of potential changes in spending for the ninth largest federal program that would be likely to change under statehood—the Children's Health Insurance Program (CHIP)—because Puerto Rico receives federal CHIP funding as part of its Medicaid program. The programs for which we developed estimates accounted for about 94 percent of fiscal year 2010 spending on programs that would likely change under statehood. We developed estimate ranges for a single year in the past, as if Puerto Rico had been treated the same as the states in that year. The years of the estimate ranges vary by program and are based on the most recent relevant data when we began our work.
For programs other than Federal-Aid Highways, the estimate ranges are in calendar-year terms because the eligibility and other data used to develop the estimates were in calendar-year terms. The estimate for Federal-Aid Highways is in fiscal-year terms. Actual spending in Puerto Rico, to which we compare the estimates, is in fiscal-year terms because the spending data were reported in fiscal-year terms. To estimate potential changes in federal spending for Medicaid, SNAP, SSI, and CHIP, we contracted with the Urban Institute to conduct portions of the work using two simulation models. We also used aspects of the Urban Institute's simulations in estimating spending for Medicare. The estimates of potential spending changes involve various sources of uncertainty. Except for Federal-Aid Highways, the estimates are based, in part, on sample survey data, which include sampling error. Sample survey data are obtained by following a probability procedure based on the selection of random samples, and each sample is only one of a large number of samples that might have been selected. Since each sample could have provided different estimates, sampling error measures the level of confidence in the precision of a particular sample's results, which we express as a margin of error at the 95 percent confidence level. Unless otherwise indicated, the estimates in this report that used sample survey data have margins of error of plus or minus 7 percent, or less, of the estimates themselves; that is, the resulting intervals would contain the actual value for the populations we analyzed for 95 percent of the samples that could have been selected. There are other sources of uncertainty that are not readily quantifiable. These include the assumptions we used to develop the estimates, such as those concerning which program eligibility rules Puerto Rico would adopt, and the rates at which eligible Puerto Rico residents would participate in the programs.
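As an illustration of how such a margin of error translates into an interval, the following sketch applies the plus-or-minus-7-percent bound to a hypothetical estimate; the function name and the $1,000 million figure are ours, not from the report.

```python
def confidence_interval(estimate, relative_margin=0.07):
    """95 percent confidence bounds when the margin of error is at most
    plus or minus 7 percent of the estimate itself."""
    half_width = relative_margin * estimate
    return estimate - half_width, estimate + half_width

# A hypothetical spending estimate of $1,000 million:
low, high = confidence_interval(1000.0)
```

Under this bound, a $1,000 million estimate implies an interval from roughly $930 million to $1,070 million.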
To some extent, the various scenarios for estimated spending included in this report capture how these assumptions would impact spending. In other instances, there may be sources of uncertainty and dynamic changes to the programs that we could not incorporate into our modeling. These could include further changes in eligibility rules once additional program funding becomes available, the reaction of program beneficiaries to changes in the programs, or congressional action resulting from statehood. To estimate potential changes in federal spending for Medicare for 2010, we estimated spending for the two options through which Medicare beneficiaries can obtain insurance coverage for hospital and medical services—Medicare fee-for-service (Medicare FFS) and Medicare Advantage (MA), the private plan alternative to Medicare FFS—as well as the optional prescription drug benefit. For Medicare FFS, we first calculated average spending for various demographic groups of Medicare beneficiaries in the states, using the Centers for Medicare & Medicaid Services' (CMS) Medicare Current Beneficiary Survey (MCBS) Cost and Use file for 2010, the most recent available at the time we began our work. The MCBS contains information on Medicare beneficiaries, matched to administrative data on actual spending. Using these data, we developed estimates for average Medicare FFS spending by categories of age, gender, disability status, and dual-eligible status—that is, beneficiaries eligible for both Medicare and Medicaid. For some groups of Medicare beneficiaries, the MCBS sample size allowed for finer age and gender group breakdowns; for others, we combined age and gender groups. To identify corresponding groups of Puerto Rico residents by age and gender, we used data from the Census Bureau's Puerto Rico Community Survey (PRCS) three-year sample for 2009-2011, the years closest to the year of the MCBS data we used. We also used estimates of dual-eligible beneficiaries that the Urban Institute developed.
We calculated the number of disabled but non-dual-eligible enrollees by subtracting the number of dual-eligible enrollees and the number of enrollees over age 65 from the total number of enrollees. We applied the average Medicare spending amounts from MCBS respondents in the states for each group or category to the corresponding Puerto Rico beneficiary counts. To account for different health care costs in Puerto Rico relative to the states, we adjusted the wage indices used to calculate spending for Medicare FFS Part A and the Geographic Practice Cost Indices used to calculate spending for Medicare FFS Part B. We also made adjustments to account for the lower Medicare FFS Part B take-up rate and lower utilization rates in Puerto Rico relative to the states. For MA, we used data from the MCBS file to estimate the average cost for Puerto Rico MA enrollees. Because provisions in the Patient Protection and Affordable Care Act (PPACA) are changing the benchmark underlying payments to MA plans in Puerto Rico (regardless of whether Puerto Rico becomes a state), we modeled spending based on two benchmark scenarios: (1) the benchmark generally applicable for Puerto Rico for fiscal year 2014 (147.5 percent), and (2) the benchmark that generally will apply in 2017, when PPACA is fully phased-in (115 percent). Given that these changes in benchmarks could result in Puerto Rico MA enrollees switching to Medicare FFS, we developed estimates based on two MA enrollment scenarios: (1) the percentage of Puerto Rico Medicare beneficiaries enrolled in MA in 2010 (about 64 percent), and (2) the highest MA enrollment percentage in the states (about 42 percent). These changes are the only potential impacts of PPACA we incorporated into our estimates for Medicare. For a description of how PPACA could affect Medicare spending in Puerto Rico, see appendix IV.
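The group-wise calculation described above (average spending per MCBS demographic group applied to the corresponding Puerto Rico beneficiary count, then scaled for local costs) can be sketched as follows; all dollar amounts, counts, and the 0.85 adjustment factor are invented for illustration.

```python
# Average FFS spending per beneficiary (by demographic group, as if from
# MCBS respondents in the states) and Puerto Rico beneficiary counts.
# All figures are invented for illustration.
avg_spending = {
    ("65-74", "female"): 9_000.0,
    ("65-74", "male"): 9_800.0,
    ("75+", "female"): 12_500.0,
}
pr_beneficiaries = {
    ("65-74", "female"): 120_000,
    ("65-74", "male"): 100_000,
    ("75+", "female"): 80_000,
}

def estimate_ffs_spending(avg, counts, cost_adjustment):
    """Sum of (average spending x beneficiary count) over demographic
    groups, scaled by a geographic cost-adjustment factor."""
    return cost_adjustment * sum(avg[g] * counts[g] for g in counts)

total_ffs = estimate_ffs_spending(avg_spending, pr_beneficiaries, cost_adjustment=0.85)
```

The actual estimate used many more groups and separate Part A and Part B adjustments; this sketch shows only the basic multiply-and-sum structure.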
For the Medicare prescription drug benefit, we estimated the number of enrolled beneficiaries by applying the percentage of Puerto Rico beneficiaries who enroll in a benefit plan (77 percent) to our estimates of Puerto Rico Medicare beneficiaries. Our estimates of Puerto Rico Medicare beneficiaries include the estimates of dual-eligible beneficiaries that the Urban Institute developed. We assumed that there would be different costs per person depending on whether an enrolled beneficiary was a dual-eligible beneficiary, a disabled but non-dual-eligible beneficiary, or any other beneficiary (essentially, all other enrolled beneficiaries age 65 or older). Using MCBS data, we estimated average prescription drug benefit costs for each of the three categories of beneficiaries and applied those costs to the number of beneficiaries in each category. We also assumed that the percentage of eligible Puerto Rico Medicare beneficiaries who would have enrolled in the low-income subsidy—which covers all, or a portion of, a beneficiary's prescription drug benefit plan premiums, deductibles, copayments, and other out-of-pocket costs—would have been the same as that for all Medicare beneficiaries (about 77 percent). We assessed the reliability of the PRCS and MCBS data by performing appropriate electronic data checks, comparing MCBS data to administrative data, and by interviewing CMS officials who were knowledgeable about the data. We found the data were sufficiently reliable for the purposes of this report.
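The prescription drug benefit arithmetic reduces to a category-by-category product; the 77 percent plan enrollment rate is from the text above, while the beneficiary counts and average costs below are invented.

```python
PLAN_ENROLLMENT_RATE = 0.77  # share of beneficiaries enrolling in a drug plan (from the report)

def drug_benefit_spending(beneficiaries, avg_cost):
    """Sum over the three beneficiary categories of
    (beneficiaries x enrollment rate x average cost per enrollee)."""
    return sum(
        beneficiaries[c] * PLAN_ENROLLMENT_RATE * avg_cost[c]
        for c in beneficiaries
    )

# Invented counts and average annual costs per enrollee:
spending = drug_benefit_spending(
    {"dual": 200_000, "disabled_non_dual": 50_000, "other_65_plus": 300_000},
    {"dual": 3_000.0, "disabled_non_dual": 2_500.0, "other_65_plus": 1_800.0},
)
```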
To estimate potential changes in federal spending for Medicaid, SNAP, SSI, and CHIP, we contracted with the Urban Institute to conduct portions of the work using two simulation models: (1) the Health Policy Center's American Community Survey Medicaid/CHIP Eligibility Simulation Model (HPC Medicaid/CHIP model), and (2) the Transfer Income Model, Version 3 (TRIM3), which simulates major federal tax and transfer programs, including SNAP, SSI, and Temporary Assistance for Needy Families (TANF). For this work, these models used 2011 PRCS data and other data sources to estimate the effect of program eligibility changes on the number of eligible and enrolled individuals for a selected program and, in certain instances, the associated costs. We chose PRCS as a data source because of its large sample size and detailed information on the respondents' demographics and participation in public assistance programs. 2011 was the most recent available year of PRCS data. We assessed the reliability of these data by reviewing available documentation and conducting reliability tests on the data that we used. We determined that the data were sufficiently reliable for the purposes of this report. We assessed the reliability of the Urban Institute's modeling procedures by reviewing documentation on TRIM3 and the HPC Medicaid/CHIP model and input data sources, reviewing the Urban Institute's internal quality control procedures, and discussing the program rules and underlying assumptions used in the models with staff from the Urban Institute who were responsible for the work provided under our contract. Further, we evaluated the estimates on the basis of substantive significance (rather than statistical significance) by considering their size and the direction of the effect of changes to the programs under statehood. We determined that none of the modeling assumptions compromised the analysis for this report and that the data were sufficiently reliable for our purposes.
Using the HPC Medicaid/CHIP model and TRIM3 to estimate spending changes for these programs required our input on assumptions and on the rules governing federal programs. Therefore, the information presented in this report is attributable only to GAO. Specific steps taken to estimate spending for these programs appear below. To estimate federal Medicaid spending for 2011, the Urban Institute used the HPC Medicaid/CHIP model to estimate (1) the number of individuals who would have been eligible for Medicaid, and (2) the number of eligible individuals who would have enrolled in Medicaid. We then estimated (1) total (federal and Puerto Rico) Medicaid spending, and (2) the federal share of total Medicaid spending. We also estimated the extent to which Puerto Rico's spending on Medicaid would change. The Urban Institute estimated eligibility based on our input for income eligibility assumptions. To determine the most appropriate income eligibility assumptions, we identified federal Medicaid mandatory categories of individuals for states and Puerto Rico's 2011 Medicaid eligibility standards. We asked Puerto Rico officials for input on what optional Medicaid income eligibility standards might be selected under statehood. They told us that it would be difficult to determine what optional coverage groups would be selected, given the significant economic and budgetary constraints Puerto Rico currently faces, and uncertainty around the cost to Puerto Rico of expanding coverage. Ultimately, we chose to model two eligibility scenarios: Assuming Puerto Rico would have covered only mandatory categories of individuals. Under this scenario, Medicaid eligibility would have increased for some categories (such as pregnant women and children). Optional categories (such as childless, non-elderly, non-disabled adults) would no longer be covered. This scenario represents the lower bound of potential federal Medicaid spending under statehood.
Assuming Puerto Rico would have covered mandatory categories of individuals and expanded coverage levels for the optional categories it actually covered in 2011. For certain populations, such as pregnant women, infants, and children, Medicaid eligibility is based on a family's income level as a proportion of a defined poverty level. Although Puerto Rico currently uses its own local poverty level (see the section on Medicaid in appendix II), we assumed that as a state, Puerto Rico would be required to adhere to the same federal poverty guidelines as the 48 contiguous states and the District of Columbia. We based this assumption, in part, on our general assumption that Puerto Rico would be treated in the same manner as the states, and on input from officials with the Department of Health and Human Services' Office of the Assistant Secretary for Planning and Evaluation (ASPE). This office updates and publishes the annual federal poverty guidelines. For other populations, eligibility is based on participation in federal programs, such as SSI, for which Puerto Rico residents would become eligible under statehood. To estimate enrollments, we assumed that all actual Medicaid beneficiaries and individuals estimated to have received SSI and TANF benefits in 2011 would have enrolled. For all other eligible individuals, we decided to apply participation rates observed for actual Puerto Rico Medicaid and CHIP enrollees in the top decile of the distribution of income-to-poverty ratio by subgroup (i.e., a matrix of age group, insurance coverage, and disability status), based on the assumption that newly eligible individuals would be more similar to higher-income eligible individuals than to lower-income eligible individuals. The enrollment estimates are by geographic region in Puerto Rico. The Urban Institute also estimated the number of beneficiaries dually eligible for Medicare and Medicaid.
To estimate total (federal and Puerto Rico) Medicaid spending, we applied annualized per member per month rates paid for different categories of enrolled individuals to estimated enrollments. The annualized per member per month rates we used were those paid by Puerto Rico to its managed care organization for Medicaid enrollees between October 2010 and June 2011 for physical and mental health services. These rates generally varied by geographical area and ranged from about $1,180 to $1,852. For dual-eligible beneficiaries enrolled in Puerto Rico's Platino program—for whom the majority of health care costs are covered by Medicare—the rate was $120. According to CMS officials, the vast majority of beneficiaries enrolled in the Platino program are dual-eligible beneficiaries. Thus, we used the Urban Institute's estimates for dual-eligible beneficiaries as a proxy for the number of Platino enrollees when applying per person costs. To estimate the federal share of total Medicaid spending, we applied a predicted Federal Medical Assistance Percentage (FMAP)—the statutory formula that determines the federal share of Medicaid funding provided to states and territories—to total Medicaid spending. For Puerto Rico, the predicted FMAP was 83 percent. We assumed that the statutory limit on federal Medicaid funding to Puerto Rico would have been removed. We also estimated Puerto Rico's share of total Medicaid spending to show how it would change under statehood. We did not incorporate all aspects of the Medicaid program into our spending model, including the cost of the Medicaid Disproportionate Share Hospital (DSH) program or potential savings resulting from Puerto Rico's participation in the Medicaid drug rebate program. We did not incorporate Puerto Rico's Enhanced Allotment into our model, as it would likely be eliminated under statehood and replaced with the Medicare low-income subsidy for prescription drugs.
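The Medicaid spending steps described above can be sketched as a short calculation; the 83 percent FMAP and the $120 Platino rate come from the text, while the enrollment counts and regional rates are invented (chosen within the reported $1,180 to $1,852 range).

```python
FMAP = 0.83  # predicted federal share for Puerto Rico (from the report)

def medicaid_spending(enrollment, annual_rate):
    """Total spending (annualized per member per month rate x enrollment,
    summed over categories), the federal share under the FMAP, and the
    remainder borne by Puerto Rico."""
    total = sum(enrollment[c] * annual_rate[c] for c in enrollment)
    federal = FMAP * total
    return total, federal, total - federal

# Invented enrollment; rates within the reported range, plus the $120
# Platino rate for dual-eligible beneficiaries.
total, federal, pr_share = medicaid_spending(
    {"region_a": 500_000, "region_b": 400_000, "platino_dual": 200_000},
    {"region_a": 1_400.0, "region_b": 1_700.0, "platino_dual": 120.0},
)
```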
To estimate federal SNAP spending for 2011, the Urban Institute used TRIM3 to estimate (1) the number of household units that would have been eligible for SNAP benefits, (2) the number of eligible household units that would have participated in SNAP, and (3) aggregate SNAP benefits for participating household units. The Urban Institute based its eligibility estimates on program eligibility rules in the states (SNAP is currently unavailable to Puerto Rico residents), including income and resource limits, and rules related to participation in other means-tested programs, such as SSI and TANF. The Urban Institute calculated net income by subtracting various deductions from a household unit’s gross income—such as those for earned income, dependent care expenses, medical care expenses, excess shelter costs, and a standard deduction. Where the rules for allowable deductions differ between (1) the 48 contiguous states and the District of Columbia and (2) other states and territories, the estimates use the rules applicable to the 48 contiguous states and the District of Columbia. The Urban Institute imputed household units’ resources by applying assumed annual rates of return on reported interest, dividends, and rent. To estimate the number of eligible household units that would have participated in SNAP, we directed the Urban Institute to model four different scenarios, based on the following assumptions on household unit definitions and participation rates. Everyone in a household would have filed for SNAP as a single unit, unless the household contained at least one person who received TANF. If the household contained a TANF recipient, it was divided into as many filing units as possible, subject to the requirements involving married couples and children. 
This household unit definition was modeled using (1) national probabilities of SNAP participation, resulting in a household participation rate of 75 percent, and (2) full participation, which occurs in some states. Assuming full participation, all related persons in a household would have filed for SNAP as a single unit. Unrelated individuals and subfamilies would have filed as separate units. Assuming full participation, with households that had more than one potential SNAP unit split into as many filing units as permitted. For all scenarios, household units that reported receiving benefits under Puerto Rico's current federally funded nutrition assistance program were assumed to have chosen to participate in SNAP if they had qualified. To estimate aggregate SNAP benefits for participating household units, each participating household unit's benefit amount was determined by subtracting 30 percent of the household unit's net income from the maximum SNAP allotment for household unit size, using the maximum SNAP allotments for the 48 contiguous states and the District of Columbia. This reduction from the maximum SNAP allotments is made because households are expected to spend 30 percent of their resources on food. We also determined the impact on SNAP benefits of replacing the Aid to the Aged, Blind, and Disabled (AABD) program with the higher-benefit SSI program, since an increase in cash aid could lower a person's SNAP benefits. To estimate federal SSI spending for 2011, the Urban Institute used TRIM3 to estimate (1) the number of individuals who would have been eligible for SSI benefits, (2) the number of eligible individuals who would have participated in SSI, and (3) aggregate SSI benefits for participating individuals. The Urban Institute based its eligibility estimates on program eligibility rules for individuals' age, blindness, or disability status, and income and resource limits.
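The SNAP benefit rule described above (maximum allotment minus 30 percent of net income) can be sketched as follows; the allotment and income figures are invented.

```python
def snap_benefit(net_monthly_income, max_allotment):
    """Monthly SNAP benefit: the maximum allotment for the household
    unit's size minus 30 percent of net income, floored at zero."""
    return max(0.0, max_allotment - 0.30 * net_monthly_income)
```

For example, a unit with $1,000 in monthly net income and a $526 maximum allotment would receive $226; a unit whose 30-percent income share exceeds the allotment would receive nothing.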
To qualify for benefits based on age, an individual must be at least 65 years old. Adults younger than 65 can qualify for benefits based on blindness or a permanent disability that prevents work; children can qualify based on a disability with conditions that severely limit their activities. To determine disability status, doctors examine prospective adult and child beneficiaries. Because the PRCS data do not precisely capture the same criteria that are assessed by doctors, assumptions were required to estimate potential SSI eligibility among the non-elderly. TRIM3 designated an adult as blind or disabled where the survey responses showed (1) that the adult did not work in the prior year or earned income less than the substantial gainful activity limit, and (2) at least one of the following was true: The adult indicated having a physical, remembering, or vision limitation. The adult was between 22 and 61 years old, not a widow, and reported Social Security income. TRIM3 treated children ages 15 and older as adults for the disability eligibility determination since children are asked the same survey questions asked of adults. TRIM3 identified children younger than 15 as potentially disabled if they reported a remembering or vision disability. The model was not used to estimate the number of children younger than age 5 who were potentially disabled, because the remembering limitation question is not asked of them. Instead a sufficient number of children under age 5 were included so that the portion of the total children’s caseload that is under age 5 is the same as in the states. Regardless of age or disability status, individuals must have limited assets and income to be eligible for SSI benefits. TRIM3 imposed the eligibility asset test of $2,000 for a unit with one eligible person and $3,000 for an eligible couple. Asset values were inferred from the level of reported asset-based income (interest, dividend, and rental income). 
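The TRIM3 disability designation for adults described above is, in effect, a two-part boolean rule; the following sketch restates it over hypothetical survey fields, with an invented stand-in value for the substantial gainful activity limit.

```python
SGA_LIMIT = 12_000.0  # stand-in for the annual substantial gainful activity limit (invented)

def trim3_adult_disabled(worked_last_year, earnings, has_limitation,
                         age, is_widow, has_social_security_income):
    """Adult designated blind or disabled if (1) the adult did not work
    in the prior year or earned less than the SGA limit, and (2) the
    adult reported a limitation, or was 22-61, not a widow, and
    reported Social Security income."""
    low_or_no_work = (not worked_last_year) or earnings < SGA_LIMIT
    proxy_evidence = has_limitation or (
        22 <= age <= 61 and not is_widow and has_social_security_income
    )
    return low_or_no_work and proxy_evidence
```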
Adults may qualify either individually or as couples. The simulation model found that 81 percent of eligible adults were either unmarried or married to an ineligible individual. To estimate the number of eligible individuals who would have participated in SSI, we present two scenarios—with participation rates that varied by age group and disability status—assuming that eligible individuals would participate based on (1) national average participation rates, and (2) the average of participation rates for the five states with the highest three-year average poverty rate for 2009 to 2011. For children younger than 5, the data were not sufficient to estimate a participation rate; instead, a sufficient number of children younger than 5 were included so that their share of all eligible children was the same as in the states. Individuals in the simulation determined to be participating in AABD, which SSI would replace, were included as participating in SSI. To estimate aggregate benefits, we calculated each participant's benefit following the SSI program rules. To determine the actual benefit, the maximum SSI benefits (in 2011, $674 for individuals and $1,011 for couples) were reduced based on countable income. In determining countable income, SSI program rules disregard the first $20 of most income per month, plus the first $65 of earned income and 50 percent of any additional earned income. One-third of child support is also disregarded. For an individual with a spouse who is not potentially eligible for SSI, the amount of the spouse's income deemed available to the individual is determined. For children, some income is deemed from their parents. To estimate federal CHIP spending for 2011, the Urban Institute used the HPC Medicaid/CHIP model to estimate (1) the number of individuals who would have been eligible for CHIP, and (2) the number of eligible individuals who would have enrolled in CHIP. We then estimated total (federal and Puerto Rico) CHIP spending.
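The SSI benefit computation described above can be sketched with the 2011 maximums and the standard disregards from the text; this simplified version omits the child support disregard and the deeming rules for spouses and parents.

```python
def ssi_benefit(earned, unearned, couple=False):
    """Monthly SSI benefit for 2011: the maximum ($674 individual,
    $1,011 couple) reduced by countable income, applying the $20
    general disregard and the $65-plus-half earned-income disregard."""
    maximum = 1011.0 if couple else 674.0
    # Apply the $20 general disregard to unearned income first; any
    # unused portion carries over to earned income.
    countable_unearned = max(0.0, unearned - 20.0)
    leftover_disregard = max(0.0, 20.0 - unearned)
    # Disregard $65 of earnings (plus any leftover), then count half
    # of the remainder.
    countable_earned = max(0.0, earned - 65.0 - leftover_disregard) / 2.0
    return max(0.0, maximum - countable_unearned - countable_earned)
```

For example, an individual with $385 in monthly earnings and no unearned income has $150 of countable income ((385 − 65 − 20) / 2), for a $524 benefit.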
The Urban Institute estimated eligibility based on our input on eligibility rule assumptions. To qualify for federal CHIP funding, states’ CHIP cannot cover children who are eligible for Medicaid. In 2011, states were required to provide Medicaid coverage to children with family incomes up to 100 percent and 133 percent of the federal poverty level (FPL), depending on a child’s age. States have discretion in setting CHIP eligibility standards. Forty-five states and the District of Columbia covered children between 200 percent and 300 percent of the FPL in 2011. Given required increases to Medicaid income eligibility limits under statehood, Puerto Rico residents enrolled in CHIP in 2011 would have qualified for Medicaid, but not for CHIP. To draw down federal CHIP funding, Puerto Rico would have needed to raise its CHIP income eligibility standards. When asked what income eligibility rules might be adopted under statehood, officials from Puerto Rico’s Department of Health responded that it would be difficult for Puerto Rico to determine what income eligibility rules would be adopted. Ultimately, we chose to model three eligibility scenarios. Assuming Puerto Rico had opted to cover children up to 300 percent of the FPL. Assuming Puerto Rico had opted to cover children up to 200 percent of the FPL. Assuming Puerto Rico had opted to discontinue its version of CHIP. To estimate enrollments, we followed a process similar to that for Medicaid. We assumed that all actual CHIP beneficiaries and individuals estimated to have received SSI and TANF benefits in 2011 would have enrolled. 
As previously described for Medicaid, for all other eligible individuals, we decided to apply participation rates observed for actual Puerto Rico Medicaid and CHIP enrollees in the top decile of the distribution of income-to-poverty ratio by subgroup (i.e., a matrix of age group, insurance coverage, and disability status), based on the assumption that newly eligible individuals would be more similar to higher-income eligible individuals than to lower-income eligible individuals. The enrollment estimates are by geographic region in Puerto Rico. To estimate total (federal and Puerto Rico) CHIP spending, we applied annualized per member per month rates paid for different categories of enrolled individuals to estimated enrollments. The annualized per member per month rates are the same as for Medicaid. To estimate the federal share of total CHIP spending, we applied a predicted enhanced federal medical assistance percentage (enhanced FMAP) to total CHIP spending. The enhanced FMAP is the statutory formula that determines the federal share of CHIP funding provided to states and territories. For Puerto Rico, the predicted enhanced FMAP was 85 percent. We also estimated Puerto Rico's share of total CHIP spending to show how it would change under statehood. To estimate federal spending on Federal-Aid Highways for fiscal year 2013, we obtained estimates from the Federal Highway Administration (FHWA) on (1) Puerto Rico highway users' expected contribution to the Highway Account of the Highway Trust Fund (the Fund), and (2) Puerto Rico's expected apportionment—a division of authorized highway funding according to statutory formulas. Using these estimates, we determined Puerto Rico's net deficit for Federal-Aid Highways. To estimate Puerto Rico highway users' expected contribution to the Fund for fiscal year 2013, FHWA multiplied Puerto Rico's reported number of gallons of motor fuel consumed on highways for fiscal year 2011 by the applicable federal tax rate.
We confirmed that FHWA calculated Puerto Rico highway users' expected contribution to the Fund with the same process it used for highway users in the states. We did not independently review FHWA's process for estimating state users' contributions into the Fund. However, we reviewed the process in the past, and FHWA made changes to the process as a result of that review. Regarding the motor fuel data collected by Puerto Rico, FHWA officials were unaware of any specific limitations to the data. To estimate Puerto Rico's apportionment, FHWA officials ran Puerto Rico data through a series of formulas on our behalf. Under legislation passed in July 2012, apportionments for the states in fiscal year 2013 are virtually the same as apportionments for fiscal year 2012, which, in turn, were based on apportionments for fiscal years 2009 and 2011. The estimated Puerto Rico apportionment for fiscal year 2009 was calculated using a series of 13 statutory formulas linked to sub-programs. The formulas rely on data elements—referred to as factors—such as total lane miles eligible for Federal-Aid Highways, and vehicle miles traveled on open Interstates. Some factors were unavailable for Puerto Rico and were entered as zero in the calculations. According to FHWA officials, the unavailable data had no effect on the estimated apportionment because of Equity Bonus computations. The Equity Bonus, in effect for fiscal year 2009, guaranteed that each state received at least a share of combined apportionments and High Priority Projects equal to 92 percent of contributions from highway users from that state to the Highway Account of the Fund. Similarly, as stated in prior work, the underlying data and factors are ultimately not meaningful for determining apportionments because they are overridden by other provisions that yield a predetermined outcome—in particular, the Equity Bonus under prior legislation.
The estimated Puerto Rico apportionment for fiscal year 2009 was adjusted to meet the 92 percent Equity Bonus minimum relative rate of return. Given the overriding effect of the Equity Bonus on the estimated Puerto Rico fiscal year 2009 apportionment—and, consequently, the fiscal year 2013 estimated apportionment—we did not verify the reliability of the Puerto Rico data that fed into the apportionment calculations. Additionally, we did not verify that the formulas FHWA used were consistent with the relevant statutes. However, we confirmed that FHWA used the same formulas and process for calculating state apportionments as were used for fiscal year 2009, which, by law, is the basis for fiscal year 2013 apportionments. To evaluate potential changes to selected sources of federal revenue under Puerto Rico statehood, we reviewed federal laws and regulations related to the main sources of federal revenue in 2012—individual income tax (which accounted for 46.2 percent of federal revenue in 2012), employment tax (34.5 percent), corporate income tax (9.9 percent), excise tax (3.2 percent), customs duties (1.2 percent), and estate and gift taxes (0.6 percent). We also estimated potential changes in revenue for individual and corporate income taxes—the two largest revenue sources that would be affected substantially by statehood. As with our estimates of potential changes in federal spending, our estimates of potential changes in federal revenue involve uncertainty. To some extent, the various scenarios for estimated revenue capture how our assumptions would impact revenue. However, there may be sources of uncertainty and dynamic changes in economic activity that would affect revenue that we could not incorporate into our modeling. To estimate potential changes to individual income tax, we obtained data for all individuals who filed a Puerto Rico individual income tax return for tax year 2010, the most recent complete year of tax return data available when we began our work.
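The mechanics of the Federal-Aid Highways estimate (contribution, Equity Bonus floor, and the comparison of apportionment to contribution) can be sketched as follows; the gallon and dollar figures are invented, and the 18.4-cents-per-gallon gasoline rate is used purely as one example of an applicable federal rate (actual contributions reflect several fuel types and rates).

```python
GAS_TAX_PER_GALLON = 0.184  # federal gasoline rate, dollars per gallon, as an example

def highway_contribution(gallons):
    """Expected contribution to the Highway Account: reported gallons of
    motor fuel consumed on highways times the federal tax rate."""
    return gallons * GAS_TAX_PER_GALLON

def apportionment_with_equity_bonus(formula_apportionment, contribution):
    """Equity Bonus floor: the apportionment is raised, if needed, to at
    least 92 percent of highway users' contributions."""
    return max(formula_apportionment, 0.92 * contribution)

# Invented figures: 1 billion gallons; $150 million from the 13 formulas.
contribution = highway_contribution(1_000_000_000)
apportionment = apportionment_with_equity_bonus(150_000_000.0, contribution)
net_position = apportionment - contribution  # negative when users pay in more
```

In this sketch the Equity Bonus floor (92 percent of a $184 million contribution) overrides the $150 million formula result, illustrating why the underlying factors did not drive the outcome.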
We obtained these data from Puerto Rico's Department of Internal Revenue. The 2010 Puerto Rico individual income tax return generally includes information comparable to the federal individual income tax return. However, it does not include some items that are included on the federal return, and Puerto Rico law defines certain items differently. According to Puerto Rico officials, variations between the two returns include the following: Puerto Rico does not tax income from Social Security benefits or unemployment compensation. Thus, these items are not included on the Puerto Rico return. Under statehood, these forms of income would be subject to federal income tax. Because we excluded these items, our estimates of aggregate individual income tax revenue under statehood could be understated. Winnings from the Lottery of Puerto Rico and racetracks are exempt from Puerto Rico income tax. Under statehood, this income would be subject to federal income tax. Excluding these items could have resulted in understated estimates. Some federal income tax deductions—such as for taxpayers and their spouses who are blind—and some tax credits (such as for qualified expenses paid to adopt an eligible child) have no equivalent under Puerto Rico income tax law and therefore are not reported on Puerto Rico returns. Excluding these items could have resulted in overstated estimates. Puerto Rico defines short-term capital gains as those from the sale or exchange of capital assets held for 6 months or less. In comparison, federal tax law defines short-term capital gains as those from capital assets held for one year or less. Under the federal income tax, short-term gains are taxed as ordinary income, at rates that may be higher than those at which long-term gains are taxed for some taxpayers. Because we used capital gain information as reported on the Puerto Rico returns, our estimates may be understated.
The Puerto Rico tax return does not distinguish between qualified and ordinary dividends. Qualified dividends generally are subject to a lower federal income tax rate than are ordinary dividends. For our estimate, we assumed that Puerto Rico qualified dividends would have comprised the same percentage of total dividends (74 percent) as for dividends reported on federal income tax returns in 2010. The federal tax system generally allows taxpayers to carry back and carry forward net operating losses for 2 and 20 years, respectively; in contrast, Puerto Rico only allows net operating losses to be carried forward for 10 years. Consequently, Puerto Rico filers might have been able to reduce their federal tax liabilities to a greater extent than observed on the Puerto Rico tax returns we used. We used the National Bureau of Economic Research's TAXSIM program—which models U.S. federal and state income tax systems—to estimate the aggregate federal income tax liability for 2010, as if each Puerto Rico individual income tax filer had filed a U.S. individual tax return per U.S. tax law as of January 2, 2013. We also estimated payments in excess of tax liability for the three largest refundable tax credits: the American Opportunity Tax Credit (AOTC), child tax credit (CTC), and earned income tax credit (EITC). These credits accounted for 94 percent of obligations from refundable credits in fiscal year 2012. In addition to the variations between Puerto Rico and federal tax returns described above, the estimates are based on the following assumptions: Puerto Rico filers would not have changed their behavior related to work, investment, or income reporting as a result of the imposition of federal tax requirements. All filers who would have been eligible for the refundable credits would have claimed them. Puerto Rico residents' compliance with tax laws would have remained constant under statehood. Different assumptions would have resulted in different estimates.
For example, some Puerto Rico residents who decided not to file a Puerto Rico return might have filed a federal return in order to receive payment from one or more of the refundable tax credits, had they been eligible. In addition, the Joint Committee on Taxation has noted that taxpayer compliance would likely increase under statehood because the federal Internal Revenue Service (IRS) has relatively more resources to enforce tax laws than does Puerto Rico’s Department of Internal Revenue. Under statehood, Puerto Rico filers may report their income at higher levels of compliance as a result. We also developed an assumption to account for the possibility that Puerto Rico could change its own local income tax rates under statehood. Puerto Rico’s local income tax rates would be relatively high compared to those of the states. For example, the highest marginal tax rate in Puerto Rico for 2010 was 33 percent. In comparison, the 2010 highest marginal tax rate in the states was 11 percent (Hawaii and Oregon). How Puerto Rico’s government would respond to the imposition of federal income taxes is unknown. However, one possibility is that it would reduce its income tax rates to be more in line with those from other states. Puerto Rico’s equivalent of a state income tax rate is relevant to estimates of aggregate tax liability and refundable credit payments because some filers would be able to deduct state and local taxes paid on their federal returns. Accordingly, we developed an alternate scenario for estimating aggregate tax liability and refundable credit payments based on Puerto Rico reducing its income tax rates. Under this alternative scenario, we imputed amounts for the deduction for state and local taxes paid (using reported data from IRS’s Statistics of Income program for 2010) based on the national average deduction as a percentage of adjusted gross income (3.3 percent). 
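The imputation in the alternate scenario can be sketched as follows. The 3.3 percent figure is the national average deduction rate cited above; the filer's income amount is hypothetical, and per-filer mechanics are our illustration of the approach, not GAO's actual implementation.

```python
# Sketch of the alternate-scenario imputation: impute each filer's
# deduction for state and local taxes paid as the national average
# deduction share of adjusted gross income (3.3 percent, per the text).
SALT_SHARE_OF_AGI = 0.033

def impute_salt_deduction(adjusted_gross_income):
    """Return an imputed state and local tax deduction for a filer."""
    return SALT_SHARE_OF_AGI * adjusted_gross_income

# Hypothetical filer with $50,000 of AGI:
print(round(impute_salt_deduction(50_000), 2))  # 1650.0
```

The imputed deduction then reduces federal taxable income for itemizing filers, which is why the scenario matters for aggregate tax liability and refundable credit estimates.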
To determine the amount of federal income tax that Puerto Rico residents actually paid for 2010, we used data reported on Puerto Rico tax returns. We used these data because, although IRS publishes data on taxes collected by state (and for Puerto Rico), the amounts for individual and employment taxes are combined. According to IRS officials, the agency cannot separate the amounts for these two types of taxes at the state level. Instead, we used information reported on the Puerto Rico tax returns as a proxy for the amount of federal income tax paid. Puerto Rico allows a credit for taxes paid to the United States, its possessions, and foreign countries. According to officials from Puerto Rico's Department of Internal Revenue, most of these taxes would have been paid to the United States. As a result, we used the aggregate tax amount that taxpayers reported in calculating the credit as the upper bound of federal income tax that would have been paid for 2010. To estimate potential changes to corporate income tax revenue, we obtained data on net operating income or losses, losses carried forward from prior years, and credits for taxes paid to the United States for every entity that filed a business income tax return for tax year 2009, the most recent complete year of tax return data available when we began our work. We obtained these data from Puerto Rico's Department of Internal Revenue. We focused on these items because they are the best available proxies for the income and losses that would be taxed under the federal corporate income tax if Puerto Rico were to become a state. Net operating income or losses reported on Puerto Rico tax returns are computed in a manner broadly similar to how they are computed on federal returns (although the manner in which that income is taxed, if at all, can differ). We divided the Puerto Rico business entities into three categories based on the type of tax returns they filed.
These categories were: regular corporations, which filed the standard corporate income tax return; regular partnerships, which (in 2009) were subject to an entity-level income tax and filed returns largely identical to the regular corporate income tax return; and exempt businesses, which had been granted partial or full exemptions of their business income under one of Puerto Rico's tax incentive laws and filed special tax returns. Within the Puerto Rico tax return data, we could not always determine whether the filing entities were (1) branches of other corporations, (2) subsidiaries of other corporations, or (3) separate Puerto Rico entities. Likewise, officials from Puerto Rico's Department of Internal Revenue told us that the data would not provide sufficient or reliable information on the country of incorporation for any of the filing businesses or for their parent corporations. Consequently, we made a range of assumptions regarding the percentage of the filing entities' income attributable to either (1) branches or subsidiaries that would have been included in the consolidated federal corporate income tax return of a U.S. corporation, or (2) corporations that would have been taxed as separate entities under statehood. These distinctions mattered in terms of which tax rates we applied when making our estimates and how we treated accumulated losses. In addition, we did not have data for the amount of state and local income taxes that the filing entities would have paid in Puerto Rico if it had been a state. As a result, we needed to estimate these amounts, because they represent an important deduction under the federal corporate income tax. As with the individual income tax, Puerto Rico's corporate income tax rates are relatively high compared to those in the states. For example, Puerto Rico's highest marginal tax rate in 2010 for regular corporations was 19 percent; the highest corporate tax rate in the states in 2010 was 12 percent (Iowa).
How Puerto Rico’s government would respond to the imposition of federal corporate income tax is unknown; however, if Puerto Rico were placed in the same fiscal relationship to the federal government as the 50 states, it might reduce its rates to be more in line with those of other states. Consequently, we assumed that under statehood, the effective rate of Puerto Rico’s income tax would be similar to the average effective rate for income taxes levied in the states (the rate for profitable corporations was 3.8 percent of net income; for corporations with losses it was -1.0 percent). We used data compiled by IRS to estimate this average rate for the states. We also needed to make assumptions regarding which federal tax rates would have applied to these entities’ net income under statehood. In the case of corporations taxed as separate entities, we assumed an effective tax rate falling within a broad range (from 15 percent to 35 percent) around the average effective tax rate that U.S. corporations paid for tax year 2009. We used this range to reflect the possibility that the tax attributes of the typical corporation operating in Puerto Rico could have differed from those of the typical U.S. corporation. On the advice of tax experts from the Joint Committee on Taxation, we used a different approach to determine tax rates for entities included in the consolidated returns of controlled groups of U.S. corporations. For these corporations, the applicable rate of tax depended not only on the Puerto Rico entities’ net income, but also on the income and losses of other group members, and on the credits earned by the group as a whole. For these entities, we applied the marginal federal tax rate for the consolidated group to the net income (or losses) that the Puerto Rico entity would have added to the group’s tax return.
For entities in the financial services and social services industries, we applied the full 35 percent corporate marginal tax rate based on the assumption that these entities would not have qualified for the domestic production activities deduction. For all other corporations, we reduced the marginal rate to 31.85 percent to reflect the effect of this deduction. We estimated tax liabilities both before and after applying prior-year losses to offset income from 2009. We did so because the initial effect that these prior-year losses would have had on tax revenue may not have been representative of their effects over a longer time period. In the first year of statehood, when Puerto Rico subsidiaries of U.S. corporations first become subject to federal tax and are consolidated into their parent corporations’ tax returns, large portions of their losses could be used to offset the taxable income reported on those returns, leaving only smaller amounts (or newly generated losses) available to offset income in subsequent years. We also made assumptions to account for the potential relocation, under statehood, of businesses with activities in Puerto Rico. Tax experts at the U.S. Department of the Treasury and the Joint Committee on Taxation suggested that the changes in tax treatment that would occur under statehood likely would motivate some businesses to move their operations from Puerto Rico to lower-tax foreign locations—particularly those with substantial amounts of income derived from intangible (and therefore mobile) assets. For 2009, exempt corporations in the pharmaceutical and the medical equipment and supplies industries accounted for over 70 percent of the net income (and about 20 percent of accumulated losses) of the full population of exempt corporations. In addition, other industries with potential income from intangible assets accounted for significant shares of total net income. 
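The reduced marginal rate used above can be checked arithmetically: 31.85 percent is exactly the 35 percent corporate rate applied to income net of a 9 percent domestic production activities deduction. The derivation is our illustration (the 9 percent share is our inference from the figures in the text, which states only the resulting rates).

```python
# Check that the reduced marginal rate equals the 35 percent corporate
# rate applied after a 9 percent domestic production activities
# deduction: 0.35 * (1 - 0.09) = 0.3185, i.e., 31.85 percent.
CORPORATE_RATE = 0.35
DPAD_SHARE = 0.09  # deduction as a share of qualifying income (assumed)

reduced_rate = CORPORATE_RATE * (1 - DPAD_SHARE)
print(round(reduced_rate, 4))  # 0.3185
```

Entities assumed ineligible for the deduction (financial services and social services) keep the full 35 percent rate, so the deduction's only effect in the estimates is this uniform rate reduction for the remaining corporations.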
Using the other assumptions described above, we produced two sets of estimates to account for the potential relocation, under statehood, of businesses with activities in Puerto Rico. The first set of estimates assumes that all filing businesses would have maintained their activities in Puerto Rico. The second set of estimates assumes that (1) all filing businesses in the pharmaceuticals and the medical equipment and supplies industries would have relocated away from Puerto Rico, and (2) other filing businesses would have maintained their activities in Puerto Rico. To determine the amount of federal corporate income tax that entities with activities in Puerto Rico actually paid in 2009, we used data that U.S. corporations reported to IRS on Form 1118 on income they received in 2009 from their Puerto Rico branches or subsidiaries. To estimate the amount of tax that would have been paid, we applied a tax rate of 31.85 percent (the 35 percent corporate tax rate reduced to account for the domestic production activities deduction) to the remaining income. Separately, published IRS data show that the agency collected about $145 million (net of refunds) in business income taxes from entities in Puerto Rico in fiscal year 2009. However, this amount included taxes collected from any tax year, and we could not determine whether any of the amounts collected overlapped with amounts we estimated based on Form 1118 (any taxes paid by businesses incorporated in Puerto Rico on their U.S.-source income would not overlap with those amounts). Consequently, we did not include any of the $145 million in our estimate of the amount paid in corporate income tax by entities with activities in Puerto Rico. We took several steps to assess the reliability of the Puerto Rico tax return data we used for our individual and corporate income tax estimates.
For example, to identify possible outliers that could reflect data errors, we checked maximum and minimum amounts reported for each tax return line item we used. We also discussed the data with officials from Puerto Rico’s Department of Internal Revenue, and, in some cases, adjusted the data to address errors and inconsistencies. Based on our assessment, we determined that the data were sufficiently reliable for our purposes. We also discussed our methodology for estimating tax revenue with tax experts from the Department of the Treasury and the Joint Committee on Taxation, who generally agreed with our estimation approaches. To identify factors under statehood that could influence changes in federal spending and revenues, we reviewed economic data from the Puerto Rico government and reports on the Puerto Rico economy, such as those from the Federal Reserve Bank of New York and the Congressional Budget Office. We also interviewed officials from the current and past Puerto Rico government administrations and Puerto Rico business associations representing large economic sectors in Puerto Rico to obtain their views on the potential impacts of statehood on Puerto Rico’s economy and public finances. We conducted this performance audit from June 2012 to March 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Patient Protection and Affordable Care Act (PPACA), enacted in March 2010, makes substantial changes to Medicare and Medicaid, as well as other components of the federal budget. 
There are significant uncertainties surrounding the effects of PPACA on health care spending and on other factors that influence future health care costs more generally—such as how the development and deployment of medical technology, future policy decisions, and cost and availability of insurance affect growth in per-capita health care spending. These factors could influence our estimates of federal spending for Medicare, Medicaid, and the State Children’s Health Insurance Program (CHIP) under Puerto Rico statehood. Below we summarize selected PPACA provisions that may affect federal Medicare, Medicaid, and CHIP spending in Puerto Rico. PPACA is projected to decrease direct Medicare spending by almost $400 billion from fiscal years 2010 to 2019, according to the Congressional Budget Office (CBO).
The following table summarizes selected PPACA provisions that have affected, or could potentially affect, federal Medicare spending in Puerto Rico. For each entry, the table lists the provision (with legal citation), its effective date, and its potential effects on spending.

Medicare Advantage (Medicare Part C). Provision: Changes the benchmarks underlying payments to Medicare Advantage (MA) plans to align more closely with Medicare FFS spending. The new benchmarks will be phased in from 2012 to 2017 and blended with the old benchmarks. In 2017, county benchmarks will be one of four values: 95 percent, 100 percent, 107.5 percent, or 115 percent of Medicare FFS spending. Benchmarks could be increased for certain plans if they are new, demonstrate indicators of plan quality, or have low enrollment. PPACA, § 3201 (as amended by HCERA, § 1102) (codified at 42 U.S.C. § 1395w-23(n)). Effective date: Phased in from 2012 to 2017. Potential effects on spending: This provision is estimated to reduce payments to MA plans by about $136 billion from fiscal years 2010 to 2019. These reductions could result in reduced benefits and enrollments. When fully phased in by 2017, benchmarks for Puerto Rico generally will be 115 percent of Medicare FFS spending plus any quality bonus payments. In the long term, the reduction will result in plans receiving lower payments. Given the high enrollment in MA in Puerto Rico, these changes could significantly affect Medicare spending.

Medicare prescription drug benefit (Medicare Part D). Provision: Higher premiums will be charged for beneficiaries who exceed certain income thresholds. PPACA, § 3308 (codified at 42 U.S.C. § 1395w-113(a)(7)). Effective date: January 1, 2011. Potential effects on spending: This provision is estimated to provide an additional $10.7 billion in Medicare funding from fiscal years 2010 to 2019. Although Puerto Rico could benefit from additional funding, relatively few Puerto Rico Medicare beneficiaries may end up paying the higher premiums, given relatively low incomes in Puerto Rico. This provision would not be affected by statehood.
Provision: Discounts and additional subsidies must be provided to Part D beneficiaries who purchased covered drugs during the coverage gap, or “donut hole.” Beneficiaries who already receive assistance with costs in the coverage gap would not benefit from this provision. In addition, one-time payments of $250 were provided to certain individuals who incurred costs for covered Part D drugs exceeding the coverage limit in 2010. PPACA, §§ 3301, 3315 (as amended by HCERA, § 1101) (codified at 42 U.S.C. §§ 1395w-152(c), 1395w–153, 1395w–114a). Potential effects on spending: This provision is expected to increase Medicare spending by $42.6 billion from fiscal years 2010 to 2019. As of March 2013, Part D beneficiaries in Puerto Rico had saved $143 million under this provision, according to CMS. This provision would not be affected by statehood.

Unless otherwise noted, projections are from Congressional Budget Office, H.R. 4872, Reconciliation Act of 2010 (Final Health Care Legislation) (Washington, D.C.: Mar. 20, 2010). The donut hole refers to the coverage gap in standard Part D plans: for 2013, the gap began once covered drug costs exceeded $2,970 and ended when out-of-pocket costs reached $4,750, after which the plan covered most costs.

CBO estimated that PPACA will increase federal spending on Medicaid and CHIP by $642 billion over fiscal years 2012 to 2022. The following table summarizes selected PPACA provisions that have affected, or could potentially affect, federal Medicaid and CHIP spending in Puerto Rico.

Provision: Provides that Medicaid payments to primary care physicians for services provided in 2013 and 2014 will not be less than the greater of the Medicare rates in those years or the payment rates that would be applicable in those years using the 2009 Medicare physician fee schedule conversion factor. The federal government must provide a 100 percent match for any increased payments.
The territories, however, are not required to provide these increased payments to primary care physicians, nor are they eligible for this enhanced federal match. HCERA, § 1202 (codified at 42 U.S.C. §§ 1396a(a)(13)(C), 1396d(dd), 1396u-2(f)). Effective date: January 1, 2013. Potential effects on spending: This provision is expected to increase federal Medicaid spending by $8.3 billion. Puerto Rico is not eligible for this enhanced federal match. Under statehood, Puerto Rico would be required to make these increased payments to primary care physicians and would receive an enhanced match for qualified payments, increasing federal and Puerto Rico Medicaid spending.

Provision: Medicaid DSH payments will be reduced under a specified methodology for 2014 through 2020. PPACA, §§ 2551, 10201(e)(1)(B) (as amended by HCERA, § 1203) (codified at 42 U.S.C. § 1396r-4(f)(7)). Effective date: October 1, 2013. Potential effects on spending: This provision is expected to result in $14.1 billion in savings to the federal government from fiscal years 2014 to 2019. Since Puerto Rico does not qualify for a DSH allotment as a territory, there is no direct effect from this provision on actual federal spending in Puerto Rico. Under statehood, Puerto Rico would become eligible for a DSH allotment; however, according to CMS officials, the methodology for calculating that allotment is unclear because a state’s allotment is based on its prior-year allotment.

Provision: Beginning October 1, 2015, the enhanced FMAP for CHIP will increase by 23 percentage points, not to exceed 100 percent, which will continue until September 30, 2019. PPACA, §§ 2101(a), 10203(c)(1) (codified at 42 U.S.C. § 1397ee(b)). Effective date: October 1, 2015. Potential effects on spending: Under statehood, the increased enhanced FMAP may affect the level of CHIP coverage that Puerto Rico would select.

Unless otherwise noted, projections are from Congressional Budget Office, H.R. 4872, Reconciliation Act of 2010 (Final Health Care Legislation) (Washington, D.C.: Mar. 20, 2010). The Bipartisan Budget Act of 2013 amended the reduction of Medicaid DSH payments.
Specifically, it delayed the reductions for two years until October 1, 2015 and doubled the reduction that otherwise would have applied in that year. Additionally, it added another special rule for calculating Medicaid DSH allotments in 2023. Pub. L. No. 113-67, § 1204, 127 Stat. 1165, 1199. In addition to the individual named above, key contributors to this report were Jeff Arkin, Assistant Director; Susan T. Anthony; Benjamin Bolitzer; Gerardine Brennan; Christine Brudevold; Virginia (Jenny) Chanley; Steve Cohen; James C. Cosgrove; Kevin Daly; Bertha Dong; Deirdre Duffy; Elizabeth Fan; Ellen Grady; Shirley Jones; Michael Kendix; Kathleen M. King; Laurie King; Hayley Landes; Kathryn Larin; Dan Meyer; Donna Miller; John Mingus; Meredith Trauner Moles; Ruben Montes de Oca; Edward Nannenhorn; Jeffrey Niblack; Keith O’Brien; Rhiannon Patterson; Amy Radovich; Robert Robinson; Erin Saunders Rath; Cherié Starck; Andrew J. Stephens; Lindsay Swenson; Hemi Tewarson; Daniel Webb; Monique B. Williams; James Wozny; Robert Yetvin; and Carolyn L. Yocom.
Puerto Rico has access to many federal programs and is subject to certain federal tax laws; however, for some programs and for some aspects of tax law, Puerto Rico is treated differently than the states. Options for Puerto Rico's political status include statehood. GAO was asked to review potential fiscal implications for federal programs if Puerto Rico were to become a state. This report examines potential changes to selected federal programs and related spending changes, and changes to selected federal revenue sources that would be expected should Puerto Rico become a state. This report also discusses economic and fiscal factors under statehood that could influence changes in spending and revenues. To evaluate potential changes to selected federal programs and revenue sources, GAO reviewed federal laws and regulations and interviewed federal and Puerto Rico agency officials. To discuss factors that could influence changes in spending and revenue, GAO reviewed economic data from Puerto Rico's government and interviewed officials from the current and past Puerto Rico government administrations. Of the 29 federal programs GAO reviewed (which accounted for about 86 percent of federal program spending for states or their residents in fiscal year 2010), statehood would likely affect 11 programs. For 3 other programs, while the programs themselves would likely not change under statehood, eligibility determinations for these programs could be affected indirectly by changes that could occur to benefits in other programs. Statehood would not likely affect the 15 remaining programs. See figure below. The extent to which federal spending would change for some of the programs affected by Puerto Rico statehood depends on various assumptions, including the program eligibility options Puerto Rico might select and the rates at which eligible residents might participate in the programs.
For example, for the four largest programs for which federal spending likely would change under statehood—Medicare, Medicaid, the Supplemental Nutrition Assistance Program (SNAP), and Supplemental Security Income (SSI)—GAO used various assumptions to estimate the range of potential effects on federal program spending. The estimated ranges for the four programs, as described below, are based on Puerto Rico being treated the same as the states in either 2010 or 2011, based on the year for which GAO had the most recent data. Medicare: In fiscal year 2010, actual federal Medicare spending in Puerto Rico was $4.5 billion; if Puerto Rico had been a state in calendar year 2010, estimated federal spending would have ranged from $4.5 billion to $6.0 billion. The Medicare estimates take into account certain changes under the Patient Protection and Affordable Care Act occurring after 2010 that would reduce spending. Also, the Medicare estimates depend on the estimates for Medicaid, as some individuals are eligible for both programs. Medicaid: In fiscal year 2011, actual federal Medicaid spending in Puerto Rico was $685 million; if Puerto Rico had been a state in calendar year 2011, estimated federal spending would have ranged from $1.1 billion to $2.1 billion. The Medicaid estimates do not take into account the cost of nursing home and home health services in Puerto Rico due to the lack of available cost data, and because Puerto Rico lacks an infrastructure of nursing home facilities, according to Centers for Medicare & Medicaid Services officials. If these services became available, Medicaid spending would likely increase. SNAP: In fiscal year 2011, actual federal spending for a similar program in Puerto Rico was $1.9 billion; if Puerto Rico had been a state in calendar year 2011, residents would have been eligible for SNAP, and estimated federal spending would have ranged from $1.7 billion to $2.6 billion. 
One reason the low end of the estimate range is less than actual spending is that participants' benefits would be reduced because of benefits received from SSI, for which Puerto Rico residents would newly qualify. SSI: In fiscal year 2011, actual federal spending for a similar program in Puerto Rico was $24 million; if Puerto Rico had been a state in calendar year 2011, residents would have been eligible for SSI, and estimated federal spending would have ranged from $1.5 billion to $1.8 billion. All the federal revenue sources GAO reviewed—individual and corporate income taxes, employment tax, excise tax, estate and gift taxes, and customs duties—could be affected if Puerto Rico became a state. For example, under statehood, Puerto Rico residents would be subject to federal tax on all their income; currently, they are subject to federal tax only on income from sources outside of Puerto Rico. Also, some sources of income, such as pension income, are taxed differently in Puerto Rico than in the states. As a result, for 2010, Puerto Rico filers' adjusted gross income for federal tax purposes would have been higher than that for Puerto Rico tax purposes. For some revenue sources, the extent to which federal revenue would change depends on various assumptions. For example, for the two largest revenue sources that would be affected substantially by statehood—individual and corporate income taxes—GAO used various assumptions to estimate a range of federal revenue. The estimate ranges, as described below, are based on Puerto Rico being treated the same as the states in either 2009 or 2010, based on the year for which GAO had the most recent data. Individual income tax: In 2010, Puerto Rico taxpayers reported paying $20 million to the United States, its possessions, or foreign countries. According to officials from Puerto Rico's Department of Internal Revenue, most of these payments would have been to the United States.
If Puerto Rico had been a state in 2010, estimated individual income tax revenue from Puerto Rico taxpayers would have ranged from $2.2 billion to $2.3 billion (after accounting for estimated payments in excess of tax liability from refundable tax credits, such as the earned income tax credit). Corporate income tax: In 2009, U.S. corporations paid an estimated $4.3 billion in tax on income from their affiliates in Puerto Rico. Most of this amount was from an unusually large amount of dividends repatriated from Puerto Rico (compared to amounts repatriated in earlier years or in 2010). Absent that spike in dividends, the federal taxes these corporations would have paid for 2009 would have been about $1.4 billion. If Puerto Rico had been a state in 2009, estimated corporate income tax revenue from businesses that filed a Puerto Rico tax return for that year (or their parent corporations in the United States) would have ranged from $5.0 billion to $9.3 billion. The low end of this range assumes that U.S. corporations would have used prior-year losses of affiliated Puerto Rico corporations to offset their federal taxable income to the maximum extent (leaving only smaller or newly generated losses available to offset income in subsequent years), among other assumptions. However, this range does not take into account any behavioral changes of businesses with activities in Puerto Rico. For example, according to tax policy experts at the Department of the Treasury and the Joint Committee on Taxation, changes in federal income tax requirements under statehood would likely motivate some corporations with substantial amounts of income derived from intangible (and therefore mobile) assets to relocate from Puerto Rico to lower-tax foreign locations. The extent to which such corporations might relocate from Puerto Rico is unknown.
Consequently, GAO produced an alternative set of revenue estimates to account for some businesses with activities in Puerto Rico potentially relocating under statehood: this range was -$0.1 billion to $3.4 billion. The low end of this range is negative because U.S. corporations would have used their Puerto Rico affiliates' prior-year losses to reduce their taxes to such an extent that they would have more than offset the positive tax amounts that other corporations continuing to operate in Puerto Rico under statehood would have paid. Puerto Rico faces various economic and fiscal challenges that could affect how federal spending and revenue would change under statehood. For example, its economy has largely been in recession since 2006, and its levels of employment and labor force participation are low compared with those of the states. Persistent deficits have resulted in an increase in Puerto Rico's public debt, which represents a much larger share of personal income than in any state (and in February 2014, Puerto Rico's general obligation bonds were downgraded to speculative—noninvestment—grade by three ratings agencies). Puerto Rico has taken recent steps to improve its fiscal position, such as reducing its government workforce and reforming its largest public employee retirement system. Changes in federal program spending and federal tax law under statehood could lead to economic and fiscal changes of their own in Puerto Rico, which in turn could have a cascading effect on federal spending and revenue levels. However, the precise nature of such changes is uncertain. Because statehood would cause numerous adjustments important to Puerto Rico's future, it would require careful consideration by Congress and the residents of Puerto Rico. Consequently, statehood's aggregate fiscal impact would be influenced greatly by the terms of admission, strategies to promote economic development, and decisions regarding Puerto Rico's government revenue structure. 
GAO is not making recommendations. Federal agency and Puerto Rico government officials reviewed GAO's draft report; their comments were incorporated as appropriate. To view the Spanish translation of this highlights page, please see GAO-14-301.
Personnel from each of the services utilize a variety of PPE based on factors such as the operational environment, job description or occupation, and commander discretion. For example, Army and Marine Corps ground combat personnel utilize soft and hard body armor designed to protect against specific small arms, fragmentation, and other unconventional threats, such as improvised explosive devices. Likewise, personnel with aviation-based occupations or explosive ordnance disposal responsibilities, and those operating in extreme climates or maritime environments, have PPE options tailored to their specific assignments. During ground combat operations in Iraq and Afghanistan in the 2000s, Soldiers and Marines typically wore tactical vests or plate carriers with hard armor ballistic inserts, a combat helmet, and other miscellaneous items such as eye protection and gloves. This PPE, added to the other items that personnel typically carry or wear in operational environments (weapon systems, food and water, communications equipment, and other items), cumulatively represents the total load burden on personnel. The total load varies to some degree between the Army and the Marine Corps, and the services use specific load categories for mission-planning purposes. For example, the Army uses three combat loads: fighting (lightest), approach march (mid), and emergency approach march (heaviest). These categories are based on a number of factors, including mission duration and purpose, the likelihood of resupply, climate, and other characteristics that affect equipment and supply decisions. Similarly, the Marine Corps uses the following categories for mission and load-planning purposes: fighting (lightest), assault (mid), and sustainment (heaviest). According to Army guidance and Marine Corps documentation, the two services generally use these load parameters as a guide for determining the most appropriate equipment and supply levels required to meet mission objectives. 
The services each have program offices that develop, acquire, and field PPE and other equipment based on generated and approved operational requirements for Soldiers and Marines. For example, Program Executive Office Soldier develops specifications and acquires equipment, including PPE, based on capability requirements produced by the Army’s Training and Doctrine Command. Similarly, the Marine Corps Systems Command develops, acquires, and fields PPE and other equipment to address operational requirements developed by the Capability Development Directorate of the Deputy Commandant for Combat Development and Integration. The service program offices typically collaborate with each other and partner with industry providers to research, design, and develop PPE and other equipment. The Army and the Marine Corps have developed requirements for PPE to address operational threats, but these requirements contribute to the total load burden on ground combat personnel. The services expect ground combat personnel to wear a combination of equipment developed to meet these requirements, including hard armor plates, soft armor plate carrier vests, and combat helmets, as shown in figure 1. According to Army and Marine Corps program managers, these items individually provide specific functions that together form a protective system for personnel. According to Army and Marine Corps documentation, the current system was initially developed and fielded to address specific threats facing personnel operating in Iraq (Operation Iraqi Freedom) in 2003. Officials stated that the two services conducted capability and threat assessments in this theater to determine how best to mitigate threats without hindering mobility or combat effectiveness. 
The services have documented these assessments and protection requirements in PPE guidance, technical documentation, and acquisition specifications, which include the size, weight, coverage area, protective standards, and other key parameters for each primary PPE component. These documents standardize PPE expectations and operational requirements for Soldiers and Marines. Officials noted that they are able to change PPE requirements or standards to meet evolving needs, incorporate technological advancements, or modify goals. Additionally, the documents provide industry partners with the specifications needed to develop equipment. While the services have produced individual PPE guidance and technical documentation, officials noted that they jointly develop protection requirements and acquire some of the primary PPE components. For example, the Army and Marine Corps jointly acquired modern hard armor plates and have coordinated on the development and acquisition of the enhanced combat helmet. The two services also have similar standards for the soft armor plate carrier vests, but Marine Corps officials noted that because each service has some unique operational requirements, they developed and acquired this item separately. Army and Marine Corps officials we met with stated that the body armor that was fielded to meet current threats provides significant additional protection when compared with previously available equipment. However, they also noted that providing this level of protection adds significant bulk and weight to the total load on Soldiers and Marines, which could impede mobility and have other adverse effects. Both Marine Corps guidance and Army capability requirements indicate that PPE should provide adequate protection levels without hindering mobility or combat effectiveness. 
The primary PPE (hard armor plates, soft armor vest, and combat helmet) currently used by both Army and Marine Corps personnel averages approximately 27 pounds (for size medium equipment), and adds to the weight of other uniform items and equipment worn or carried by personnel. The cumulative weight of all uniform items and other equipment expected to be carried or worn by personnel in operational environments represents the total load and can vary according to individual position (e.g., squad leader, rifleman, grenadier) and mission characteristics (see figure 2). According to program managers we met with, the typical total load on personnel has increased since about 2003 based on the incorporation of new PPE systems and other equipment that is designed to enhance personnel performance or protection capabilities. According to 2016 Marine Corps data, a typical load is expected to be approximately 90 to 159 pounds, or an average of 117 pounds, depending on the individual function within the squad. Similarly, Army ground personnel are expected to wear and carry approximately 96 to 140 pounds, or an average of 119 pounds, depending on individual roles. These totals can vary based on individual PPE sizes and other equipment variations; nevertheless, the expected totals for Army ground combat personnel generally align with actual load totals, ranging from 96 to 151 pounds, reported by personnel recently operating in Afghanistan. Program officials explained, however, that excessive loads can have negative effects on personnel mobility, lead to earlier fatigue onset, and exacerbate the risks associated with high-temperature operational environments. Army Field Manual 21-18, published in 1990, recommends that the fighting load not exceed 48 pounds and that the approach/march load not exceed 72 pounds. 
According to program managers, the Marine Corps does not have specific load thresholds or maximums, but documentation identifies that loads in excess of 30 percent of body weight for ground combat personnel increase the likelihood of detrimental performance effects. Medical researchers from the services with whom we met agreed that these are some of the risks associated with substantial combat loads, and stated that they have attempted to correlate load burdens with detrimental performance and increased injury risks. For example, the Naval Health Research Center in San Diego, CA, collected injury data from personnel operating in Afghanistan and Iraq between 2011 and 2013 and concluded that excessive loads may have exacerbated the reported injuries. Service officials said that they are studying these potential effects on personnel performance, but also stated that the available load guidelines could be outdated and not reflective of current PPE systems and other capability-enhancing equipment. Additionally, they noted that these thresholds may not be appropriate for all personnel and that load thresholds or limits could restrict commander flexibility in the field by potentially impairing their ability to properly outfit personnel to meet mission requirements. Nonetheless, officials from both services stated that they continually seek ways to reduce the weight of PPE and reduce or offset the overall loads on personnel while maintaining operational capabilities and protection standards. Army and Marine Corps officials coordinate through formal and informal working groups that seek to develop and improve PPE. For example, two to four times annually the services convene a Cross-Service Warfighter Equipment Board, which allows Army and Marine Corps representatives, along with members of the other military services, to share developments and advancements made to PPE and other individual equipment. 
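As a rough arithmetic check of the guidelines described above, the following sketch compares the average reported 2016 loads against the FM 21-18 thresholds and the 30-percent-of-body-weight guideline. This is an illustration only; the 180-pound body weight is an assumed value, not a figure from this report.

```python
# Illustrative comparison of reported combat loads against the load
# guidelines cited in this report. All load figures and thresholds come
# from the report; the 180-pound body weight is an assumed, hypothetical
# value used only for illustration.

FIGHTING_LOAD_MAX = 48        # pounds, Army Field Manual 21-18 (1990)
APPROACH_MARCH_LOAD_MAX = 72  # pounds, Army Field Manual 21-18 (1990)
BODY_WEIGHT_FRACTION = 0.30   # Marine Corps documentation threshold

def exceeded_guidelines(load_lbs, body_weight_lbs):
    """Return the list of cited guidelines that a given load exceeds."""
    exceeded = []
    if load_lbs > FIGHTING_LOAD_MAX:
        exceeded.append("fighting load (48 lb)")
    if load_lbs > APPROACH_MARCH_LOAD_MAX:
        exceeded.append("approach/march load (72 lb)")
    if load_lbs > BODY_WEIGHT_FRACTION * body_weight_lbs:
        exceeded.append("30 percent of body weight")
    return exceeded

# Average reported 2016 loads: Army about 119 lb, Marine Corps about 117 lb.
for service, load in [("Army", 119), ("Marine Corps", 117)]:
    print(service, exceeded_guidelines(load, body_weight_lbs=180))
```

At the assumed body weight, both services' average loads exceed all three cited guidelines, which is consistent with the concerns officials describe in this section.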
The Army and Marine Corps also participate in the Personal Protective Equipment Capabilities Development Integrated Product Team, an interagency forum that shares information, such as injury data, research and development findings, material developments, technologies, and test methodologies, among key stakeholders involved in PPE development. In addition, the Army and Marine Corps work together on the development and procurement of PPE, such as the hard armor plates and the enhanced combat helmet that meet both services’ needs. Informally, the Army and Marine Corps regularly communicate on a variety of PPE-related research, technology advancements, and planning efforts. Officials from both Army and Marine Corps program offices explained that coordination is mutually beneficial based on similar equipment needs for ground combat personnel. Officials noted that they have collaborated on the development, management, and procurement of current hard armor plates since their inception in the early 2000s. Additionally, in August 2016 we observed a Marine Corps-sponsored industry event focused on the next iteration of the enhanced combat helmet, where an Army engineer participated and shared with vendors the Army’s perspective on weight reduction priorities for the helmet. Army and Marine Corps officials stated that they collaborate with vendors to gather input for the development of PPE. Army and Marine Corps program managers said that when developing or improving PPE and other equipment, they prioritize protection and operational capabilities, and that they have overarching goals of reducing weight, and improving form, fit, and function of equipment. These overarching goals have led to some improvements and reductions in the weight of some PPE. 
For example, the Army and Marine Corps have made updates and redesigned aspects of their respective soft armor vests, which according to program managers have resulted or will result in weight savings of approximately 40 to 50 percent, or about 6 to 7 pounds, compared with previous versions. In addition, according to Marine Corps documentation, the service is incentivizing industry partners to produce lighter equipment and systems by incorporating weight reduction as a part of the source selection process for the enhanced combat helmet. Further, in 2016 the Army began developing a goal and subsequent plan to reduce the weight of hard armor plates by 20 percent, or about 2 pounds, by identifying and eliminating excess ballistic protection parameters and potentially updating testing methodologies. Officials said that protection standards have largely prevented significant reductions to date; however, they believe that the plates may be over-designed and heavier than necessary, based on actual operational threats and PPE performance data collected in Iraq and Afghanistan. According to research officials, updates would allow for weight reductions without increasing the ballistic risk to personnel. According to Army officials, the plan is currently pending approval by senior Army officials. If approved, researchers expect to develop new hard armor plates, with reduced weight, in fiscal year 2019. The Army and Marine Corps are also pursuing other efforts to reduce the weight of PPE. For example, the Army and Marine Corps are promoting PPE scalability as an approach to realize near-term weight reductions. PPE scalability allows Soldiers and Marines to vary the levels of PPE worn, from minimal protection or no PPE to a maximum level whereby Soldiers and Marines utilize all available PPE. 
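The savings figures reported above also imply approximate baseline weights for the redesigned items. As a back-of-the-envelope sketch using only the percentages and pound figures cited in this section (the derived baselines are inferences for illustration, not official equipment weights):

```python
# Back-of-the-envelope check of the weight-reduction figures cited above.
# The derived baseline weights are inferences for illustration, not
# official equipment specifications.

def implied_baseline_weight(savings_lbs, savings_fraction):
    """If a redesign saves `savings_lbs` pounds and that saving equals
    `savings_fraction` of the original weight, return the implied
    original weight in pounds."""
    return savings_lbs / savings_fraction

# Soft armor vest: 6 to 7 pounds of savings reported as roughly 40 to 50 percent.
vest_low = implied_baseline_weight(6, 0.50)   # 12.0 lb
vest_high = implied_baseline_weight(7, 0.40)  # 17.5 lb

# Hard armor plate goal: about 2 pounds reported as roughly 20 percent.
plate = implied_baseline_weight(2, 0.20)      # 10.0 lb

print(vest_low, vest_high, plate)
```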
The Army and Marine Corps have categorized these protection levels based on configurations of all available PPE, and officials said that potential weight reductions could be realized if commanders were to adjust protection levels (the amount of PPE utilized) based on an evaluation of environment, threat, and mission characteristics. However, Marine Corps officials noted that commanders may be reluctant to increase operational risk by reducing PPE protection levels. Finally, Army and Marine Corps researchers are exploring ways to better integrate individual equipment to provide improved functionality and potentially save weight. The Army’s Warrior Integration Site and the Marine Corps’ Marine Expeditionary Rifle Squad research the integration potential of all individual equipment worn by Soldiers and Marines. According to officials with whom we met, the two services expect these analyses to result in improvements to the weight, form, and function of Soldier and Marine equipment. One analytical method used by both the Army and the Marine Corps entails load effect assessment programs that use instrumented obstacle courses to gather data and evaluate mobility and functions based on the various combat loads that personnel experience (see figure 3 and associated video). Officials explained that these data and analyses help them identify specific equipment that could or should be improved. While these efforts may have implications for reducing the load burden on Soldiers and Marines, the main goal for both organizations is to improve personnel performance by providing better integration and function for equipment commonly utilized by Soldiers and Marines. Army and Marine Corps researchers are exploring initiatives—such as improvements to logistics and resupply capabilities, load transfer technologies, lighter ammunition, and reduced battery usage—that may decrease the total load burden on ground combat personnel. Improved Logistics and Resupply Capabilities. 
Researchers at the Natick Soldier Research Development and Engineering Center said that they are exploring new technologies and systems that could provide improved logistics support for squads in the form of precise and on-demand resupply. Army officials noted that personnel loads are affected by confidence levels in resupply and logistics support. For example, squads that are more confident in resupply may be more willing to carry less ammunition, water, food, and other supplies, thus reducing the total weight carried by personnel. Therefore, developing new aerial delivery systems capable of providing small- and medium-sized payloads with precision could enable Soldiers and Marines to carry no more than the necessary equipment and supplies. The Marine Corps has implemented one of these systems, the Joint Precision Airdrop System, which was developed by the Army’s Aerial Delivery Directorate at the Natick Soldier Research Development and Engineering Center. This system is designed to accurately deliver (within 150 meters) up to 700 pounds of supplies to personnel operating in inaccessible environments. A Marine Corps program official stated that the system would likely alter planning and allow personnel to forgo packing excess food, water, ammunition, and other supplies. The official also stated that the procurement and sustainment costs for all units of this system totaled approximately $850,000 for fiscal years 2013 through 2016. Load Transfer Technologies: The Army and Marine Corps are evaluating both manned and unmanned load transfer technologies capable of traveling with units or squads (see figure 4). These technologies may allow Soldiers and Marines to offload some items such as food, water, or ammunition. For example, the Marine Corps is currently employing 144 MRZR all-terrain vehicles capable of traveling with squads and transporting up to 1,500 pounds of personnel, equipment, and supplies. 
Marine Corps officials stated that the total acquisition and sustainment costs for all the vehicles are projected to be approximately $15 million between fiscal years 2016 and 2018. Similarly, the Army is in the process of developing an unmanned or optionally manned squad support vehicle capable of traveling with dismounted personnel and carrying up to 1,000 pounds of equipment. The prototypes include both tracked and wheeled variants. Army officials stated that they plan to pursue this as an official program and field the vehicles in fiscal years 2020 and 2021. In addition, the Defense Advanced Research Projects Agency supported similar research and development efforts by designing and testing the Legged Squad Support System, which researchers stated had the intended capability to carry up to 1,000 pounds of equipment and travel semi-autonomously with squads and fire teams. While the Army and Marine Corps are not currently pursuing this specific system, they plan to test additional unmanned ground systems with similar load-transferring and mobility capabilities. Lighter Ammunition: Army and Marine Corps program managers are developing lightweight technologies and monitoring third-party research related to the development of polymer-case ammunition for commonly used .50 caliber, 7.62 mm, and 5.56 mm rounds. According to program managers, transitioning to polymer-based ammunition casings could reduce ammunition weight by as much as 20 to 35 percent, based on the weight difference between lighter polymer casings and traditional brass casings. The Marine Corps began testing a polymer-case .50 caliber round in March 2017, which could replace legacy ammunition without modifying the .50 caliber weapon systems currently in use. 
However, significant weight savings for personnel would require implementing this technology for smaller-caliber rounds with lightweight, polymer-case-compatible weapon systems, and officials noted that these investments would likely hinder near-term implementation. Reduced Battery Usage: The Army and Marine Corps are researching potential hardware and software changes that could reduce the energy demand of some commonly carried electronics and thus reduce the usage and weight of the batteries that power them. For example, Army researchers stated that they are evaluating systems that harvest energy from Soldiers’ movements and solar technology that could be used to power communications systems and other battery-driven equipment. Additionally, officials noted that they are monitoring private-sector technology developments that could reduce the weight of batteries by 20 percent while providing the same amount of energy as the batteries currently used. Marine Corps program officials explained that they are also developing a single radio with the same capability as the two separate radios currently used by Marines. Officials stated that this new radio may reduce the need to carry excess batteries. However, Army and Marine Corps officials noted that battery demand and its associated weight continue to pose a significant challenge. For example, Army program managers said that squad leaders currently carry approximately 8 pounds of batteries to power a variety of optics, communications systems, and other equipment. DOD provided technical comments on a draft of this report, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and to the Secretary of Defense, the Secretaries of the Army and the Navy, and the Commandant of the Marine Corps. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-5431 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In addition to the contact named above, Alissa Czyz (Assistant Director), Larry Junek (Assistant Director), Alexandra Gonzalez, Amie Lesser, Sean Manzano, Michael Shaughnessy, Michael Silver, Grant Sutton, and Cheryl Weissman made key contributions to the report.
Army and Marine Corps ground combat personnel have long worn a variety of PPE such as vests, armor, and helmets to help protect them from operational risks. The two services have documented the advanced protection capabilities of current PPE systems, but identified that the armor contributes to the total load burden—or cumulative weight of items typically worn or carried. In addition to PPE, personnel typically carry food, water, ammunition, communications equipment, and other items. House Report 114-537, accompanying a bill for the National Defense Authorization Act for Fiscal Year 2017, included a provision for GAO to review Army and Marine Corps efforts to reduce the weight of PPE and other equipment worn or carried in combat. This report describes (1) the current operational requirements associated with PPE, and how those requirements contribute to the total load burden on Soldiers and Marines in combat environments; and (2) the coordination between the Army and the Marine Corps regarding efforts to reduce the weight of PPE and the total load burden on personnel. GAO reviewed Army and Marine Corps documentation related to PPE, total load burden on combat personnel, and weight reduction initiatives; and interviewed service researchers and program officials. The Army and Marine Corps have developed requirements for personal protective equipment (PPE) to address operational threats in ground combat environments, but this PPE has increased in weight over time and has added to the total load burden on personnel. PPE primarily consists of hard armor plates, soft armor plate carrier vests, and combat helmets. Army and Marine Corps officials stated that the PPE provides significant additional protection when compared with equipment used prior to operations in Iraq in the 2000s. 
However, they also noted that providing this level of protection adds significant bulk and weight to the total load on Soldiers and Marines, which could impede mobility and hinder combat effectiveness. According to service-provided data, the typical total load in 2016 for Army and Marine Corps ground combat personnel averaged about 119 and 117 pounds, respectively, of which the primary PPE represented about 27 pounds based on equipment sizes (see figure). Officials stated that these totals have increased over time based on the incorporation of new PPE and other equipment. Recognizing that the weight of PPE and other equipment could have negative effects on personnel performance, the Army and the Marine Corps have coordinated and developed goals for PPE-related weight reductions and are pursuing some efforts to reduce overall load burdens on personnel. The two services coordinate through formal working groups and informal methods to develop and improve PPE. Army and Marine Corps officials stated that while they prioritize protection and operational capabilities when developing PPE, they have overarching goals of reducing weight, in addition to improving the form, fit, and function of equipment. These goals have led to reductions in the weight of some PPE. The Army is also developing a goal and plan to reduce the weight of hard armor plates by 20 percent by identifying and eliminating excess ballistic protection. In addition, the Army and Marine Corps are pursuing other efforts to reduce the weight of PPE, such as by giving commanders the option to employ varying levels of PPE at their discretion and studying the effects of integrating PPE with overall combat loads. Finally, the Army and Marine Corps are exploring research initiatives that may reduce the total load on ground combat personnel, such as improvements to logistics and aerial delivery capabilities, load transferring systems, and other enhancements to equipment. 
GAO is not making recommendations in this report. DOD provided technical comments on a draft of this report, which GAO incorporated as appropriate.
Helium is an inert element that occurs naturally in gaseous form and has a variety of uses because of its unique physical and chemical characteristics. For example, helium has the lowest melting and boiling points of any element, and as the second lightest element, gaseous helium is much lighter than air. Certain natural gas fields contain a relatively large amount of naturally occurring helium that can be recovered as a secondary product. To do so, the helium is separated from the natural gas and stored in a concentrated form that is referred to as crude helium because it has yet to go through the final refining process. As of September 30, 2013, the federal helium program stored about 10.84 billion cubic feet of crude helium—roughly 9 billion cubic feet owned by the government, and the rest owned by private companies— in an underground storage reservoir near Amarillo, Texas. BLM used a geologic model to identify the most efficient way to extract this remaining helium from storage. As of mid-2014, BLM estimated that it could make available for delivery from the reservoir roughly 7 billion cubic feet of helium over the life of the act, from fiscal year 2014 through fiscal year 2021. After private companies—refiners or nonrefiners—purchase helium from BLM and pay for it, the official ownership of the helium is transferred from BLM to the company on the first day of the month after payment is received, and it becomes part of the privately owned inventory in federal storage. BLM stores and then delivers the privately owned helium through the pipeline to refiners in accordance with the storage contracts it has with the companies. As of February 2015, BLM held storage contracts with 11 companies, and these storage contracts will expire at the end of fiscal year 2015. The storage contracts govern the storage, withdrawal, and delivery of helium from the federal reservoir and associated fees. 
BLM officials explained that they calculate fees under the current contracts based on the total amount necessary to recover BLM’s costs. The ability for companies to purchase crude helium and then leave it stored in the federal storage reservoir until it is delivered at a later time is a unique feature of the federal helium program compared with other sources of helium in the world, which typically require a purchaser to accept delivery of the helium when it is extracted or pay for it even if delivery is not accepted upon extraction. When the Helium Stewardship Act of 2013 was enacted, the global helium market had been operating under conditions of tight supplies for multiple years. From 2010 to mid-2014, refiners requested delivery of volumes of helium through the BLM pipeline that would have exceeded BLM’s production capacity. Under these conditions, refiners accepted delivery of the maximum amount of helium that BLM could produce. However, market conditions changed in 2014: supply increased due to additional production from private sources in other parts of the world. Because these additional supplies became available, starting in mid-2014, refiners requested delivery of volumes of helium that were less than BLM’s production capacity. A January 2015 U.S. Geological Survey report estimated that helium produced from the federal storage reservoir represented 29 percent of the total estimated production of helium in the United States and 17 percent of the total estimated helium production worldwide in fiscal year 2014. The 2013 act significantly changed the federal helium program. The 1996 act required Interior to sell a certain amount of helium in the federal helium reserve and to set helium sale prices to cover the reserve’s operating costs and to produce an amount sufficient to repay the debt associated with the initial purchase of the helium. 
According to the 2013 act’s legislative history, however, the purpose of the 2013 act is to complete the privatization of the federal helium reserve in a competitive market fashion that ensures stability in the helium markets while protecting the interests of the taxpayers. The 2013 act introduces new provisions, including the following: Phased implementation. The act establishes four phases for the sale and auction of crude helium from, and eventual closure of, the reserve—Phase A: allocation transition; Phase B: auction implementation; Phase C: continued access for federal users; and Phase D: disposal of assets. Phase D is to be completed no later than September 30, 2021. 50 U.S.C. § 167d(a)-(d). Tolling. As a condition of sale or auction to a refiner in Phases A and B, the refiner must make excess refining capacity of helium available at commercially reasonable rates to persons who acquire helium from BLM after the act’s enactment. 50 U.S.C. § 167d(b)(8)(B). However, if a refiner and a nonrefiner do not agree on terms for tolling, the act does not require the refiner to toll. According to the act’s legislative history, this condition was intended to maximize participation in Phase A and B helium sales. The act does not define excess refining capacity or commercially reasonable rates. We refer to this condition of sale or auction as the act’s tolling provision. Disclosure requirement and qualifying domestic helium transactions. The act requires BLM to require all persons that have storage contracts with BLM to disclose, on a strictly confidential basis, (1) the volumes and associated prices of all crude and pure helium purchased, sold, or processed by persons in qualifying domestic helium transactions; (2) the volumes and associated costs of converting crude helium into pure helium; and (3) refinery capacity and future capacity estimates. 50 U.S.C. § 167d(b)(8)(A). We refer to this as the act’s disclosure requirement. 
Furthermore, the act defines a “qualifying domestic helium transaction” as any agreement entered into or renegotiated during the preceding 1-year period in the United States for the purchase or sale of at least 15 million standard cubic feet of crude or pure helium to which any storage contract holder is a party. 50 U.S.C. § 167(10). Price-setting. The act requires BLM to annually establish, as applicable, separate sale and minimum auction prices for Phase A and B using, if applicable, and in the following order of priority: (1) the sale price of crude helium in BLM auctions; (2) price recommendations and disaggregated data from a qualified, independent third party who has no conflict of interest, who shall conduct a confidential survey of qualifying domestic helium transactions; (3) the volume-weighted average price of all crude helium and pure helium purchased, sold, or processed by persons in all qualifying domestic helium transactions; or (4) the volume- weighted average cost of converting gaseous crude helium into pure helium. 50 U.S.C. § 167d(b)(7). Auction and sale schedule and frequency, and one-time sale. For fiscal year 2015, the act only permits one auction, followed by one sale that had to occur no later than August 1, 2014. Payment for the sale had to be made by September 26, 2014. 50 U.S.C. § 167d(b)(12). The act also requires a one-time sale of helium from the amounts available in fiscal year 2016 that had to occur no later than August 1, 2014, with payment no later than 45 days after the sale date. 50 U.S.C. § 167d(b)(13)(A). Auction quantities. The act generally requires BLM to auction an increasing amount of the helium made available each fiscal year, beginning with 10 percent in fiscal year 2015 and increasing by an additional 15 percentage points annually through fiscal year 2019, and then with 100 percent being auctioned in fiscal year 2020. 50 U.S.C. § 167d(b)(2). 
However, the volume auctioned may be adjusted upward if the Secretary of the Interior determines it necessary to increase participation in auctions or increase returns to taxpayers. 50 U.S.C. § 167d(b)(5)(B). Storage and delivery. The act requires BLM to establish a schedule for transportation and delivery of helium using the federal system that ensures timely delivery of helium purchased at auction or sale, among other things. 50 U.S.C. § 167c(e)(2). The act also requires BLM to impose a fee on contract holders that accurately reflects the economic value of helium storage, withdrawal, and transportation services. The fee imposed cannot be less than the amount required for contract holders to reimburse Interior for the full costs of providing those services, including capital investments in the federal helium system. 50 U.S.C. § 167c(a),(b). BLM published a final notice in the Federal Register on July 23, 2014, that specified the agency’s plan for implementing (1) the auction of a portion of the helium that will be delivered in fiscal year 2015, (2) the sale of a portion of the helium that will be delivered in fiscal year 2015, and (3) the one-time advance sale of a portion of the helium that will be delivered in fiscal year 2016 (see table 1). For the auction, BLM’s notice stated that auction participants would compete to purchase set volumes, or lots, of helium. For the sales, the notice stated that each of the four participating refiners would receive an amount of helium based on their percentage share of the total estimated refining capability in 2000. The notice also contained, among other things, BLM’s formula for calculating the minimum auction price and the sales price; BLM’s plans for delivering helium purchased in the auction and sale during fiscal year 2015, as well as delivery plans for helium purchased prior to the 2013 act’s enactment; and BLM’s plan for collecting information about tolling agreements between refiners and other parties. 
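The act’s escalating auction-quantity schedule summarized above (10 percent in fiscal year 2015, rising 15 percentage points annually through fiscal year 2019, then 100 percent in fiscal year 2020) can be sketched as a simple calculation; this is an illustrative restatement of the statutory percentages, not a BLM tool:

```python
# Illustrative sketch of the act's auction-quantity schedule
# (50 U.S.C. § 167d(b)(2)), as summarized in this report.
def auction_percentage(fiscal_year):
    """Percent of available helium to be auctioned in a given fiscal year."""
    if fiscal_year == 2020:
        return 100                                 # all available helium is auctioned
    if 2015 <= fiscal_year <= 2019:
        return 10 + 15 * (fiscal_year - 2015)      # 10% in FY2015, +15 points per year
    raise ValueError("schedule covers fiscal years 2015 through 2020 only")

print({fy: auction_percentage(fy) for fy in range(2015, 2021)})
# {2015: 10, 2016: 25, 2017: 40, 2018: 55, 2019: 70, 2020: 100}
```

As the report notes, the Secretary of the Interior may adjust the volume auctioned in a given year, so the statutory schedule is a baseline rather than a fixed outcome.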
During the summer of 2014, refiners purchased all the helium offered during BLM’s first competitive helium auction and in two subsequent noncompetitive sales at prices higher than participants and BLM officials expected. Two refiners purchased all the auctioned helium. BLM officials, refiners, and nonrefiners cited multiple possible reasons for the auction’s outcomes, including that refiners had an advantage over nonrefiners in terms of having existing infrastructure to refine helium without paying another company to do so. For the two sales, held in August 2014, BLM used the average auction price to help set the sales price, and the agency restricted the sales to refiners. Two refiners purchased all 93 million cubic feet of helium that BLM auctioned for delivery in fiscal year 2015 for an average price of $161 per thousand cubic feet. Specifically, 13 companies, including refiners and nonrefiners, participated in the agency’s first-ever competitive helium auction, held in July 2014, but most stopped bidding well below the final auction prices for 12 lots of helium. BLM set the minimum starting bid for each lot at $100 per thousand cubic feet, an increase over the fiscal year 2014 sales price of $95 per thousand cubic feet. At one point during bidding, the auction price rose as high as $180 per thousand cubic feet. We observed that participants who did not win at the auction stopped bidding when prices reached between $105 and $130 per thousand cubic feet (see fig. 1). Reaction to the auction’s outcome varied among refiners, nonrefiners, and BLM officials. Most of the representatives of refiners and nonrefiners we interviewed stated that the auction prices were too high for crude helium, especially during a time of global excess of helium supplies. 
A representative from one refiner, for example, called the auction prices “outrageously high.” Others said the average price was not an indication of the market price for crude helium, especially since the 93 million cubic feet auctioned by BLM constituted a very small portion of the total volume of crude helium sold in a year in the global market. Some representatives of nonrefiners said that auction prices for crude helium reached levels similar to some prices for refined liquid helium, which is typically more expensive than crude helium. Others said that the auction failed to increase the number of purchasers of federal helium beyond companies that already participated in the federal helium program. BLM and some representatives of nonrefiners and a refiner, however, said the auction was a success for the federal government since it generated about $15 million in revenue. A senior BLM helium program official said the auction exceeded revenue expectations, and an agency press release stated that BLM achieved a key objective of the act: to maximize the value of federal helium through a market-driven process. In addition, representatives from a refiner and nonrefiner stated that the free market nature of the auction was a good way to determine the market price for crude helium. Also, BLM officials and a representative of one nonrefiner stated that the high auction prices were beneficial because they will help spur development of new helium supplies. The representative explained that, when the price of crude helium increases, the return from selling helium increases. As the return increases above the cost of production, it provides an incentive to find and produce more helium because the exploration of new helium resources becomes more economical. 
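The roughly $15 million revenue figure cited above is consistent with the auction volume and average price reported earlier; a quick back-of-envelope check (not BLM’s own accounting) confirms the arithmetic:

```python
# Back-of-envelope check of reported auction revenue:
# 93 million cubic feet sold at an average of $161 per thousand cubic feet.
volume_cubic_feet = 93_000_000
average_price_per_thousand = 161           # dollars per thousand cubic feet
revenue = volume_cubic_feet / 1_000 * average_price_per_thousand
print(f"${revenue:,.0f}")                  # $14,973,000, about the $15 million reported
```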
In interviewing BLM officials and representatives of refiners and nonrefiners and reviewing BLM’s July 2014 Federal Register notice, we identified multiple possible explanations for why refiners won all the auctioned helium at higher than expected prices. Specifically: Refiners may have been more willing to pay higher prices at the auction since their costs for refining crude helium are lower than those of nonrefiners. According to BLM officials, refiners utilize the infrastructure they already have to refine crude helium. In contrast, nonrefiners must pay another company to refine, or toll, their helium, which represents additional costs that refiners do not pay. As a result, according to representatives of nonrefiners, the costs of purchasing auctioned helium and turning it into refined helium are lower for refiners than nonrefiners, giving refiners an advantage at the auction. Nonrefiners may not have bid higher at the auction because they did not know the costs and delivery terms for tolling. Representatives of nonrefiners we interviewed said that few tolling agreements were in place prior to the auction. Those agreements would have specified the rates for tolling any helium they purchased and provided details on when, where, and how purchased helium would be delivered. As a result, the nonrefiner representatives said they were unable to calculate the total costs associated with purchasing and refining crude helium during the auction. Not knowing the tolling costs in advance of the auction could have led nonrefiners to bid more conservatively than they otherwise would have. In addition, according to one nonrefiner representative, not knowing delivery terms before the auction made it difficult to plan ahead and prepare to receive the helium. Refiners’ expectation of paying less for helium at two subsequent, noncompetitive sales may have led them to pay higher prices at the auction. 
Before the auction, BLM announced in its July 2014 Federal Register notice that it would make more than 1 billion cubic feet of helium available exclusively to refiners in two sales of helium to be delivered in fiscal year 2015 and fiscal year 2016, as compared with the 93 million cubic feet of helium to be auctioned. The notice further specified that the average price paid by auction winners would account for a small part—10 percent—of the sale price. According to BLM officials, when the amount of helium purchased by refiners at the higher auction price was added to the amount of helium purchased by refiners at the lower sales price, the refiners’ overall average price was considerably lower than the auction price. Specifically, refiners paid an average of $161 per thousand cubic feet for the auctioned helium, but refiners paid $106 per thousand cubic feet for helium purchased at the two sales. When the volumes and prices of the auction and sales were added together, the refiners’ overall purchase price averaged less than $120 per thousand cubic feet. BLM officials and representatives of nonrefiners told us that the refiners’ ability to average auction prices with sale prices provided an advantage to refiners because nonrefiners were not eligible to participate in the two sales held in August 2014 and therefore could not average auction and sale prices as refiners could. Changes to the way BLM proposed to deliver helium purchased at the auction may have provided an incentive to refiners to purchase as much helium at the auction as possible. Specifically, BLM had announced in its July 2014 Federal Register notice that it would reserve some of its pipeline delivery capacity in fiscal year 2015 for helium purchased at the auction. Based on our review of the notice, purchasing helium at the auction would have allowed refiners to take advantage of the new delivery method and maximize volumes of helium they would receive through the pipeline. 
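The volume-weighted averaging described by BLM officials above can be checked with a short calculation. The sale volume is approximated here as 1,000 million cubic feet, since the report says only that “more than 1 billion cubic feet” was sold; the auction figures are as reported:

```python
# Volume-weighted average of refiners' auction and sale purchase prices.
# Sale volume approximated at 1,000 million cubic feet ("more than
# 1 billion cubic feet" in the report); auction figures are as reported.
auction_volume = 93        # million cubic feet, at $161 per thousand cubic feet
sale_volume = 1_000        # million cubic feet (approximation), at $106
weighted_average = (auction_volume * 161 + sale_volume * 106) / (auction_volume + sale_volume)
print(f"${weighted_average:.0f} per thousand cubic feet")   # about $111, below $120
```

The small auction volume relative to the sale volume is what pulls the refiners’ overall average price down toward the $106 sale price, consistent with the “less than $120” figure reported above.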
After the auction, BLM sold more than 1 billion cubic feet of helium in the two August 2014 sales to the four refiners at a higher than expected price. BLM missed the August 1, 2014, statutory deadlines for holding the sales by 2 weeks; however, the agency reported that it received final payments by the applicable statutory deadlines. As we previously stated, BLM used the average auction price to help set the price of $106 per thousand cubic feet used in both sales. As we found in July 2014, BLM based its price for these two sales primarily on the fiscal year 2014 sales price, adjusted for inflation, but the agency also used the average auction price to account for 10 percent of the sales price. As a result, BLM’s sales price increased $11 per thousand cubic feet compared with the fiscal year 2014 price, and BLM received approximately $115 million in revenue from the two sales. We found in July 2014 that BLM selected its method for calculating the price for the two sales because agency officials said they did not have time to contract for an annual market survey of qualifying domestic helium transactions by an independent third party. This market survey is one of the options provided for in the act’s price-setting provision. BLM officials also said they did not give the auction price greater weight when setting the sale price because they did not want to create a significant price increase that would negatively affect federal users and other end users. However, some representatives of refiners and nonrefiners said that they disagreed with BLM’s decision to consider the auction price when setting the sale price because the auction accounted for a small amount of helium when compared with the volume of helium that is sold on the global market. Nevertheless, the act’s price-setting provision authorizes BLM to use the auction price to set sales prices and directs BLM to give priority to this approach. 
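The 90/10 blend described above can also be checked arithmetically. The report does not state the exact inflation adjustment BLM applied to the fiscal year 2014 price, so this sketch assumes a simple weighted average and solves the blend backwards from the $106 result:

```python
# Back-solving the blended sale price described above, assuming a simple
# weighted average: 90 percent weight on the inflation-adjusted fiscal
# year 2014 price and 10 percent on the average auction price. The exact
# inflation adjustment is not given in the report, so it is inferred here.
auction_price = 161        # average FY2015 auction price, $ per thousand cubic feet
sale_price = 106           # FY2015 sale price, $ per thousand cubic feet
implied_fy2014_adjusted = (sale_price - 0.10 * auction_price) / 0.90
print(f"${implied_fy2014_adjusted:.2f}")   # about $99.89, roughly $95 plus inflation
```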
BLM restricted the two August 2014 sales solely to refiners, which was a departure from the agency’s prior practice of offering a small portion of sales to nonrefiners. BLM officials said they took this approach because they interpreted the act as intending to have the auction replace the portion of the sales that had previously been made available to nonrefiners. Most representatives of nonrefiners, however, told us that they disagreed with this interpretation, stating that the act does not require BLM to eliminate the portion of sales open to nonrefiners. Some nonrefiners told us that having a guaranteed supply of helium, even of small volumes, would help level the playing field with refiners since the refiners can participate in the sales, guaranteeing their supply. Moreover, since nonrefiners were not eligible to participate in the sales and were outbid at the auction, nonrefiners purchased none of the federal helium that BLM made available for delivery in fiscal year 2015. As a result, the number of companies purchasing helium from BLM for delivery in fiscal year 2015 compared with fiscal year 2014 decreased from eight to four. BLM has taken steps to address challenges we found in July 2014 with its administration of the act’s tolling provision, specifically by clarifying its definition of excess refining capacity. However, the agency does not have full assurance that refiners are satisfying the act’s tolling provision for various reasons. Among the reasons are that BLM has not obtained complete information about refiners’ efforts to satisfy the tolling provision and has not determined whether tolling rates offered by refiners are commercially reasonable. Representatives of nonrefiners have raised concerns that BLM’s unwillingness to act if refiners do not satisfy the provision may result in less competition in helium auctions. 
Since its implementation of the first phase of the act, BLM has taken steps to clarify its definition of excess refining capacity to help improve reporting of excess capacity by refiners. In our July 2014 testimony, we found that BLM asked refiners to report excess refining capacity in January 2014 as a condition of the Phase A sales, but the agency did not define the term “excess refining capacity” because BLM officials stated that they were still interpreting the act at that time. We found that, as a result, refiners used different methods for calculating excess capacity reported to BLM. Also, BLM and some nonrefiners questioned the accuracy of the total volume of excess capacity that refiners reported in January 2014. In June 2014, BLM posted a draft data collection form on its website for refiners to use when reporting excess refining capacity. This draft form included a definition of excess refining capacity. For example, refiners were to report “planned demand” as part of their determination of excess capacity. However, some nonrefiners commented to BLM that this definition left room for different interpretations. In response to comments on its draft form, BLM published the final version of the form on its website on July 23, 2014, adding more specificity to its definition of excess refining capacity. For example, rather than asking refiners to report “planned demand,” BLM clarified that refiners should report “forecasted crude helium demand” and defined that term. Refiners reported their forecasted excess capacity for fiscal year 2015 to BLM in late July 2014 and, according to BLM officials, the definition in the final form helped improve the refiners’ reports. Specifically, refiners reported a combined forecasted excess capacity of 786.5 million cubic feet for fiscal year 2015, more than 10 times the 72 million cubic feet that refiners had reported in January 2014 as their forecasted excess capacity for the same period. 
Representatives of refiners told us their forecasted excess capacity numbers changed because of BLM’s more precise definition of what to report, as well as changes in the global helium market since January 2014 that freed up additional capacity in their refineries on the federal pipeline. The act’s tolling provision states that, as a condition of sale or auction, refiners must make excess refining capacity available at commercially reasonable rates to certain nonrefiners, but BLM does not have full assurance that refiners are satisfying the provision. According to language in the Senate report accompanying the act, refiners were to “make excess refining capacity available to others at commercially reasonable rates as a condition of their continued participation in helium allocations and auctions.” BLM does not have this assurance because, according to BLM officials, they (1) have not obtained all relevant information about refiners’ efforts to satisfy the tolling provision, (2) have not defined or identified criteria for a commercially reasonable rate, (3) have not determined what to do if a refiner does not satisfy the tolling provision, and (4) believe the agency’s approach to ensuring that refiners satisfy the tolling provision is consistent with current market conditions. The act’s tolling provision requires that refiners make excess refining capacity available to certain nonrefiners at commercially reasonable rates. The act, however, does not define what it means to make excess capacity available or the term “commercially reasonable rates.” BLM officials told us that they consider signed tolling agreements to be evidence of refiners’ satisfying the tolling provision. In addition, BLM officials said that refiners’ attempts to negotiate tolling agreements that did not result in signed agreements could also satisfy the provision. This is because, if a refiner and nonrefiner do not agree on terms for tolling, the act does not require the refiner to toll. 
To obtain information about signed agreements, in the July 2014 Federal Register notice, BLM directed refiners to report information about tolling agreements that they entered into with another party during the preceding year by completing a tolling report form. However, refiners inconsistently reported information about their signed tolling agreements on these forms. For example, some refiners reported that they had signed tolling agreements and reported the volumes of helium to be tolled under those agreements, but not all refiners reported the rates they charged for tolling. According to BLM officials, a representative of one refiner said that the refiner did not report the rate because the act does not require refiners to disclose information about agreements covering less than 15 million cubic feet of helium. In contrast, another refiner reported the rates charged in tolling agreements covering less than 15 million cubic feet. Officials with Interior’s Office of the Solicitor said that BLM could not require refiners to report information about signed tolling agreements for less than 15 million cubic feet in a Federal Register notice, but BLM may be able to require it by issuing a rule. BLM officials said they expect that many signed tolling agreements will be for less than 15 million cubic feet since nonrefiners typically accept delivery of helium in increments of 1 million cubic feet. As a result, BLM officials said that having information about tolling agreements for smaller volumes from all refiners, including rates, would provide BLM with a better understanding of refiners’ efforts to satisfy the tolling provision. To obtain information about refiners’ attempts to negotiate tolling agreements that did not result in signed agreements, in the July 2014 Federal Register notice, BLM said that refiners may also use the tolling report forms to report information about these attempts. 
According to officials with the Office of the Solicitor, the act does not require refiners to report this information. Therefore, reporting information about refiners’ attempts to negotiate tolling agreements is voluntary. As a result, refiners reported inconsistent information about their attempts to negotiate tolling agreements on their fiscal year 2014 tolling report forms. For example, some refiners reported that they had attempted to negotiate tolling agreements but did not report any details about the volumes or rates offered. Other refiners provided details about volumes or rates or both. The officials with the Office of the Solicitor said BLM also may need to issue a rule to require refiners to report information about attempts to negotiate tolling agreements that do not result in signed agreements. BLM officials said information about negotiations that do not result in tolling agreements would be helpful in determining the extent to which refiners with excess capacity are making it available to nonrefiners. Nevertheless, BLM officials said that they do not plan to issue a rule to require refiners to report information about signed agreements to toll less than 15 million cubic feet of helium or about attempts to negotiate tolling agreements that do not result in signed agreements. They said they do not plan to issue a rule, in part, because the rulemaking process is time-consuming, and there are only a few years left for BLM to implement the act. BLM officials also said they were concerned that issuing a rule might delay future auctions and sales, pending final issuance of the rule. However, options may be available for the agency to shorten the rulemaking process if, for example, the conditions for issuing an interim final rule without first issuing a proposed rule for public notice and comment have been satisfied. 
Until refiners consistently provide information about signed agreements to toll less than 15 million cubic feet of helium and about their attempts to negotiate tolling agreements, BLM cannot determine the extent to which refiners are satisfying the tolling provision by making excess capacity available at commercially reasonable rates. BLM officials also have not defined or identified criteria for commercially reasonable rates. The act requires refiners with excess refining capacity to make it available at commercially reasonable rates to certain nonrefiners as a condition of sale or auction of helium to the refiner. However, as we found in our July 2014 testimony, BLM officials told us that they were not planning on defining commercially reasonable rates because it is more appropriate for companies or a court to make that determination. At that time, BLM officials said that they would have a hard time finding that a rate included in a signed tolling agreement between a refiner and nonrefiner is not commercially reasonable since the parties involved agreed to it. As of January 2015, BLM officials said that they do not know how they would evaluate a rate offered by a refiner that did not result in a signed tolling agreement to determine if it was commercially reasonable. Representatives of refiners and nonrefiners told us they generally agreed that BLM should not set a specific rate, but they disagreed over whether BLM should play some role in determining what constitutes a commercially reasonable rate. For example, at least one nonrefiner submitted comments to BLM that the agency should identify guidance for what constitutes a commercially reasonable rate. According to some comments from nonrefiners, BLM’s involvement is necessary to incentivize refiners to toll since, in many instances, nonrefiners and refiners are competitors. 
BLM officials told us that they are looking for ways to incentivize tolling, but the officials also said it is not clear how or whether they should be involved in setting commercially reasonable rates. In addition, BLM officials told us that they are not planning on taking further action with respect to the tolling provision because they have not determined what to do if refiners do not satisfy the provision. According to language in the Senate report accompanying the act, refiners were to “make excess refining capacity available to others at commercially reasonable rates as a condition of their continued participation in helium allocations and auctions.” However, BLM officials said the tolling provision does not specify what BLM should do if a refiner does not make excess capacity available at a commercially reasonable rate. The officials said that they considered suspending a refiner that does not satisfy the tolling provision from participation in future auctions or sales, but doing so risks market disruption. The officials acknowledged, however, that such disruption is currently unlikely because, given the refiners’ significant volumes of privately owned helium stored in the reservoir, a refiner that is restricted from purchasing additional helium in auctions and sales would still be able to have its stored helium delivered. Nonrefiner representatives have raised concerns about the consequences of BLM’s unwillingness to act if refiners do not satisfy the tolling provision. For example, some representatives of nonrefiners said that this creates a disincentive for the nonrefiners to participate and purchase helium in future auctions, which could lead to less participation in the auctions. Moreover, representatives of nonrefiners noted that they do not have much time left to purchase federal helium, with only 6 years of helium sales and auctions remaining. 
BLM officials said they believe that their approach to ensuring that refiners satisfy the tolling provision is consistent with current market conditions because the increased supply in the global market has reduced refiners’ and nonrefiners’ demand to have federal helium delivered from storage and tolled. The officials said that some refiners have reduced their monthly delivery amounts from the pipeline because additional helium supplies have become available from private sources. They said that these refiners are choosing to leave their helium stored in the federal storage reservoir rather than have it delivered since, unlike private sources, BLM’s storage reservoir provides a unique opportunity for storage of helium for delivery at a later date. According to the BLM officials, these market conditions should encourage tolling because refiners have excess refining capacity that could be used for tolling. However, BLM officials said they have not seen an increase in occurrences of tolling since market conditions changed. As of the end of fiscal year 2014, refiners and nonrefiners had signed tolling agreements that covered only a small portion of the 61 million cubic feet of helium purchased by nonrefiners that needed tolling, according to BLM documents. Some representatives of nonrefiners told us they have signed or were negotiating agreements for tolling in fiscal year 2015 that would cover some additional helium. These nonrefiner representatives also said that some refiners have offered lower tolling rates since the change in market conditions. However, other representatives of nonrefiners told us they have not been successful in negotiating tolling agreements even under the current market conditions. According to BLM officials, most of the nonrefiners’ helium remains untolled because the current market conditions have reduced the nonrefiners’ demand for tolling. Yet, some representatives of nonrefiners told us they remain interested in signing tolling agreements. 
For example, one representative said a nonrefiner is still pursuing a tolling agreement because having access to its purchased helium offers some protection against changes in global supply and demand. BLM officials told us that they expect refiners and nonrefiners to sign more tolling agreements in fiscal year 2016, given that at least one company is seeking to connect a small refinery to the pipeline. Representatives from this company and some existing refiners told us that they are incentivized by the business opportunities offered by tolling for others and are actively pursuing tolling agreements with nonrefiners. As BLM continues to implement the various phases of the act, the agency faces decisions during the spring and summer of 2015 related to the upcoming fiscal year 2016 helium auction, the upcoming fiscal year 2016 helium sale, and the agency’s new storage contracts. First, for the fiscal year 2016 auction, BLM faces decisions on conducting a market survey to inform the minimum auction price, determining the amount of helium to make available for auction, and selecting an auction method. Second, for the fiscal year 2016 sale, BLM faces decisions on determining how to set the sale price and companies’ eligibility to participate. Third, for storing, withdrawing, and delivering helium starting in fiscal year 2016, BLM faces decisions regarding new contracts with refiners and nonrefiners that have purchased federal helium. In creating the agency’s plan for conducting the fiscal year 2016 auction, BLM officials face decisions on how the agency will (1) conduct a market survey that will be used to inform the minimum auction price, (2) determine the amount of helium the agency will make available for auction, and (3) choose a method to conduct the auction, among other things. 
BLM officials said the agency plans to contract with an independent third party to conduct a survey of helium transactions that will provide the basis for the agency to set the minimum auction price for the fiscal year 2016 auction, but the agency has not decided on the scope of the survey. The act’s price-setting provision calls for BLM to set minimum auction prices using, among other things, if applicable, a price recommendation from a survey of qualifying domestic helium transactions (which we refer to as qualifying transactions). Accordingly, officials with Interior’s Office of the Solicitor told us that BLM is not authorized to consider price recommendations from a survey of nonqualifying transactions when setting prices. BLM officials told us that if a third party conducted a survey solely of qualifying transactions, it would duplicate information that storage contract holders are already required to report to BLM under the act. Specifically, the act’s disclosure requirement requires contract holders to disclose volumes and prices for qualifying transactions. According to BLM officials, 8 of the 11 current contract holders already disclosed the required information, and the officials plan to require the remaining 3 contract holders to disclose the information by the end of fiscal year 2015. In addition, BLM officials and some representatives of nonrefiners told us that limiting a survey to qualifying transactions may result in a price recommendation that reflects BLM’s crude helium price rather than the broader market. Nevertheless, an October 2013 helium market pricing report recommended that BLM hire a third party to conduct a survey with a scope broader than just the qualifying transactions to help BLM set a price that is more market based. 
Specifically, this report recommended that BLM survey a significantly larger number of transactions than the qualifying transactions, including bulk helium transactions conducted by end users that are not storage contract holders and that involve smaller volumes of helium than the minimum volume for qualifying transactions. According to the pricing report and economic principles, a broader survey would provide a better representation of market prices than a survey solely of qualifying transactions. In determining the scope of the survey, BLM officials are weighing the act’s price-setting provision, which calls for surveying qualifying transactions, against the pricing report’s recommendation to survey a larger number of transactions that would reflect a broader market. As of February 2015, BLM officials told us they are considering having a third party conduct a broader survey that is not restricted to qualifying transactions, but they have not identified how, if at all, they would use information collected about transactions other than qualifying transactions. BLM officials told us that they are considering increasing the amount of helium the agency will auction for fiscal year 2016 above the amount set in the act. Under the act’s auction quantities provision, BLM is required to auction 25 percent of the total helium available for sale or auction for fiscal year 2016, a 15 percentage point increase over fiscal year 2015, but the agency can reduce or increase that amount under certain circumstances. For fiscal year 2016, auctioning 25 percent of the available helium would mean auctioning nearly 200 million cubic feet, more than double the volume auctioned for fiscal year 2015. The act authorizes BLM to increase the percentage of helium to be auctioned beyond the amount specified in the act if the Secretary of the Interior determines it is necessary to increase participation in the auction or increase returns to the taxpayer. 
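The quantity figures above can be checked with quick arithmetic, and the revenue stakes of a larger auction turn on how buyers respond to prices. The sketch below uses the volumes and the $161 average price reported here; the constant-elasticity demand curve and the elasticity values are hypothetical assumptions for illustration, not BLM data.

```python
# Quick check of the auction-quantity figures, plus an illustrative look at
# how buyer price response could affect revenue. Volumes and the $161 average
# price come from the report; the demand curve and elasticities are assumed.

FY2015_VOLUME_MMCF = 93      # million cubic feet auctioned in fiscal year 2015
FY2015_AVG_PRICE = 161.0     # dollars per thousand cubic feet
FY2016_VOLUME_MMCF = 200     # "nearly 200 million cubic feet" at 25 percent
AUCTION_SHARE = 0.25         # the act's fiscal year 2016 auction percentage

# Implied total helium available for sale or auction in fiscal year 2016
# (~800 million cubic feet), and the growth over fiscal year 2015 (>2x).
implied_total_mmcf = FY2016_VOLUME_MMCF / AUCTION_SHARE
growth_factor = FY2016_VOLUME_MMCF / FY2015_VOLUME_MMCF

def revenue(price, elasticity, base_price=FY2015_AVG_PRICE,
            base_volume_mmcf=FY2015_VOLUME_MMCF):
    """Revenue in dollars under a hypothetical constant-elasticity demand."""
    volume_mmcf = base_volume_mmcf * (price / base_price) ** (-elasticity)
    return price * volume_mmcf * 1000  # 1 MMcf = 1,000 thousand cubic feet

# With elastic demand (elasticity > 1), a 10 percent price increase reduces
# revenue; with inelastic demand (< 1), the same increase raises revenue.
baseline = revenue(FY2015_AVG_PRICE, elasticity=1.5)
elastic = revenue(FY2015_AVG_PRICE * 1.10, elasticity=1.5)
inelastic = revenue(FY2015_AVG_PRICE * 1.10, elasticity=0.5)
```

Which outcome occurs depends entirely on the assumed elasticity, which is the kind of market information and buyer-behavior prediction the report notes BLM had not yet obtained from its economists.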
BLM officials said they are considering such an increase because they believe that auctioning larger volumes of helium will result in increased revenues and increased competition. In December 2014, BLM compared different possible scenarios— including varying the percentage of helium to be auctioned—to identify different revenue outcomes. However, BLM did not take into account the current market conditions and the willingness of buyers to continue purchasing federal helium as prices increase. Depending on how buyers’ willingness to purchase federal helium is affected by price changes, BLM’s estimates of revenues from certain scenarios may not be realized. According to economic principles, buyers respond to price changes by changing the amount they purchase. For example, even small increases in price could result in a large drop in the quantity purchased and a corresponding decline in revenue. BLM officials told us that they were considering consulting BLM economists to help them select an auction percentage. However, as of February 2015, BLM officials had not obtained market information or predictions of buyer behavior from the economists to inform their decision. BLM’s fiscal year 2015 auction was a live, in-person auction conducted in Amarillo, Texas, and the agency broadcast the auction in real time over the Internet for public viewing. BLM split the total volume available for auction into 12 lots, auctioned sequentially. As of January 2015, BLM officials said they intend to use the same method for the fiscal year 2016 auction, but the final method will be announced in a Federal Register notice expected in the spring of 2015. The act requires BLM to conduct each auction using a method that maximizes revenue to the federal government. Representatives from some of the refiners and nonrefiners that participated in the auction told us they had concerns about BLM’s auction method. 
For example, a representative from one nonrefiner questioned whether holding a sequential live auction would yield the highest revenues. BLM officials told us they considered multiple auction methods when choosing the live auction, but that they did not assess the auction methods based on maximizing revenue. Instead, they determined which method would be most logistically practical to administer. For example, they told us that they were concerned about holding an Internet-based auction because they did not want potential technological difficulties to disrupt the auction or prevent a company from participating. Also, BLM officials said they were familiar with the live auction method because BLM uses it in other applications, such as in selling oil and gas leases. However, BLM economists told BLM helium program officials and us that there are several academic studies on different auction methods used in the past by Interior. These methods included sealed bid auctions and auctions where all lots were auctioned simultaneously rather than sequentially. BLM economists said that these academic studies could help identify an auction method that maximizes revenue. As of February 2015, however, BLM helium program officials had not evaluated the various methods. Without assessing each method based on revenue generation, BLM does not have assurance that the live auction method will maximize revenue, as required by the act. For the upcoming fiscal year 2016 sale, BLM faces decisions about how to set the sale price and determine whether a new company connecting to the pipeline will be eligible to participate in the sale. Regarding setting the sale price, BLM officials said they are considering changing how they calculate the sale price, in part to make the fiscal year 2016 auction more competitive. 
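The mechanics of such a change can be sketched simply. The report notes that the average auction price accounted for 10 percent of the fiscal year 2015 sale price; the sketch below shows how raising that weight pulls the sale price toward the auction price. The base component standing in for the remainder of the calculation is a hypothetical placeholder, not a BLM figure.

```python
def sale_price(auction_avg, base_component, auction_weight):
    """Weighted-average sale price (illustrative sketch).

    The report states the fiscal year 2015 average auction price accounted
    for 10 percent of the fiscal year 2015 sale price; base_component, which
    stands in for the remaining share, is a hypothetical placeholder here.
    """
    return auction_weight * auction_avg + (1 - auction_weight) * base_component

AUCTION_AVG = 161.0  # fiscal year 2015 average auction price, $/thousand cu ft
BASE = 106.0         # hypothetical non-auction component of the calculation

low_weight = sale_price(AUCTION_AVG, BASE, 0.10)   # 10 percent weighting
high_weight = sale_price(AUCTION_AVG, BASE, 0.50)  # heavier auction influence
```

Raising the weight narrows the gap between what buyers pay at the auction and at the sale, which is the refiner advantage BLM officials said they hope to eliminate.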
Specifically, BLM officials said they are evaluating whether to give greater consideration to the fiscal year 2016 average auction price when setting the fiscal year 2016 sale price than they did when setting the fiscal year 2015 sale price. As previously discussed, BLM used the fiscal year 2015 average auction price to account for 10 percent of the fiscal year 2015 sale price. BLM officials said that they believe that increasing the extent to which the auction price influences the sale price should eliminate one advantage that refiners might have over nonrefiners at the auction. The officials said they think refiners might not bid as high at the auction if they expected to be able to buy helium at a lower price at the sale. In December 2014, BLM officials examined the potential effects of changing how they calculate the sale price—in addition to changing the auction percentage, as previously discussed. However, as with its consideration of different auction percentages, when examining the effects of changing the sale price calculation, BLM did not take into account current market conditions or buyers’ willingness to continue purchasing federal helium as prices increase. As of January 2015, BLM officials said they had not obtained such information to inform their decision. Further, BLM faces a decision regarding whether new companies connecting to the pipeline will be eligible to participate in the sale as refiners. As previously discussed, one company has initiated the process of connecting a new, small refinery to the BLM pipeline. However, BLM officials said it is not clear whether the company meets the act’s definition of a refiner. The act defines a refiner as a person with the ability to take delivery of crude helium from the BLM pipeline and refine the crude helium into pure helium. The act, however, does not define pure helium. 
Therefore, BLM faces decisions on what constitutes pure helium and whether the company meets that definition. Further, if BLM determines that the new company is a refiner and is eligible to participate in sales reserved for refiners, BLM officials said they will need to identify a new method for determining the amount of helium each refiner will be eligible to purchase in the sales. Currently, BLM allocates the helium it makes available in each sale among the four refiners based on their 2000 refining capacities. With the addition of a new refiner, BLM officials said they are considering alternate methods for future sales. BLM officials said they anticipate that new storage contracts, which govern storage, withdrawal, and delivery of helium from the federal storage reservoir, will go into effect on October 1, 2015. The officials said they are considering changing a number of the terms and conditions in the new contracts. For example, BLM officials said they plan to create a new contractual fee structure. BLM officials explained that they calculate fees under the current contract based on the total amount necessary to recover BLM’s costs. However, for the first time, the act requires BLM to impose a fee that “accurately reflects the economic value” of the storage, withdrawal, and transportation (which we refer to as delivery in this report) services provided, and the fee cannot be less than the amount required to reimburse the Secretary of the Interior for the full costs of providing such services. When calculating the economic value of storing helium in the federal reservoir, BLM officials told us they are considering calculating companies’ storage fees based on the volume of helium they keep in storage, in part to encourage companies to withdraw their helium rather than store it. BLM officials also said that they researched storage fees charged at commercial natural gas storage facilities to help determine the economic value of storing helium. 
The officials told us that commercial natural gas storage fees are a useful point of comparison for federal helium storage fees because the federal helium reservoir functions similarly to a commercial storage facility. However, according to representatives of nonrefiners and a refiner, because the federal storage reservoir provides a unique opportunity for companies to store their purchased helium until they request its delivery at a later date, the reservoir’s storage capability has economic value in addition to the value associated with commercial natural gas storage facilities. BLM officials said that the higher helium content of the gas stored in the federal reservoir, compared with gas at other storage facilities, is irrelevant when considering storage fees. BLM officials said they are also considering including new fees in the new contracts, in part to recover costs that are not currently being captured. According to BLM officials, one such fee would allow BLM to recover costs associated with refiners who do not accept delivery of helium after they request it from BLM. Currently, refiners have the option of not accepting delivery of requested helium, which has led BLM to reinject undelivered helium from the pipeline back into the reservoir. BLM officials said they reinjected approximately 46 million cubic feet of helium from May to November 2014, in part, because refiners did not accept delivery of all helium BLM delivered into the pipeline. The officials said they anticipate that reinjection will continue to some extent. This reinjection negatively affects BLM’s ability to maximize withdrawal of helium from the reserve, according to BLM officials. For example, as a result of past and continuing reinjections, BLM technical consultants estimated that the agency will be able to produce roughly 500 million cubic feet less helium from the reservoir than originally anticipated by the end of fiscal year 2021. 
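The fee approach under consideration, a volume-based storage charge subject to the act's requirement that fees be no less than full cost recovery, can be sketched as follows. The per-volume rate and the cost figure are hypothetical assumptions for illustration, not BLM numbers.

```python
def storage_fee(stored_mmcf, value_rate_per_mmcf, full_cost_share):
    """Volume-based storage fee with a full-cost floor (sketch).

    The act requires the fee to reflect the economic value of storage,
    withdrawal, and delivery services, and to be no less than the amount
    needed to reimburse full costs. Both inputs here are hypothetical.
    """
    return max(value_rate_per_mmcf * stored_mmcf, full_cost_share)

# A company keeping little helium in storage still pays the cost-recovery
# floor; larger stored volumes pay the value-based charge, which creates an
# incentive to withdraw helium rather than leave it in the reservoir.
small_storer = storage_fee(10, value_rate_per_mmcf=5_000, full_cost_share=100_000)
large_storer = storage_fee(50, value_rate_per_mmcf=5_000, full_cost_share=100_000)
```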
BLM officials told us that they are continuing to evaluate the new fee structure as part of the negotiations over the new contracts, which they expect will continue into the spring of 2015. BLM’s implementation of the Helium Stewardship Act of 2013 is a work in progress. BLM has implemented the first phase of the act and taken initial steps to ensure that refiners satisfy the act’s tolling provision. However, additional information about (1) refiners’ signed agreements to toll less than 15 million cubic feet and (2) the attempts refiners have made to negotiate tolling agreements that did not result in signed agreements would provide BLM with better assurance that refiners are satisfying the tolling provision. BLM currently relies on the voluntary reporting of this information, but not all refiners have reported it. Requiring refiners to report this information may necessitate BLM undertaking a lengthy rulemaking, according to officials in Interior’s Office of the Solicitor, but other options may be available for the agency to shorten the rulemaking process if, for example, the conditions for issuing an interim final rule have been satisfied. Without information about signed agreements to toll less than 15 million cubic feet of helium and about refiners’ unsuccessful attempts to negotiate tolling agreements, BLM cannot determine the extent to which refiners are satisfying the tolling provision by making excess capacity available at commercially reasonable rates. BLM is to select a method for conducting the fiscal year 2016 auction, and agency officials said they plan to use the same live auction method the agency used to conduct the fiscal year 2015 auction. The act requires BLM to use an auction method that maximizes revenue. However, the agency did not assess the auction methods it considered based on maximizing revenue. 
Several academic studies that examined different auction methods used previously by Interior are available for helium program officials to consult to help BLM identify an auction method that maximizes revenue. Without assessing each method based on revenue generation, BLM does not have assurance that the live auction method will maximize revenue, as required by the act. To provide the agency with better information to support its decisions when implementing the act, we recommend that the Secretary of the Interior direct the Director of BLM to take the following two actions: issue a rule—perhaps an interim final rule if BLM finds there is good cause to do so, given the time constraints—to require refiners to report information about signed agreements to toll less than 15 million cubic feet of helium and about refiners’ attempts to negotiate tolling agreements that do not result in signed agreements; and assess auction methods based on revenue generation, using available information, and select a method that would maximize revenue for the upcoming helium auction. We provided a draft of this report for review and comment to the Department of the Interior. In its written comments, reproduced in appendix I, Interior generally agreed with our findings and concurred with our second recommendation to assess auction methods and select the method that would maximize revenue, but it did not concur with our first recommendation, to issue a rule requiring refiners to report certain information about signed tolling agreements and attempts to negotiate tolling agreements. 
In its written comments, Interior stated that existing mechanisms are providing BLM with sufficient information for the agency to administer the tolling provision, and that BLM is not in a position to develop a rule due to reduced resources, current workloads, and other high priority rulemakings and initiatives in which the agency is engaged. Also, Interior stated that the expense and time necessary to undertake a rule outweigh any immediate benefit and that given the amount of time it is likely to take to promulgate the rule, the federal helium program would likely be nearing its conclusion by the time such a rule is in place. We do not agree that existing mechanisms are providing BLM with the information it needs to have full assurance that refiners are satisfying the tolling provision. BLM has obtained some of the relevant information from refiners. However, refiners’ reporting of certain information—specifically, signed agreements to toll less than 15 million cubic feet and their attempts to negotiate tolling agreements that did not result in signed agreements—is voluntary, and not all refiners provided this information to BLM. We continue to believe that BLM needs this information to determine the extent to which refiners are satisfying the tolling provision. We recognize that Interior and BLM must consider current workloads and other priorities when determining how to expend limited resources. However, if BLM does not issue a rule to require refiners to report this information, the agency cannot determine the extent to which refiners are making excess capacity available at commercially reasonable rates. As described in the report, BLM may have options for shortening the rulemaking process, which could reduce the resources necessary to issue a rule. 
Even if BLM cannot shorten the rulemaking process by, for example, issuing an interim final rule, BLM will continue implementing the act through fiscal year 2021, and the agency’s administration of the tolling provision could continue to affect nonrefiners’ participation in the auctions. We continue to believe that undertaking a rulemaking is necessary so that BLM can have better assurance that refiners are satisfying the tolling provision throughout the agency’s implementation of the act. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Interior, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to the report are listed in appendix II. In addition to the individual named above, Jeff Malcolm (Assistant Director), Cheryl Arvidson, Carol Bray, Cheryl M. Harris, Josie H. Ostrander, Leslie Kaas Pollock, Dan Royer, and Jeanette Soares made significant contributions to this report.
Helium is a key nonrenewable resource with a variety of uses. The federal government maintains an underground reservoir near Amarillo, Texas, for the storage of both federally owned helium and helium owned by private companies. The Helium Stewardship Act of 2013 establishes a phased process for the privatization of the federal helium reserve in a competitive market fashion. As part of that process, BLM conducted an auction and two sales of federal helium in the summer of 2014. GAO was asked to assess BLM's implementation of the act. This report examines (1) the outcomes of BLM's summer 2014 helium auction and sales, (2) BLM's administration of the act's tolling provision (tolling refers to a helium refiner processing or refining another party's crude helium for an agreed upon price), and (3) upcoming decisions BLM faces as it continues implementing the act. GAO reviewed the 2013 act, BLM's auction and sales results, and tolling agreement reports; interviewed BLM and other Interior officials and representatives of 12 of the 13 refiners and nonrefiners that registered to participate in the auction. In the summer of 2014, refiners purchased all the helium offered in the Department of the Interior's Bureau of Land Management's (BLM) first-ever competitive helium auction at higher than expected prices. Two refiners purchased all 93 million cubic feet of helium that was auctioned at an average price of $161 per thousand cubic feet—significantly above the prices offered by most other bidders. BLM, refiners, and nonrefiners identified possible reasons for the auction's outcome, including that refiners had an advantage at the auction because their costs for refining crude helium were lower than those of nonrefiners. After the auction, BLM sold more than 1 billion cubic feet of helium in two sales that were restricted to refiners. Since BLM used the average auction price to help set the sales price, the sales price also was higher than expected. 
BLM has taken steps to help improve reporting by refiners, but the agency does not have full assurance that refiners are satisfying the tolling provision. The tolling provision requires refiners, as a condition of sale or auction, to make excess refining capacity available at commercially reasonable rates to certain nonrefiners. BLM officials said that one way refiners can satisfy the tolling provision is to attempt to negotiate tolling agreements. The act does not require refiners to report information to BLM about their attempts to negotiate agreements that do not result in signed agreements, so the reporting of this information is voluntary. BLM requested that refiners report this information, but the refiners' responses were inconsistent. For example, some refiners reported that they had attempted to negotiate agreements but did not report details about volume or rates offered. Officials from Interior's Office of the Solicitor said BLM may need to issue a rule to require refiners to report about their attempts to negotiate tolling agreements. However, BLM officials said they do not intend to issue such a rule because it is a time-consuming process that might delay future auctions and sales. Nevertheless, without information about refiners' attempts to negotiate agreements, BLM cannot determine the extent to which refiners with excess capacity are satisfying the tolling provision. BLM faces a number of decisions about its continued implementation of the act, including decisions related to the auction of a portion of the helium BLM will make available for delivery during fiscal year 2016. Specifically, BLM officials said they plan to contract with a third party to conduct a survey of helium transactions that will form the basis for the fiscal year 2016 minimum auction price, but they have not determined the scope of the survey. 
Also, BLM officials said they are considering increasing the amount of helium the agency will auction for fiscal year 2016 above the amount set in the act because they think it will increase competition at the auction. In addition, BLM faces a decision in selecting a method for conducting the fiscal year 2016 auction. The act requires BLM to use an auction method that maximizes revenue. BLM officials said they considered multiple methods before selecting the live auction method used for the agency's first auction, but they did not assess the methods based on maximizing revenue. As of February 2015, BLM officials had not evaluated various methods, such as sealed bids or simultaneously auctioning multiple lots. Without assessing auction method options based on revenue generation, BLM does not have assurance that a live auction will maximize revenue as required. GAO recommends that BLM (1) issue a rule to, among other things, collect information about refiners' attempts to negotiate tolling agreements and (2) assess and select an auction method that would maximize revenue. Interior disagreed with the first recommendation because it believes existing mechanisms provide needed information, and agreed with the second. GAO continues to believe that its recommendation is valid.
The growth rate of crude oil and natural gas demand in the United States has outpaced the growth rate of the country’s crude oil and natural gas production over the last 20 years. This gap is projected to widen at an accelerating rate. As shown in figure 1, EIA forecasts that this trend for crude oil will continue through 2030. Natural gas demand, as shown in figure 2, has similarly outpaced natural gas production, and EIA forecasts that this trend will also continue. This widening gap between U.S. domestic energy production and consumption of oil and natural gas has focused attention on the importance of these commodities to the U.S. economy. The United States’ most recent “National Energy Policy” report, issued in May 2001, outlines several U.S. energy security objectives that are relevant for international energy cooperation. The report states that the United States should work cooperatively with key countries and institutions to expand sources and types of supply, enhance the transparency and efficiency of markets, strengthen U.S. capacity to respond to disruptions, promote international trade and investment in the energy sector, and enhance emergency preparedness, among other goals. Several recommendations outlined in the “National Energy Policy” report provide guidance for the United States as it engages in multilateral and bilateral forums and discussions designed to enhance U.S. 
energy security, such as the following: Work with the IEA to ensure that member states fulfill their stock-holding commitments and encourage major oil-consuming countries that are not IEA members to consider strategic stocks as an option for addressing potential supply disruptions; Work with producer and consumer country allies and the IEA to craft a more comprehensive and timely world oil data reporting system; Use membership in multilateral organizations, such as APEC, and bilateral relationships to implement clear, open, and transparent rules and procedures governing foreign investment and reduce barriers to trade and investment; Engage in a dialogue through NAEWG to develop closer energy integration among Canada, Mexico, and the United States; and Assist U.S. companies in their dialogue with Russia on investment and trade and improve the overall investment climate. The United States pursues energy cooperation through international energy forums that meet specific cooperative purposes. These forums range from formal institutions with binding obligations to regional associations to more informal gatherings designed to facilitate a frank exchange of information. Information related to these forums is summarized below in table 1. IEA was established in November 1974 by most of the members of OECD, the major industrialized democracies that were generally also the largest consumers of oil, and today has 26 members. It was a collective response to energy security concerns arising from the oil embargo imposed by the Organization of the Petroleum Exporting Countries (OPEC) the previous year to reduce the vulnerability of IEA members to a major disruption in oil supplies. IEA’s primary mission was to respond to any future oil crisis through a binding emergency preparedness system that established emergency oil reserves equivalent to 90 days of members’ net imports, countering any future threat of an oil embargo. 
In addition, it collects and analyzes oil market data in order to increase oil market information and transparency; assesses member countries’ domestic energy policies and programs; makes projections based on differing scenarios; and prepares studies and recommendations on specialized energy topics. IEA’s goals have evolved over the years as the energy market has changed; today it focuses its emergency planning less on the threat of embargoes and more on supply disruptions that might arise from natural disasters, wars, or terrorist acts. More importantly, as the structure of the oil market has changed over the years, IEA’s emergency response measures have also evolved from a government emergency allocation program to market-based measures, according to a DOE official. IEA’s release of oil reserves in response to Hurricane Katrina in September 2005 is an example of its current focus. In addition to emergency preparedness measures, IEA also emphasizes outreach to nonmember countries, reducing dependence on oil through alternative energy and advanced technology, and integrating environmental and energy policies. Recently, IEA has also recognized that it needs to enhance its expertise related to the growing global natural gas market. IEA is an autonomous international organization based in Paris, France, created within the framework of the OECD in order to implement the treaty establishing it. IEA’s main decision-making body is the Governing Board, composed of senior energy officials from each member country and meeting about four times per year. Day-to-day operations are conducted by the IEA Secretariat, headed by an Executive Director and comprising a professional staff of about 150 energy experts drawn from member countries. IEA also receives the input of the IEA Industry Advisory Board, which has private sector representatives from member countries and meets three to four times a year. 
The United States is significantly involved in IEA activities, according to U.S. and IEA officials. The Deputy Executive Director is traditionally an American. The DOE Assistant Secretary for Policy and International Affairs and the Department of State Deputy Assistant Secretary for Energy, Sanctions, and Commodities both serve on the Governing Board and play an active role. U.S. energy officials participate on almost every standing group and committee as either a Chair or Vice-chair. In addition, the United States has historically provided about 25 percent of IEA’s annual budget, which amounted to $5.5 million in 2006, according to a Department of State official. The APEC Energy Working Group, comprised of 21 Asian Pacific economies accounting for 60 percent of world energy demand, is a voluntary regional effort that seeks to build consensus on energy policy issues, primarily through sharing best practices and technology insights. This working group includes both net energy consuming countries, such as the United States, Japan, and China, and net energy producing countries, such as Russia and Indonesia. It was launched in 1990 to develop a program for energy cooperation. It seeks to maximize the energy sector’s contribution to the region’s economic and social well-being, while mitigating the environmental effects of energy supply and use. Its objectives include strengthening the security and reliability of affordable energy to all members, and promoting clean and efficient technologies and the efficient use of energy to achieve both economic gains and environmental enhancement. APEC Energy Ministers’ meetings, generally held every 2 years, provide the Energy Working Group with political guidance regarding its activities. The APEC Energy Working Group has its own Secretariat in Australia, which has been financially underwritten and staffed by the Australian government. 
The Energy Working Group, generally composed of member government energy officials, meets twice a year. It receives an update on the activities of the five expert groups, which focus on clean fossil energy, efficiency and conservation, energy data and analysis, new and renewable energy technologies, and minerals and energy exploration and development. It also guides the work of the Asia Pacific Energy Research Centre, an international organization based in Tokyo that receives the bulk of its financial support from the Japanese government. Finally, it is advised by the Energy Working Group Business Network, which provides a private sector perspective on key energy issues affecting the region. The APEC Energy Working Group fosters discussion of members’ energy policies and planning priorities, sharing basic energy demand and supply outlook data, considering regional energy policy implications, and responding to wide-reaching energy-related issues. Recent efforts include its Energy Security Initiative, which comprises both short-term measures designed to respond to temporary energy supply disruptions and longer-term policy responses designed to address the broader challenges facing the region’s energy supply. It has also focused on development of the Asia-Pacific natural gas market, particularly for liquefied natural gas (LNG). The United States generally sends two delegates from DOE’s Office of Policy and International Affairs and an observer from the Department of State to the Energy Working Group meetings. DOE staff also participate on the various expert groups. NAEWG is a trilateral regional forum—including the United States, Canada, and Mexico—focused on developing an open, efficient, and transparent North American energy market. 
The forum pursues this focus by emphasizing efforts such as greater regulatory cooperation, encouraging energy data and information exchange, collaborating on energy science and technology, and examining natural gas trade and interconnections. NAEWG was established and initially led by the three Energy Ministers of Canada, Mexico, and the United States in its inaugural meeting in June 2001. Natural Resources Canada, the Mexican Secretariat of Energy, and the U.S. DOE jointly chair NAEWG, with day-to-day U.S. leadership now provided at the Assistant Secretary level. DOE’s Assistant Secretary of Energy for Policy and International Affairs is the U.S. lead, while both the Department of Commerce and the Department of State support the effort at the Deputy Assistant Secretary level. The agenda of work identified at the ministerial level is carried out by nine expert working groups. Members of these expert working groups share their policy, regulatory, and technical expertise and energy statistics from the three countries. According to DOE officials, the products of this work are enhanced regulatory cooperation, such as on project siting issues; workshops on various energy issues; and joint public written documents produced by the expert working groups. For example, in 2005, NAEWG published the “North America Natural Gas Vision,” a report addressing the region’s natural gas regulations and policies, production and consumption, trade, transportation, and supply and demand projections. Each expert working group also consults informally with energy industry representatives to support numerous subject-area workshops and to obtain private sector input on issue areas. IEF—formerly known as the “Producer-Consumer Dialogue”—is a unique forum established to facilitate dialogue on energy security issues between producing and consuming countries. IEF provides the largest recurring global gathering of Energy Ministers, with over 60 countries participating. 
The IEF Ministerial is held every 2 years, rotating in location, and is a venue for Energy Ministers to discuss energy security issues. IEF does not serve as a decision-making organization or a forum for negotiating formal agreements. However, according to Department of State and DOE officials, U.S. participation at the senior staff level has increased since 2000 in recognition of IEF’s value in allowing for informal, frank, and wide-ranging exchange of information. IEF activities in addition to the Ministerial dialogue include JODI and the International Energy Business Forum. JODI is a recent initiative to establish a world oil database, originally combining the efforts of six international organizations including APEC and IEA. The International Energy Business Forum serves as a venue for Ministers to meet with industry representatives prior to the IEF Ministerial and had over 30 companies participating in 2006. The Ministerial dialogue, JODI, and the International Energy Business Forum are now facilitated by the IEF Secretariat, which was established in December 2003 and is headquartered in Riyadh, Saudi Arabia. In addition to participating in IEA, the APEC Energy Working Group, NAEWG, and IEF, the United States also participated in the July 2006 Group of Eight (G-8) Summit hosted by Russia, which served as an ad hoc forum addressing the need for international energy cooperation. The United States also pursues international cooperation through bilateral energy cooperation efforts. We reviewed U.S. bilateral energy cooperation efforts with Canada, China, India, Mexico, and Russia. Information related to these forums can be found in appendix II. Three key energy market issues that are important for U.S. efforts in international energy cooperation in the oil and natural gas sectors are: a tight energy market, growing market participation of national oil companies, and increased importance of reliable energy market information. 
World energy demand has risen in recent years, particularly from major developing countries, at the same time that supply has become more constrained and more susceptible to disruptions—resulting in a tight energy market characterized by higher prices. During most of the 1990s, real crude oil prices (in 2003 dollars) fluctuated around $20 a barrel. While crude oil prices started edging up with the economic recovery and production cuts at the end of the 1990s, upward price pressures became pronounced during 2003-2004. These market conditions contributed to world crude oil prices increasing by more than two-and-a-half times, from about $30 a barrel in early December 2003 to a peak of about $77 a barrel around mid-July 2006. While prices dropped by around $20 a barrel in the 3 to 4 months following this peak, several energy experts believe that the fundamentals of the tight market still exist and are a cause for continuing concern. In recent years, rapid growth in energy demand by major developing countries, such as China and India, and continued steady growth of demand by many industrialized nations have contributed to tighter oil markets. The main consumers of oil continue to be the advanced economies. The United States, OECD Europe, and Japan together account for about half of annual global oil consumption. However, consumption in the major developing countries has generally been increasing at a faster pace. China, in particular, has gained prominence because its demand has grown so fast. One expert noted that China’s demand in 2004 rose by an extraordinary 16 percent compared with 2003 and served as a “demand shock,” or unexpected surge in demand. From 2000 to 2004, total world demand for oil grew by about 8 percent, increasing from nearly 77 million barrels per day to about 82 million barrels per day. China’s demand for oil rose by 33 percent over this period, followed by India’s growth in demand of 15 percent, while U.S. 
demand increased by about 5 percent, and OECD Europe by about 2 percent. The data used to measure both oil demand and supply are subject to limitations described later in this report, including lack of timeliness and transparency, definitional inconsistencies, and national sensitivities. The estimates provided represent the broad trends from the most current market information used in forecasting and determining cost. Table 2 shows the top world consumers of oil—countries that consumed more than 2 million barrels per day—with their level of demand and the percentage change from 2000 to 2004, as well as their share of the world oil market in 2004. The United States far exceeds the rest of the world in its volume of consumption, accounting for a quarter of world demand, with about 21 million barrels per day in 2004. Most of U.S. oil demand arises from usage in the transportation sector. China’s demand surpassed Japan’s in 2003, and it became the second largest consumer of oil, with about 6 million barrels per day, or about 8 percent of world demand. India’s demand is also growing quickly. It consumed about 2 million barrels per day, the sixth highest level of demand. As demand has risen, so have oil import needs. For instance, while the United States produced almost 9 million barrels per day of oil in 2004, making it the third largest world producer, its production met only 42 percent of its demand, with net oil imports of about 12 million barrels per day meeting the remaining 58 percent of demand. China’s import dependence has also grown, and it imported about 45 percent of its oil in 2004. Figure 3 shows the top world net oil importers in 2004, countries importing more than 1 million barrels per day net. Of these 9 countries, 6 were totally or almost totally import dependent for their oil consumption. 
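The import-dependence arithmetic above is a simple share calculation: production plus net imports equals demand, and the imports' share of demand is the dependence level. The sketch below is illustrative only; the input figures approximate the report's 2004 U.S. estimates (almost 9 million barrels per day of production, about 12 million of net imports), and the exact underlying data may differ slightly.

```python
# A minimal sketch of the import-dependence calculation, assuming
# approximate 2004 U.S. figures (million barrels per day) as inputs;
# the report's exact underlying estimates may differ slightly.
def import_dependence(production_mbd: float, net_imports_mbd: float) -> float:
    """Return the share of oil demand met by net imports, in percent."""
    demand_mbd = production_mbd + net_imports_mbd
    return 100.0 * net_imports_mbd / demand_mbd

share = import_dependence(production_mbd=8.7, net_imports_mbd=12.0)
print(f"Net imports met roughly {share:.0f}% of U.S. oil demand in 2004")
```

The same calculation, applied to any country's production and net-import estimates, yields the import-dependence shares cited throughout this section.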
For instance, Japan and South Korea were totally dependent on imports, predominantly from the Persian Gulf, while many European countries also imported from Algeria, Libya, and Nigeria. The largest net oil exporter to the United States was Canada, followed by Mexico, Saudi Arabia, and Venezuela. While the world supply of oil and refined products has risen to meet increased demand, supply constraints have also increased, eroding certain market cushions and contributing further to market tightness. Increased political or energy sector frictions in countries such as Iran, Iraq, Nigeria, and Venezuela and decreased spare crude oil production capacity have exerted pressure on crude oil markets. Given the tight market situation, marked by less spare production capacity and other cushions, any oil supply disruption can cause the price of oil to rise dramatically. One factor contributing to constrained oil supplies is that political or energy sector friction in key oil-producing nations has led to supply disruptions and, in some cases, diminished production capacity. Participation by international oil companies in the oil sector has been affected by political tensions in Iraq, Venezuela, and Nigeria, and economic sanctions on Iran and Libya. For example, in April 2006, Venezuela seized two oil fields operated by two foreign oil companies because the companies did not comply with new rules imposed by the Venezuelan government. In Nigeria, recent disruptions due to militant actions have shut in about 650 thousand barrels per day of production. A second contributing factor is that world production of oil is 84 to 85 million barrels of oil per day, and the rate of production increase has not kept pace with the rate of increased demand. Furthermore, there is very little spare production capacity given existing infrastructure. 
Spare oil production capacity—the ability to produce extra barrels in the short term—is a key market cushion for responding to market disruptions. Since the mid-1980s, growth in world oil production capacity has lagged relative to growth in global oil demand, with the result that spare capacity has declined from a high in recent times of 5.6 million barrels per day in 2002 to between 1 and 1.3 million barrels per day today. Most of this spare capacity is held within Saudi Arabia. While in previous oil supply disruptions the U.S. government has been able to negotiate with senior officials in Saudi Arabia and other oil-producing countries to increase their supply of crude oil, many oil industry officials, experts, and U.S. government officials said that today such efforts would be less effective given the limited levels of spare oil production capacity in world markets. Downstream investment in pipelines and tankers has also lagged behind the growth in global oil demand in recent years, contributing to potential bottlenecks. Additionally, private inventories of oil have been in a long-term declining trend, in part because of a trend toward just-in-time inventory, according to energy experts. Oil production is capital-intensive and heavily dependent on continuous investment to maintain existing wells, drill new wells for crude oil production, and develop and maintain the infrastructure supporting the production network. Extensive investment in the oil sector will be required to meet future oil demand and maintain spare capacity, according to energy experts. Looking ahead, there are additional uncertainties related to future supplies of oil. Expected new supplies of crude oil may be in places that are difficult to access and could involve high extraction and processing costs, as with offshore reserves and unconventional crude oils. 
There is also an ongoing peak oil debate—disagreement among oil market experts as to when the world will reach its level of peak production of conventional oil and then begin to decline. For a discussion of the growing role of natural gas in world energy markets, see appendix III. In an energy market characterized by relatively high oil prices and increasing energy demand, the growing participation and market influence of national oil and gas companies—which are majority owned by national governments—from both energy consuming and producing countries has contributed to limited access to oil and natural gas resources in some producing countries. National oil companies from producing countries already control about 90 percent of the world’s crude oil reserves, according to DOE. In contrast, the ability of the international oil and gas companies—the large, privately owned and publicly traded oil and gas industry entities—to maintain current production levels by replacing their energy assets with new reserves is affected by increasingly limited access to energy resources around the world. Additionally, access to capital and technical expertise by the national oil and gas companies of consuming countries has enabled them to compete with the international oil companies in the global energy markets. The impact of this industry shift is unclear, but some concerns have arisen over (1) the ability of some national oil and gas companies from consuming nations to efficiently bring energy resources to the market and (2) the constrained investment climates in some producing countries dominated by national oil and gas companies that may inhibit the investment necessary to ensure continued production and growth. The influence of the national oil and gas companies is perceived to be growing, as the ability of international oil and gas companies to replace their energy resource holdings becomes increasingly limited. According to DOE Secretary Samuel W. 
Bodman, in a speech to the National Petroleum Council in June 2006, 90 percent of the world’s untapped conventional oil reserves are controlled by governments and their national oil and gas companies, many of which are in politically unstable regions of the world. Figure 4 indicates that 7 of the top 10 companies are national or state-sponsored oil and gas companies, ranked on the basis of oil production. The three international oil companies that are among the top 10 are Exxon Mobil, BP, and Royal Dutch Shell. Ranked on the basis of oil reserve holdings, 9 of the top 10 companies are national or state-sponsored oil and gas companies. These top 10 oil and gas companies accounted for an estimated 42 percent of world daily oil production and an estimated 64 percent of world oil reserves holdings in 2004, based on EIA data for world estimates. Figure 5 shows a similarly strong position for the national or state-sponsored oil and gas companies, with respect to natural gas production and reserves holdings. These top 10 oil and gas companies accounted for an estimated 44 percent of world daily natural gas production in 2004 and an estimated 62 percent of world natural gas reserves holdings, based on EIA data for world estimates. Some agency officials and energy experts believe that, should some countries with national oil and gas companies continue to limit competition and investment opportunities in their energy sectors, the ability of international oil and gas companies to replace their energy resource holdings will become increasingly limited to locations marked by high geological, political, and financial risks. Competition among consuming countries to procure oil and gas assets has also been affected by the growing participation of national oil and gas companies. 
Some energy experts stated that increased access to capital, combined with increased access to technical expertise available for hire from third-party service companies, has allowed these consuming countries’ national oil and gas companies to compete with the international oil companies in the global marketplace for energy resources. Additionally, with political leverage and potential financial support provided from their governments, some national oil and gas companies may be willing to operate at a lower discount rate and a potentially lower profit margin. For example, there has been an increasing trend by some national oil and gas companies from energy consuming countries such as Brazil, China, India, and Malaysia to become active, competitive bidders for acquiring exploration rights to energy resources in other producing countries. Some experts say these national oil and gas companies may benefit from increased financial support and political leverage in negotiations with the host supplier countries. According to some agency officials and energy experts, there are two main concerns about participation of national oil and gas companies in the energy market. One concern is that some national oil and gas companies from consuming countries may not have the combination of capital, technical expertise, and managerial expertise necessary to efficiently and effectively develop certain oil and gas projects, preventing some of the production from getting to the global energy market in a timely manner. Some energy experts stated that a Chinese national oil and gas company, for example, may have the capital to compete for the rights to explore for energy resources but may not have the technology and managerial expertise to develop some projects. Additionally, agency officials and energy experts expressed concern over the impact of national oil and gas companies procuring energy assets based on national policy goals rather than on commercial market business strategy. 
Whereas international oil and gas companies typically seek to maximize returns, some national oil and gas companies’ operations may be driven primarily by their government’s energy policy interests and revenue requirements. This may result in (1) a national oil and gas company potentially preventing some or all of the production from the resource base under its control from getting to the global energy market in a timely way or (2) a national company entering into bilateral exploration for foreign policy purposes. Some energy experts added that, although there may also be a net gain in the resulting energy supply on the market due to increased activity by national oil and gas companies from consuming countries, the ability of some of these companies to bring those energy assets to market is varied and remains a concern. A second concern is that constrained investment climates in some producing countries’ energy markets will inhibit the investment necessary to ensure that continued production and growth are maintained by the country’s national oil and gas company. For some energy producing countries, the national oil and gas companies serve as a source of general government revenues and funding for social programs, and as a result there can be marked under-investment in the company relative to what is required to maintain the country’s energy output. In addition, some energy producing countries dominated by national oil and gas companies have failed to open their investment climates or reinvest sufficiently. Experts cited national oil companies in Russia, Mexico, Venezuela, and Indonesia as examples of oil sectors with constrained investment climates and insufficient government reinvestment in the energy sector. Energy officials and experts state that more reliable energy market information is an increasingly important element for market stability. 
The reliability of oil and natural gas market information is questionable due to systemic factors such as reporting delays, definitional differences, and lack of transparency. In a tight energy market, the negative impacts of uncertainty in market information on planning and on current and future needed investment are amplified. Energy experts and officials question the reliability of oil and natural gas market information in large part due to (1) concerns about historical demand and supply data based on a lack of timeliness in reporting, definitional differences, and national or industry sensitivities and (2) concerns about future demand and supply estimates based on unreliable historical data and insufficient transparency about projection assumptions and methodologies. For example, concerns about oil demand information include the following: Historical demand data: Uncertainty results from successive revisions of data, a lack of timeliness in reporting, and questionable reliability of data, particularly from rapidly growing non-OECD countries such as China and India. Final demand data are generally available about 16-20 months after the reference year. By the time final data are reported, initial estimates may have been revised repeatedly. EIA officials also question the basic reliability of demand data for non-OECD countries like China and India. For example, Chinese demand estimates are derived from “apparent demand”—the sum of estimated production and estimated net imports—or inferred from Chinese gross domestic product (GDP) growth estimates. As an example of the uncertainty that results from such methods, the EIA indicated that Chinese oil demand had grown by roughly 500,000 barrels per day in 2005, while a widely quoted Morgan Stanley report indicated that Chinese demand had declined. Projections: Uncertainty results from projected demand estimates that rely upon questionable historical data and that may not fully incorporate data revisions. 
For example, both the EIA and IEA use historical demand and estimated economic growth as a basis for their demand projections. However, both the EIA and IEA forecasts failed to anticipate the surge in Chinese and global demand growth in 2004 due to the poor quality of Chinese data. According to economic experts, uncertainty in future demand is further compounded by insufficient transparency in EIA and IEA methodologies for projecting impacts of a high-price future. Concerns about oil supply information include: Historical supply data: Uncertainty in production and stock (inventories) data results from the proprietary nature of the data, differences in definitions and conversion rates, and political sensitivities. According to the EIA, for example, OPEC countries often do not accurately report their current production levels. An EIA official reported that estimates of OPEC’s June 2006 crude oil production varied by over 700,000 barrels per day, from a low of 29.3 million barrels per day by Petroleum Intelligence Weekly to 30 million barrels per day by the IEA. Reliability of OPEC production data is further complicated by OPEC quotas that are based on estimated reserves, which are suspected to have been inflated in order to generate higher quotas. For Russia, swings of up to 100,000 barrels per day have occurred in its production data since Russian data do not break out gas condensate from oil production, and conversion rates for a combined stream are uncertain. Production data for some countries may be inferred from combining oil exports, oil demand, and changes in oil stocks. However, in addition to problems with demand data, oil stock data are incomplete and do not generally include stocks held in non-OECD countries (such as China or India, where stock data are considered a state secret) or in independent storage within OECD countries. 
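The "apparent demand" and production-inference estimates described above are simple accounting identities, which is why any error or revision in an input flows straight into the result. The sketch below uses entirely hypothetical figures, chosen only to show how disagreement in the inputs propagates into the demand estimate.

```python
# Sketch of the "apparent demand" method: demand is not measured directly
# but inferred as estimated production plus estimated net imports, so
# uncertainty in either input carries over one-for-one into the estimate.
# All figures below are hypothetical (million barrels per day).
def apparent_demand(production_mbd: float, net_imports_mbd: float) -> float:
    return production_mbd + net_imports_mbd

low_estimate = apparent_demand(production_mbd=3.4, net_imports_mbd=2.9)
high_estimate = apparent_demand(production_mbd=3.6, net_imports_mbd=3.2)
spread = high_estimate - low_estimate
print(f"apparent demand: {low_estimate:.1f}-{high_estimate:.1f} mbd "
      f"(spread of {spread:.1f} mbd from input disagreement alone)")
```

A spread of a few hundred thousand barrels per day from the inputs alone is enough to explain how one analyst can report demand growth while another reports a decline, as in the Chinese example above.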
In a previous GAO study, we found that missing stock data in IEA statistics, referred to as “missing barrels,” were present in 24 of 26 years between 1973 and 1998. Both IEA and EIA data for 1999 through 2005 still reflected these gaps. Projections: Uncertainty results from projected supply estimates that rely upon questionable historical data and an unknown level of oil reserves. For example, both the EIA and IEA use historical data as a basis of projecting future world demand and future non-OPEC supply. Then, both agencies assume that OPEC production will “fill the gap.” IEA and EIA projections call for around a 50 percent increase in current OPEC production, but there is growing debate over OPEC’s ability to meet this requirement. Supply projections are also based on widely debated estimates of oil reserves due to differences within and between industry and governments about the definitions and measurement of “known,” “proven,” “probable,” or “undiscovered” reserves, the impact of technology on those reserves, and the rate of decline in certain oil fields. According to energy experts, uncertainty in projected supply is further compounded by insufficient transparency in EIA and IEA assumptions about the impacts of high prices on future production. Many of the concerns about oil demand and supply data also apply to natural gas data. Both the EIA and IEA have indicated the need to improve the timeliness and accuracy of natural gas demand, production, and stock information. Data reliability issues occur due to the increasing number of participants in natural gas markets, unspecified exports due to a multitude of small players, large increases in inter-regional trade and the loss of trade origin, longer supply chains, and industry sensitivities in response to increasing market competition. Reliable energy market information is important for reducing price volatility and facilitating planning and needed investment. 
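The "fill the gap" projection method described above reduces to a residual calculation: projected OPEC production is whatever remains after projected non-OPEC supply is subtracted from projected world demand. The sketch below uses hypothetical figures, chosen only to show how such a residual can imply roughly a 50 percent increase over current OPEC output.

```python
# A minimal sketch of the "fill the gap" logic: projected OPEC production
# is treated as the residual between projected world demand and projected
# non-OPEC supply. All figures are hypothetical (million barrels per day).
def call_on_opec(world_demand_mbd: float, non_opec_supply_mbd: float) -> float:
    return world_demand_mbd - non_opec_supply_mbd

current_opec = 30.0  # hypothetical current OPEC output
projected_opec = call_on_opec(world_demand_mbd=118.0, non_opec_supply_mbd=73.0)
pct_increase = 100.0 * (projected_opec - current_opec) / current_opec
print(f"implied OPEC output: {projected_opec:.0f} mbd, "
      f"{pct_increase:.0f}% above current")
```

Because OPEC output is computed as a residual, any error in the demand or non-OPEC supply projections lands entirely on the implied OPEC figure, which is why the debate over OPEC's ability to meet it matters.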
For example, the EIA reported that unanticipated world oil demand growth in 2004 contributed to depletion in oil stocks and resulted in the recent high oil prices. Uncertainty about demand growth also negatively impacts needed investment for future expansion of world oil and natural gas supplies (an estimated $3 trillion in each sector from 2005 to 2030, according to the IEA), particularly given the long lead times and payback periods required for such investments. Oil and natural gas producer nations have stated the need to better understand future demand in order to undertake costly investment—according to the OPEC Secretariat, uncertainty about future oil demand, future non-OPEC production, and needed OPEC investment is the largest challenge facing the organization. Similarly, Russia’s Gazprom has indicated the need for future demand certainty, warning of possible supply curtailments if its European consumers seek to diversify their supplies away from Russia. Oil and natural gas consuming nations have also indicated the need for more certainty in future supply. This is particularly important given needed infrastructure investment in non-OECD countries to use natural gas—EIA projects that 73 percent of future natural gas demand will occur in countries outside the OECD—and needed worldwide investment to expand the use of LNG. The U.S. government has pursued emerging energy market issues through its participation in international energy cooperation forums; however, these forums, by their nature, can be constrained in the degree to which they can have an impact on these issues. The greatest constraint on the forums’ ability to impact energy issues comes from the sensitivity of sovereign nations to discussing their domestic energy policies. Forum efforts are also constrained by limitations in membership, consensus-based decision making, and voluntary participation. 
However, within these constraints, the United States has tried to mitigate energy market imbalances through efforts such as promoting emergency preparedness and outreach to developing countries. While the United States has not directly addressed the impact of the growing participation of national oil companies on the energy market at the forums, it has pursued related areas such as improving the investment climate. Finally, the United States has supported international efforts to improve energy information through various data sharing agreements, standardization, and capacity building—though EIA involvement has for the most part been indirect and ad hoc, and U.S. data submissions to the IEA have lacked timeliness. International energy cooperation forums, by their nature, can be constrained in the degree to which they can have an impact on energy market issues. The greatest constraint comes from the sensitivity that sovereign nations bring to discussing their domestic energy policies. Supplier countries may resist international efforts to increase opportunities for foreign investment in their energy sectors, and consuming countries, like the United States, may resist international efforts to influence their energy demand levels. For this reason, discussion of energy issues at international energy cooperation forums is almost always addressed through an agenda decided by consensus. This generally means that forums focus on noncontroversial issues, like energy efficiency and technology, according to U.S. officials. Forum efforts are also constrained by inherent limitations in restricted membership, consensus-based decision making, and the voluntary nature of participation and follow-up. For the United States, however, the consensus-based agenda does have the advantage of “de-Americanizing” some issues, according to U.S. officials. 
In some cases, an issue or action may be more likely to be addressed on its own merits than if the United States is seen to be the primary force behind it. Peer pressure can also be an important factor when a group of countries is endorsing an issue or approach. The United States has tried to mitigate the imbalances resulting from the recent tightening of the energy market through its participation in international energy cooperation forums. U.S. efforts have primarily focused on support for emergency preparedness, including development of strategic petroleum reserves and contingency plans. The challenges to these efforts lie in factors such as key developing countries not being members of the forum, such as China and India not having IEA membership, or in the voluntary nature of participation and follow-up. The United States has sought to address tight energy markets and associated market imbalances primarily by supporting emergency preparedness in both IEA and the APEC Energy Working Group. IEA is the premier forum at which the United States addresses emergency preparedness. It has an emergency response plan—called “Coordinated Emergency Response Measures”—ready for use, supplemented by periodic emergency scenario planning exercises that allow member countries to practice how they would implement the plan in case of a real emergency. This IEA emergency response plan was used in response to Hurricane Katrina in September 2005, although such a situation had never been anticipated in IEA scenario planning. A senior IEA official told us that IEA’s response to Hurricane Katrina showed the market that IEA would act to mitigate supply shortfalls by releasing oil stocks. He said that IEA does not act to affect price but showed that it would act to affect supply, and this had helped restore confidence in the market. 
The United States has strongly supported the APEC Energy Working Group’s Energy Security Initiative, which is also designed to respond to the volatility resulting from the recent tightening of the market. Short-term measures include improving the transparency of the global oil market through improvement of APEC energy data and participation in JODI, monitoring efforts to strengthen sea-lane security, implementing the Real-Time Emergency Information Sharing System, and encouraging members to have emergency mechanisms and contingency plans in place. DOE’s policy and international affairs office and strategic reserve office both also worked with APEC Energy Working Group partners to identify best practices for strategic oil stocks. DOE then hosted a follow-up workshop in July 2005. Another way in which the United States has tried to address market imbalances has been through outreach to major developing nations in both IEA and the APEC Energy Working Group. For example, IEA conducts a major outreach effort to developing countries and has established a separate office, the Office of Non-Member Countries, for this purpose. It has concluded “memoranda of policy understanding” to strengthen cooperation with China and India and has conducted numerous workshops, seminars, and training exercises. IEA held its first oil security workshop with China in 2001, at which it provided training in emergency response measures and strategic reserve management. China’s 5-Year Plan for 2000-2005 had raised the possibility of building a national strategic petroleum reserve, and China has subsequently been building petroleum reserve tanks and has begun filling them, according to DOE. IEA also invited China to attend its emergency response training and disruption simulation exercise in October 2004 and hosted a follow-up workshop with China on oil security in October 2006. IEA held a similar oil security workshop with India in 2004. 
It has also conducted numerous workshops and training efforts with Brazil, members of the Association of Southeast Asian Nations, and others. In contrast to IEA, U.S. outreach efforts to major developing countries at the APEC Energy Working Group are more direct since many of the major developing nations, such as China and Singapore, are members, providing a continuing opportunity to conduct outreach. The focus in the APEC Energy Working Group is on developing and sharing best practices and technology insights. The United States has also promoted best practices, training, and research across a broad range of energy issues. IEA and the APEC Energy Working Group both sponsor numerous conferences, workshops, and seminars designed to share information and technology and to encourage members to adopt practices and policies that are considered most beneficial. An example of this approach is the APEC Energy Working Group’s focus on best practices in developing an Asian LNG market. The United States hosted an APEC Energy Working Group workshop in San Francisco in March 2004 to identify best practices for LNG trade, which were later endorsed by members’ Energy Ministers. A follow-up workshop was held in Taipei in March 2005 to encourage acceptance of these best practices. That workshop resulted in the launch of an LNG Public Education and Communication Information Sharing Initiative to improve public understanding of the benefits of LNG, as well as to address safety concerns. These forums also conduct economic analyses and research projects. IEA annually publishes its flagship World Energy Outlook, which provides global long-term energy market analysis. It also conducts extensive energy policy analyses to promote conservation and the efficient use of energy, as well as increased use of alternatives to oil (energy diversification). 
The Asia Pacific Energy Research Centre also publishes studies of global, regional, and domestic energy demand and supply trends and related policy issues. In the area of research, IEA’s Energy Technology Collaboration Program currently sponsors more than 40 international collaborative energy research, development, and demonstration projects, known as “Implementing Agreements.” Their purpose is to help coordinate national technology efforts so there are no redundancies of effort across participating countries, which can include nonmember countries. A final element of U.S. efforts to address market imbalances has been support for greater cooperation with producer countries. IEA’s Office of Non-Member Countries has conducted outreach activities with producer countries, as well as developing countries. It studies oil developments in major emerging non-OPEC regions such as Russia, the Caspian, and West Africa. For example, IEA has a memorandum of understanding with Russia and has conducted workshops and training with Russia. It completed an energy survey of Russia in 2002 that incorporated a review of its energy situation, policies, electricity regulatory reviews, and resulting recommendations. In addition, the United States participates in IEF, which is a producer-consumer dialogue that promotes the exchange of information among all parties with an interest in the energy market. The challenges to these efforts to mitigate market imbalances lie in the inherent constraints of each forum. Since IEA was established within the framework of OECD, a prerequisite for IEA membership is OECD membership, which means that the applicant country must be a democracy and have a market-based economy. This is one factor that complicates the issue of extending IEA membership to fast-growing, energy consuming countries like China. 
Another complicating factor is the requirement that IEA members hold at least 90 days of oil reserves, which would be difficult for most developing countries to achieve. For IEA, deepening relations with nonmember countries is a delicate balancing act. A senior IEA official said that IEA wants to improve its relationship with developing countries like China and India—and, in fact, is considering how to offer them observer status—but it also does not want to give away the equivalent of membership without these countries having to meet the basic requirements of membership. Another inherent limitation to what can be achieved in these forums is that participation and follow-up are voluntary. Apart from IEA’s treaty obligations related to emergency preparedness (i.e., holding 90 days of oil reserves), IEA and APEC Energy Working Group activities are voluntary, and decisions are made by consensus. These forums can take steps to strongly encourage actions by members but cannot compel them. For instance, IEA country reviews, conducted every 4 years for each member, examine their energy policies and make recommendations. Two years later, brief standard reviews update the main energy developments and report on progress in implementing the recommendations. But, it is up to each country whether, and to what degree, it will take the recommended steps. The international energy forums do not directly address the impact of the growing participation of national oil companies on the energy market. The forums, however, do focus on the development of open, competitive energy markets within countries. Opening the investment climates in energy producing countries can provide increased access and competition for the international energy companies. However, forum efforts are constrained by inherent limitations in consensus-based decision making, membership, and voluntary participation. 
Both DOE officials and the Executive Director of the IEA stated that contributing to opening up energy investment climates is a high priority at the IEA and is an issue that has significant overlap with the emerging influence of national oil companies. The IEA Offices of Long-Term Cooperation and Non-Member Countries conduct in-depth reviews of the energy policies of both IEA member countries and nonmember countries to focus on their investment climate status and related regulatory reforms. The IEA Shared Goals of participating member countries are in part based on the establishment of free and open markets as a fundamental starting point. For reviews of nonmember countries’ energy policies, the IEA provides observations on the status of a country’s investment climate and the regulatory reforms needed to enhance competitive access to its domestic energy markets. For example, the IEA conducted a 2002 Russia Energy Survey that identified the need for regulatory and legislative reform within Russia and focused on increasing competition and on opening its energy markets. Similarly, the IEA has also performed reviews of some of China’s energy sectors that have focused on market liberalization and the transparency of the country’s oil market and related transactions, among other issues. Other international energy forums also contribute to encouraging the development of open investment climates and competitive access opportunities within member countries. For example, NAEWG focuses on improving the integration of the energy economies of Canada, Mexico, and the United States through data and information sharing across government-owned and privatized energy sectors. 
In addition, one DOE official stated that NAEWG efforts to demonstrate the benefits of open markets and to expose the tight nature of gas supplies in North America, which limited the amount and affected the price of pipeline-supplied gas available to Mexico, supported the development of the LNG market as a significant private investment opportunity for companies in Mexico’s primarily government-owned energy sector. The APEC Energy Working Group also encourages APEC member economies to create conditions to facilitate energy infrastructure investment through its Energy Security Initiative. For example, the APEC Energy Working Group developed a list of best practices for member countries to follow in financing energy infrastructure projects so as to develop a competitive energy investment climate. The goals and processes of the international forums do not lend themselves to directly addressing the impact of the growing participation of national oil companies on the energy market. U.S. agency officials and energy experts stated that the consensus approach and limitations of membership in the international energy forums covered in this review create challenges to addressing this emerging energy market issue. Related efforts for more open investment climates, such as the IEA country reviews or the APEC Energy Working Group’s development of investment best practices, have also been hindered by the voluntary nature of members’ responses to forum recommendations. The contentious nature of the growing participation of national oil companies in the energy market conflicts with the international energy forums’ general approach of achieving consensus on the energy issues they cover. DOE and Department of State officials stated that an international energy forum is not an appropriate venue for addressing potentially contentious issues because a forum’s studies and action items are agreed to by consensus. 
Some energy experts interviewed also questioned what, if any, role the international energy forums can play on this issue. These experts emphasized that the international energy forums are essentially organizations that allow for gathering and exchanging of important energy data and information, but they do not have either the negotiating leverage or the focus needed to address this particular issue. One expert added that the increasing influence of national oil companies in the international oil markets may create a competition issue among the private sector players in the market, but it is not a problem for energy security or an issue that the international energy forums should or can address directly. Limited membership in the international energy forums also inhibits addressing the impact of the growing participation of national oil companies on the energy market directly. Some of the major players influencing the topic, such as China and India, are not active participants in the discussion. For example, national oil companies from China and India have been increasingly active in oil and gas exploration by pursuing a policy of procuring access to energy resources in various countries around the world. However, neither country has been an active member of the international energy forums. Similarly, Russia is one of the most influential energy producing countries in the world, with its domestic energy market dominated by national oil companies, but it has not been an active participant in any of the major international energy forums. Related efforts for more open investment climates are hindered by the voluntary nature of members’ responses to forum recommendations. International energy forums like the IEA make recommendations for member countries and observations for nonmember countries to follow in order to move to market pricing and open up their investment climates. 
However, the forums lack the authority or mandate to require that these recommendations actually be implemented. For example, despite consistent recommendations to open up its energy markets from both multilateral and bilateral forums, Russia has actually reversed the liberalization of its energy sector and investment climate over the last 2 years. According to DOE and IEA officials, its energy sector is now less efficient, and the investment climate has worsened. Similarly, despite the IEA’s efforts to engage Mexico in participating in a review of its energy policies, Mexico has shown no interest in the review or implementation of the recommendations that typically result. Improved energy market transparency is an important theme for each of the major international energy forums. Through its participation in the forums, the United States has supported improving energy information with measures such as data sharing, data standardization, and capacity building (i.e., improving a country’s ability to collect and analyze energy data). However, forum efforts often remain challenged in improving data quality and timeliness, for example, due to authority limitations and continued capacity needs in developing countries. Additionally, U.S. support for forum efforts has not benefited from consistent use of EIA expertise, and the United States has not provided timely data submissions to the IEA. International energy cooperation forums aim to facilitate the sharing and collection of information across multiple governments. JODI is one key data sharing effort and includes monthly oil data for over 90 countries, representing around 95 percent of global demand and supply. IEA officials report that, through JODI, the international community is able to view timelier world oil data and assess the current quality of that data. 
Forum officials also reported that JODI was receiving high-level political support and contributing to increased transparency in some cases—China has begun collecting and releasing some data on changes in levels of oil stocks to IEA, and the IEA “Oil Market Report” is now incorporating timelier OPEC production data. Additionally, through JODI, IEA and APEC are working to standardize data collection by agreeing to use the same oil market questionnaire. Both organizations are also considering developing a similar natural gas data initiative in the future. In addition to data sharing and standardization, the forums have several efforts to improve energy information through capacity building. Such efforts include IEA memorandums of understanding with China and India to improve data sharing, the IEA Energy Statistics Manual, and the G-8’s 2005 and 2006 political endorsements of the Extractive Industries Transparency Initiative, through which data is collected on developing country revenues from extractive industries. While the United States has supported forum efforts to improve international energy information, EIA expertise has not been leveraged in a consistent manner beyond the data exchange activities, as discussed above. For example, the United States supports forum initiatives such as JODI or NAEWG statistical sharing, and EIA is a member of the IEA Energy Statistics Workgroup that develops reporting standards for IEA data submissions. The United States also supports the Extractive Industries Transparency Initiative through U.S. Agency for International Development funding and through participation in an International Advisory Group. However, when asked about consistent leveraging of EIA expertise for forum efforts to improve the quality and reliability of international data, a senior EIA official described the administration’s involvement as indirect and ad hoc. 
For example, while EIA has provided briefings and analysis to DOE’s policy office for its cooperation efforts, EIA has not been directly and consistently involved with international forum initiatives to improve data collection efforts in other countries or in training workshops. The EIA official also described EIA’s participation as increasing and decreasing with staff availability. International cooperation has been a small part of EIA’s overall mission; however, given the importance of reliable international energy data for market stability and the emphasis on comprehensive and timely energy data reporting in the National Energy Policy, we believe that EIA’s expertise can contribute to enhanced international energy data improvement efforts. Another challenge for international cooperative efforts to improve energy market data is the fact that the forums must depend on independent member countries to be responsive. According to IEA officials, the IEA is criticized for providing annual statistical publications that are 18 months old. These officials believe that the IEA could publish annual data with only a 9-month lag if member countries submitted their data within requested time frames. However, according to the IEA officials we met with, several countries do not meet the requested time frames—including the United States. For 2004 annual data, for example, the United States did not provide its complete data submission to the IEA until March 17, 2006, although the data was requested by September 30, 2005. According to a senior EIA official, however, the United States is unable to meet the IEA’s requested time frames due to a national schedule for data collection that does not correspond with the IEA’s data collection schedule and the fact that the United States may have to wait for data from industry entities such as the American Petroleum Institute. 
The United States anticipates submitting 2005 annual data to the IEA by February 2007 (around 4 months after the requested date but earlier than the previous year’s submission). Authority limitations also challenge international cooperative efforts to collect detailed and consistent information on oil reserves and production levels. Energy experts have emphasized the need for international field-by-field production data and a better understanding of future oil resources, as well as the true cost of developing them. Currently, however, reserve estimates are unaudited figures, and there are no common informational disclosure requirements for reserves under international accounting standards. Capacity limitations, particularly in emerging market economies, are another challenge for international cooperative efforts to improve energy market information. For example, while establishment of JODI has generally been considered a success by forum participants, periodic quality reviews of the database reveal a mixed record of improvement. When asked about JODI data reliability, U.S. and IEA officials report that data from developing countries may lack reliability due to capacity limitations and that, despite organizational efforts to support JODI, the forums must ultimately rely on the political will of countries to improve and share their data. Exacerbating capacity limitations, the IEA has also emphasized challenges resulting from rapidly expanding data demands. According to IEA officials, interest has grown in information on natural gas, renewable energies, and energy efficiency. Additional statistical resources may be needed to acquire such information from new markets—many of them smaller and more dispersed, such as with renewable energies like biofuels—and to provide data at a more detailed level, such as within the household on energy use by vehicle or appliance. IEA reports that statistical resources to fill these additional needs are insufficient. 
Both oil importing and oil exporting countries seek stable, predictable energy markets to support continued economic growth. Oil importing countries, such as the United States and China, are concerned about security of oil supply. Over the past few years, the unanticipated growth in demand for oil has outpaced the growth in oil supplies. Oil exporting countries have not been able to increase supply levels accordingly, and spare capacity has declined to the point where political, economic, and weather-related events can have disruptive effects on the market. Increasing future supplies of crude oil and refined oil will require high levels of investment and technical expertise because new discoveries are expected to take place in remote, offshore, and often politically risky locations. In some of these locations, the producing country lacks the capital and expertise to develop the resources and also lacks a predictable investment climate, open to foreign investment—thus raising questions about when potential supplies might come to the market. Energy market experts expect the tight supply situation to continue in the medium and long term. At the same time, oil exporting countries are concerned about the security, or predictability, of oil demand. In the 1990s, demand for oil was affected by a global economic slowdown, including the Asian financial crisis of 1997-1998, and oil exporters experienced generally low oil prices. With exploration costs so high now, some exporting countries are concerned about committing to long-term investment projects without clear indications of demand predictability. International cooperation among importers and exporters can be founded on the recognition that each group has a shared interest in market stability. If the market does not provide this stability and questions about demand and supply growth persist, “cooperation” may move more in the direction of bilateral agreements covering oil and gas exploration and pipeline routes. 
Such agreements may be perceived as excluding other countries. International forums can serve an important overall purpose in providing the opportunity for oil importers and oil exporters to discuss common interests and concerns. The forums have not directly addressed matters that involve sovereign, sensitive decisions—such as Mexico’s foreign investment prohibitions or the competitive practices of some national oil companies—but they do serve to keep channels of communication open and improve understanding of various members’ concerns. By working on matters of interest to forum members—such as technical advice on emergency preparedness and management of strategic petroleum reserves and on ways to achieve cleaner, more efficient energy production—they can build on shared interests and contribute to the longer-term remediation of the demand-supply imbalance that has caused volatile prices. International forums can serve another critical role by improving energy demand and supply statistics to facilitate investment planning. In examining concerns about current energy market issues, a common thread is that more reliable energy market information is increasingly important for market stability, as well as to facilitate investment planning. As recognized by the National Energy Policy report, comprehensive and timely world energy data are needed. While the United States has provided important leadership in international emergency preparedness and the establishment of energy information systems, with the increased importance of reliable energy market information in a tight market, a greater effort may be needed to improve energy statistics. 
To enhance the impact of international cooperation for improving energy statistics needed for market stability and investment, we recommend that the Secretary of Energy emphasize the priority of improving energy information efforts within the international forums, particularly by taking the following two actions: examining how EIA expertise can contribute to international forum data improvement efforts, and examining how U.S. data submissions to the IEA can be made more timely. We provided a draft of this report to DOE and the Departments of Commerce and State. All three agencies provided written comments, which are reproduced in appendixes IV, V, and VI, respectively. The Department of Commerce agreed with our recommendations. The Department of State provided information in its letter about steps it has recently taken, through organizational changes, in order to highlight the importance of global energy challenges. DOE stated that the U.S. government has been actively engaged in international energy forums to advance U.S. energy security objectives and that our report makes many valuable points regarding the nature and the potentials of various international forums in which it participates. DOE also stated that our report adds to the greater understanding of the U.S. commitment to international energy cooperation. DOE disagreed with our characterization that EIA expertise has not been leveraged in a consistent manner to improve international energy data through the multilateral forums. DOE emphasized that EIA has been an active member in each of the four international forums that are the focus of this report. However, DOE also acknowledged that funding issues have constrained EIA efforts to assist other countries to improve their energy data and that this is an area where additional funding would be useful. 
We have modified our report language to emphasize that EIA has been more active in data exchange activities rather than efforts to assist other countries in data collection and modeling, such as through training workshops. DOE expressed concern with our description of how U.S. data submissions to IEA have not been timely, and it provided additional details about several timeliness issues. We have modified our report language to incorporate these clarifications. Additionally, while we recognize the challenge for improving U.S. data submissions due to an EIA survey schedule that does not correspond with IEA’s scheduled due dates, we maintain our recommendation that DOE examine ways to improve the timeliness of U.S. data submissions. One consideration could include the suggestion provided in DOE’s comments to this report that the IEA use EIA miniquestionnaires and monthly submissions to generate preliminary U.S. data. Finally, DOE stated that it was concerned that GAO asserts that more data and more timely data will resolve energy market and security issues. GAO makes no such assertion. Our findings highlight the increased importance of reliable energy market information in a tight market and, therefore, we recommend that DOE give greater priority to improving energy information efforts within the international forums. We specifically recommend that DOE address two relevant areas in which we saw opportunities for improvement, by examining how EIA expertise can be better leveraged and by examining how U.S. data submissions to IEA can be made more timely. Improving energy statistics is one important way in which the international forums can enhance the impact of international cooperation, especially as regards global energy market transparency. DOE and the Departments of Commerce and State also provided technical comments, which we have incorporated where appropriate. 
We are sending copies of this report to interested Congressional Committees and to the Departments of Commerce, Energy, and State. We also will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. To determine how the U.S. government participates in international energy cooperation forums, we reviewed: (1) the key international energy forums in which the United States pursues energy cooperation, (2) the key emerging energy market issues that are important for international energy cooperation, and (3) how the United States is addressing these issues through its participation in these forums. Our review focused mainly on the following key international energy cooperation forums: the International Energy Agency (IEA), the Asia Pacific Economic Cooperation (APEC) Energy Working Group, the North American Energy Working Group (NAEWG), and the International Energy Forum (IEF). We did not evaluate these forums and their impacts on energy policy and the global energy market, nor did we evaluate U.S. energy policy goals, which are based on private sector approaches. Rather, we reviewed the forums’ mission, structure, and activities. In addition, our review focused on the oil and natural gas sectors of the energy market. These sectors provide the bulk of current energy traded in the market. For this reason, the nuclear, coal, renewable, and alternative energy sectors were outside the scope of our review. 
To determine how the United States pursues energy cooperation in key international energy forums, we reviewed documents and interviewed officials responsible for international energy cooperation at the Departments of Energy (DOE), State, and Commerce. We conducted fieldwork at the IEA and the U.S. Mission to the Organization of Economic Cooperation and Development (OECD) in Paris, France, where we reviewed documents and interviewed officials. We also exchanged correspondence with the Secretariat of the APEC Energy Working Group and conducted telephone interviews with U.S. members of the IEA and APEC Energy Working Group business advisory groups. In addition, we interviewed several private sector energy experts and industry representatives. While we had planned to conduct fieldwork in Mexico City, Mexico, and Ottawa, Canada, with Mexican and Canadian government officials responsible for NAEWG, we did not conduct this fieldwork because the Department of State declined to facilitate our access to these officials. To identify the key emerging issues in the international oil and natural gas markets in the past 5 years that are important for international energy cooperation, we reviewed documents and data and interviewed officials at DOE and the Departments of State and Commerce. We also reviewed relevant reports and studies, including past GAO reports, and discussed them with energy experts. We developed a list of three key emerging issues and verified them with agency, forum, and energy industry officials. We did not seek to independently verify the nature and extent of these energy market changes but rather relied on analysis by energy experts, officials, and key market studies, as well as prior GAO work. Our report discusses various reliability concerns with international oil and gas data. 
While data improvement is required, we believe key international data, such as that from DOE’s Energy Information Administration, is sufficiently reliable to indicate broad trends in world demand and supply. To determine how the United States is addressing these emerging energy market issues through its participation in these forums, we reviewed documents and interviewed officials at DOE and the Departments of State and Commerce, as well as at IEA, the APEC Energy Working Group, and their business advisory groups, and private sector energy experts. We conducted our work from January 2006 to November 2006 in accordance with generally accepted government auditing standards. In addition to the international energy cooperation forums discussed previously, we also reviewed the July 2006 Group of Eight (G-8) Summit held in St. Petersburg, Russia, which functioned as an ad hoc forum focused on energy security, and U.S. participation in several selected bilateral energy cooperation forums, which comprise an important part of U.S. energy security and cooperation efforts. We focused on bilateral energy cooperation with five key nations: Canada, China, India, Mexico, and Russia. The G-8 is an unofficial forum of the heads of the leading industrialized democracies—Britain, Canada, France, Germany, Italy, Japan, Russia, and the United States—where the European Commission is also represented and fully participates. One of the priority themes of the July 2006 G-8 Summit, hosted by Russia, was to formulate political commitments of the member states toward enhancing global energy security. The G-8 is not an international organization based on an international agreement and does not have formal admission criteria, a charter, or a permanent secretariat. G-8 summit meetings are held regularly in the partner states, and the host country acts as the Chair of G-8 for a calendar year. Russia has chaired the G-8 during 2006. 
The Chair organizes the summit and ministerial meetings and the expert and working meetings, manages the agenda, and coordinates the routine work of the group. At the summit meetings, discussions of the heads of state and government are held behind closed doors with decisions adopted by consensus. In preparation for the July 2006 G-8 Summit, the G-8 Energy Ministers met in March 2006 to discuss issues such as global energy security, energy efficiency, and energy saving. This meeting also included Energy Ministers from Brazil, India, China, Mexico, and South Africa, as well as representatives of the World Bank, the Organization of Petroleum Exporting Countries, the International Atomic Energy Agency, IEA, and the International Energy Forum. The July 2006 G-8 Summit resulted in the St. Petersburg Plan of Action, which is a high-level commitment on behalf of the G-8 members to enhance global energy security through efforts across several related issue areas, such as increasing the transparency and stability of global energy markets, improving the investment climate in the energy sector, and ensuring physical security of critical energy infrastructure. The United States participates in many bilateral energy cooperation forums; we reviewed five selected bilateral forums—those with Canada, China, India, Mexico, and Russia. According to a DOE official, bilateral energy cooperation forums tend to address focused issues that may be of specific interest to the two parties. DOE and the Departments of Commerce and State play a role in the bilateral energy cooperation forums, with DOE personnel often co-chairing many of the working groups involved in the efforts. In some bilateral energy cooperation forums private industry is included in the discussion of certain issues, whereas other bilateral energy cooperation forums mainly focus on high-level dialogue between government representatives on energy policies and initiatives. 
The following are examples of some of the main bilateral energy cooperation forums: The DOE official responsible for managing U.S. participation in the U.S.-China Energy Policy Dialogue stated it was established in 2004 for high-level dialogue between the two countries on energy issues such as energy policy, energy efficiency measures and related technologies, renewable energy, and energy sector reforms. Other areas of focus have included shared concern over supply security and energy transport issues. The U.S.-China Oil and Gas Industry Forum, established in 1998, is a public-private bilateral relationship involving government and industry representatives from both countries. The forum is driven by consensus-based dialogue on commercial policy and on common goals such as development of secure, reliable, and economic sources of oil and natural gas while facilitating investment in the energy industry. The DOE official responsible for managing U.S. participation in the U.S.-India Energy Dialogue stated it was established in 2005 with the primary goal of promoting energy security, increasing trade and investment in the energy sector, and deploying clean energy technologies. This forum consists of a steering committee and five working groups focused on oil and gas, power and energy efficiency, coal, new technology and renewable energy, and a civil nuclear initiative. Negotiations on a memorandum of understanding on energy information exchange began in 1996, and it was signed in February 2006. According to DOE, the U.S.-Canada Energy Consultative Mechanism was established in 1979 as a means for discussing key energy issues of interest or concern to the United States and Canada. The two federal governments meet annually to share policy positions, identify areas of potential dispute, and clarify understanding on energy issues without requiring commitments regarding future actions. 
Subject areas usually covered include world oil market developments; domestic policy developments; and bilateral oil, natural gas, electricity, and nuclear issues. According to DOE, the U.S.-Mexico Binational Commission Energy Working Group was established in 1996 as one of 16 working groups under the commission and includes issues of bilateral interest such as energy policy and legislative developments in each country, cross-border natural gas and electricity issues, science and technology cooperation, and world oil market developments. The Energy Working Group meets as requested by either country, but, in recent years, bilateral energy issues have been taken up under the auspices of the North American Energy Working Group. The DOE official responsible for managing U.S. participation in the U.S.-Russian Energy Dialogue stated it was established in 2002 and brought under its umbrella the U.S.-Russian Energy Working Group that had been established in 2001. The goal of this forum is to promote energy efficiency, alternative energy, data exchange, energy technology initiatives, and energy trade between the two countries while reducing barriers to investment in the energy sector. The bilateral energy forum originally met two times each year but, in 2005, reduced this to one meeting each year. While the tightening of the world energy market in recent years has mostly been the story of the world petroleum market, there have also been important developments in the natural gas market. Many countries have increasingly relied on natural gas. For instance, while the European Union’s dominant fuel in 2003 was oil, accounting for 40 percent of energy demand, natural gas has been the fastest growing fuel over the past decade and accounted for 24 percent of energy demand in 2003, according to the Energy Information Administration (EIA). 
Historically, natural gas has not been a major fuel in China, but its share in the country’s energy market is rapidly increasing, almost doubling from 1997 to 2004, according to EIA. While natural gas accounted for only about 3 percent of total energy consumption in China in 2004, this share is expected to increase. The natural gas market has long been dominated by pipelines that deliver the natural gas from producers to consumers. For instance, 85 percent of U.S. natural gas imports were provided through natural gas pipelines from Canada in 2005. Much of Europe is served by pipelines from Russia, which provides around two-thirds of its imports. However, a gas supply pricing conflict between Russia and Ukraine in late December 2005 resulted in Russia’s Gazprom shutting off gas supplies to Ukraine on January 1, 2006, leading to an energy crisis for all of Europe. Although Russia had threatened a cutoff to demand higher natural gas prices in recent years, this was the first time that a supply disruption had affected flows to Europe. The United States was the largest consumer of natural gas in 2004, with about 23 percent of world demand. Russia had the second largest demand. Germany, in third place, had about a quarter of Russia’s level of demand. Figure 6 shows the top world natural gas consumers in 2004. However, as figure 7 shows, Germany’s net imports accounted for 80 percent of its natural gas demand in 2004, while this share was only 15 percent for the United States. Of the seven top natural gas importers, six depended on imports for more than 75 percent of their demand—including Ukraine, which met about 78 percent of its natural gas demand through imports. In terms of natural gas production, Russia was the largest producer, with about 23 percent of total world production, as shown in figure 8. 
The United States accounted for 19 percent of total production. Canada, in a distant third place, produced about 7 percent of total production. Until recently, as long as most natural gas was delivered by pipelines that required geographic proximity, it was not possible to develop a global market in which gas could be shipped to customers not connected by a pipeline. This has changed with the growing development of a liquefied natural gas (LNG) market, which is possible because LNG can be shipped via LNG tankers to anywhere in the world with an LNG regasification terminal. LNG technology is not new—in the United States, for example, its use had declined by the 1980s, in part because it could not compete with lower-priced domestic natural gas provided through pipelines. However, interest in LNG imports has been renewed due to higher U.S. natural gas prices in recent years, along with increased competition and advances in LNG technology that have lowered its costs, according to EIA. LNG is expected to be particularly valuable for so-called “stranded” natural gas reserves, which are located in areas too remote from major demand centers to be developed affordably using pipelines. In 2005, Japan was by far the largest importer of LNG, with about 42 percent of total world LNG imports. Its major suppliers included Indonesia, Malaysia, Australia, Qatar, Brunei, and the United Arab Emirates. South Korea, in second place, accounted for about 16 percent of total world LNG imports, from some of the same suppliers, as well as Oman, while Spain, ranked third, imported about 11 percent of the total, mainly from Algeria, Nigeria, Qatar, and Egypt. The United States ranked fourth, with about 9 percent of the total, mostly imported from Trinidad and Tobago. While China is not yet an important consumer in the LNG market, it is taking steps to significantly increase its LNG profile. 
With its natural gas use increasing rapidly, and uncertainties surrounding the potential of piped Russian natural gas, China is increasingly considering LNG. Its first LNG import terminal received its first shipment in May 2006, and over a dozen new terminal projects are either under way or being considered, according to EIA. The following are GAO’s comments on DOE’s letter dated December 12, 2006. DOE’s cover letter and comments 2, 4 to 6, 40, 41, 47, and 64 generally addressed our key findings and recommendations. We considered the technical comments provided in comments 1, 3, 7 to 39, 42 to 46, and 48 to 63 and incorporated them where appropriate. 1. GAO does not assert that more data and more timely data will resolve energy market and security issues. Rather, our findings highlight the increased importance of reliable energy market information for mitigating market instability and facilitating investment and, therefore, we recommend that DOE give greater priority to improving energy information efforts within the international forums. We agree that achieving improved international energy statistics is not something the United States can do alone. However, we specifically recommend that DOE address two relevant areas in which we saw opportunities for U.S. improvement, by examining how EIA expertise can be better leveraged and by examining how U.S. data submissions to IEA can be made more timely. Improving energy statistics is one important way in which the international forums can enhance the impact of international cooperation. 2. We have clarified and modified language in the report to reflect EIA’s support for international data exchange, particularly through efforts such as the Joint Oil Data Initiative (JODI) and the APEC Expert Group on Energy Data and Analysis. 
However, EIA expertise has not been consistently leveraged for efforts to improve the quality of international data through, for example, assisting other countries in data collection and modeling or training workshops. Consistent with DOE’s comment emphasizing the role of funding constraints in EIA’s ability to assist with such efforts, we acknowledged that EIA’s participation has been dependent on staff availability. Further, while we acknowledge in our report that international cooperation is a small part of EIA’s overall mission, we maintain that DOE should examine how EIA expertise can contribute to international forum data efforts. 3. GAO’s recommendation states that DOE should examine how U.S. data submissions to the IEA can be made more timely. In our report, we acknowledge that the current EIA survey schedule does not correspond with IEA’s current scheduled due dates. Nonetheless, we maintain that DOE should examine whether there are options for addressing the timeliness of U.S. data submissions to the IEA. One consideration could include the suggestion provided in DOE’s comments to this report that the IEA use EIA miniquestionnaires and monthly submissions to generate preliminary U.S. data. 4. In our report, we recognize that other IEA member countries also do not submit their data within the requested IEA time frames. We have modified language regarding U.S. data submissions to reflect additional information DOE has provided. In addition to the individual named above, Virginia Hughes, Assistant Director; Leyla Kazaz; Kendall Schaefer; Hugh Paquette; and Michelle Munn made key contributions to this report. Other contributors include Godwin Agbara, Karen Deans, Mark Dowling, Amanda Miller, and Anne Stevens.
Rising oil prices, resulting from growth in energy consumption by rapidly developing Asian nations and by most industrialized nations, have increased concern about competition over oil and natural gas resources. In particular, Congress expressed interest in how the United States participates in energy cooperation through international forums. GAO was asked to review (1) the key international energy forums in which the United States pursues energy cooperation, (2) the key emerging energy market issues that are important for international energy cooperation, and (3) how the United States is addressing these issues through its participation in these forums. GAO's work is based on contacts with agency officials and energy experts and a review of documents. The United States pursues energy cooperation through several international energy forums designed to meet specific cooperative needs. They include a formal institution with binding petroleum reserve obligations, regional associations, and informal gatherings designed to facilitate information exchange. Major forums include the International Energy Agency (IEA), the Asia Pacific Economic Cooperation Energy Working Group, the North American Energy Working Group, and the International Energy Forum. GAO identified three energy market issues that are important for U.S. efforts in international energy cooperation. First, a tighter energy market with higher, more volatile, prices has developed. This is due to (1) an unanticipated rise in energy demand and (2) constrained supply due to less spare crude oil production capacity and increased political frictions in certain supplier countries. Second, market participation of national oil and gas companies, which are majority owned by governments, has led to limitations on access to resources. Third, more reliable energy market information is needed to facilitate market stability and plan investment. The U.S. 
government has addressed these issues through its participation in international energy cooperation forums; however, the nature of the forums can limit their impact. Forums have restricted membership, consensus-based agendas and decisions, and voluntary participation. They generally focus on noncontroversial issues such as energy efficiency and technology. Within these constraints, the United States has tried to mitigate effects of tight markets by supporting emergency preparedness. It has not directly addressed the impact of national oil companies, but it has pursued related areas. It has sought to improve energy information, but Energy Information Administration (EIA) statistical expertise has not been consistently leveraged for purposes beyond data exchange, and U.S. data submissions to the IEA have not been timely.
The General Schedule (GS) is the federal government’s primary pay and classification system for white-collar employees. Under this system, federal employees are paid according to governmentwide rules and procedures, and federal positions are classified according to the difficulty and responsibility of the work performed. The GS system was created in 1949, when most federal positions involved clerical work or revolved around the execution of established, stable processes—for example, posting census figures in ledgers or retrieving taxpayer records from vast file rooms. Over time, however, federal positions have become increasingly specialized and more highly skilled. In light of this change, a number of federal agencies have attempted to provide managers with greater flexibility in hiring and awarding pay raises to employees by implementing human capital initiatives, such as performance management systems, that reward employees’ performance and contribution to the agency’s mission. The need for human capital reform regarding these systems has been the subject of a number of previous GAO reviews. These reviews have noted, for example, that federal agencies must have modern, effective, credible, and validated performance management systems that are capable of supporting pay and other personnel decisions, and that pay for performance works only with adequate safeguards and appropriate accountability mechanisms in place to ensure that the safeguards are implemented in a fair, effective, and credible manner. In November 2003, the Congress included a provision in the National Defense Authorization Act for Fiscal Year 2004 providing DOD with the authority to establish a pay-for-performance management system as part of NSPS. In April 2004, the Secretary of Defense appointed an NSPS Senior Executive to, among other things, design, develop, and implement NSPS. Under the Senior Executive’s authority, the PEO was established as the central policy and program office for NSPS. 
The PEO’s responsibilities include designing the human resource/pay-for-performance systems, developing communication and training strategies, modifying personnel information technology, and preparing joint enabling regulations and internal DOD implementing regulations, called implementing issuances. As the central DOD-wide program office, the PEO also directs and oversees the four components’ NSPS program managers, who report to their parent components and the PEO. These program managers serve as their components’ NSPS action officers and also participate in the development, planning, implementation, and deployment of NSPS. In April 2006, DOD began phasing (or spiraling) civilian employees into NSPS; however, in January 2008 the National Defense Authorization Act for Fiscal Year 2008 prohibited the Secretary of Defense from converting more than 100,000 employees to NSPS in any calendar year and excluded Federal Wage System employees from coverage under NSPS. Further, in March 2009, DOD announced that it would delay the conversion of additional organizations to NSPS pending the outcome of a joint review of the system by DOD and OPM. According to DOD, this decision affected roughly 2,000 employees in organizations scheduled to convert to NSPS during the spring of 2009. As a result of these and other legislative changes that resulted in revisions to the NSPS regulations, the PEO has not developed new timelines for phasing any additional civilian employees into NSPS. As table 1 shows, according to DOD, almost 220,000 civilian employees have been phased into NSPS as of September 2009. The NSPS performance management process is ongoing and consists of several phases that are repeated during each annual performance cycle. The process begins with a planning phase that involves supervisors (or rating officials) and employees working together to establish performance plans. 
This includes developing job objectives—the critical work employees perform that is aligned with their organizational goals and focused on results—and identifying contributing factors—the attributes and behaviors that identify how the critical work established in the job objectives is going to be accomplished (e.g., cooperation and teamwork). After the planning phase comes the monitoring and developing phase, during which ongoing communication between supervisors and employees occurs to ensure that work is accomplished; attention is given to areas that need to be addressed; and managers, supervisors, and employees have a continued and shared understanding of expectations and results. In the rating phase, the supervisor prepares a written assessment that captures the employee’s accomplishments during the appraisal period. In the final—or reward—phase, employees should be appropriately rewarded or compensated for their performance with performance payouts. During this phase, employee assessments are reviewed by multiple parties to determine employees’ ratings and, ultimately, performance payouts. The performance management process under NSPS is organized by pay pools. A pay pool is a group of employees who share in the distribution of a common pay-for-performance fund. The key participants that make up pay pools are the employee, supervisor, higher-level reviewer, pay pool panel, pay pool manager, performance review authority, and, in some instances, the sub-pay pool, as shown in figure 1. Within a pay pool, each participant has defined responsibilities under the performance management process: Employees are encouraged to be involved throughout the performance management cycle. 
This includes initially working with their supervisors to develop job objectives and identify associated contributing factors; identifying and recording accomplishments and results throughout the appraisal period; and participating in interim reviews and end-of-year assessments, for example, by preparing a self-assessment of their performance during the annual appraisal period. Supervisors (or rating officials) are responsible for effectively managing the performance of their employees. This includes clearly communicating performance expectations; aligning performance expectations and employee development with organization mission and goals; working with employees to develop written job objectives that reflect expected accomplishments and contributions for the appraisal period and identifying applicable contributing factors; providing employees meaningful, constructive, and candid feedback relative to performance expectations, including at least one documented interim review; making meaningful distinctions among employees based on performance and contribution; and providing recommended ratings of record, share assignments, and payout distributions to the pay pool. The higher-level reviewer, typically the rating official’s supervisor, is responsible for reviewing and approving job objectives and recommended employee assessments. The higher-level reviewer is the first step in ensuring consistency of ratings because this individual looks across multiple ratings. The pay pool panel (or, in some cases, the sub-pay pool panel) is a board of management officials who are usually in positions of line authority or in senior staff positions with resource oversight for the organizations, groups, or categories of employees making up the pay pool membership. The primary function of the pay pool panel is the reconciliation of ratings of record, share distribution, and payout allocation decisions. 
For example, the pay pool panel may adjust a supervisor’s recommended rating of record in order to help ensure the equity and consistency of ratings across the pay pool. Each pay pool has a manager who is responsible for providing oversight of the pay pool panel. The pay pool manager is the final approving official of the rating of record. Performance payout determinations may be subject to higher management review by the performance review authority or equivalent review process. Finally, the performance review authority provides oversight of several pay pools and addresses the consistency of performance management policies within a component, major command, field activity, or other organization as determined by the component. DOD continues to take some steps to implement each of the safeguards we reported on in September 2008. However, opportunities exist to continually involve employees in the system’s implementation, and implementation of some safeguards—for example, providing effective training—could be improved. Also, we previously reported that continued monitoring of the safeguards was needed to help ensure that DOD’s efforts were effective as implementation of NSPS proceeded. However, we found that while DOD monitors some aspects of the implementation of NSPS, it does not monitor how the safeguards specifically are implemented across the department. Because DOD does not monitor the safeguards’ implementation, decision makers in DOD lack information that could be used to determine whether the department’s actions are effective and whether the system is being implemented in a fair, equitable, and credible manner. DOD has taken a number of steps to involve employees and stakeholders in the design and implementation of NSPS. In our September 2008 report, we noted that DOD solicited comments from employees and unions representing DOD employees during the design of NSPS. 
We also noted that DOD involved employees during the implementation of the system in workshops and other efforts aimed at refining the system’s performance factors. DOD continues to take such steps. For example: DOD solicited comments from employees and unions on the system’s final rule, which was published in the Federal Register in September 2008. According to the Federal Register, the final regulations, which became effective in November 2008, include revisions based on 526 comments submitted during the public comment period and on comments from 9 of the department’s 10 unions with national consultation rights. DOD involved employees in efforts to improve the usability of the automated tools that support the NSPS performance and pay pool management processes. Specifically, the PEO and the department’s Civilian Personnel Management Service held a series of meetings with employees, rating officials, pay pool managers, and human resource practitioners in early 2008 to address concerns regarding the usability of the automated tools. These meetings allowed the department to gather requirements for the next version of the NSPS automated tools based on lessons learned and user input. Subsequently, DOD established six separate working groups to develop and evaluate the requirements for each of its automated tools. In addition, DOD initiated separate efforts to enhance the usability of the Performance Appraisal Application—the DOD-wide tool for employee self-assessments and appraisals. Specifically, the contractor that developed the Performance Appraisal Application enlisted the assistance of software usability experts to evaluate the tool and recommend changes that would enhance users’ experience with it. As a part of this effort, the contractor observed and worked with employees and rating officials to identify changes that could be made to the Performance Appraisal Application to make it more user-friendly. 
DOD also tested the functionality and usability of the enhancements that were made to the Performance Appraisal Application with over 300 users. DOD has taken steps to involve the components in the implementation of NSPS through biweekly conference calls held at key phases of the performance management process. According to the PEO, during these calls, PEO and Civilian Personnel Management Service representatives discuss topics submitted by the components, respond to questions regarding such things as NSPS policy and the system’s automated tools, and share lessons learned with participants. Further, according to the PEO, these conference calls allow participants to address systemic problems through feedback shared between different levels of the organization. At the locations we visited outside the continental United States, we found that some steps were generally being taken to involve employees in the implementation of NSPS as well. For example, officials at each of the eight locations reported that organizations identified lessons learned that were generally based upon employee feedback gathered following the mock pay pool. According to these officials, lessons learned were used to make changes to, among other things, training materials, business rules, and the use of job objectives and contributing factors. For example, two locations limited the number of contributing factors employees should use in their performance plans based upon lessons learned, while one location responded to employee feedback regarding a lack of time to devote to NSPS by mandating that employees be allowed to take time to complete NSPS training. 
While DOD has taken a number of steps to involve employees in the design and implementation of NSPS thus far, as stated above, we note that one way the department could continue to involve employees as implementation of the system proceeds is to develop and implement an action plan to address employees’ perceptions of NSPS, as we recommended in our first assessment of NSPS. However, DOD has not yet done so, which we discuss further in the second objective of this report. DOD continues to take steps to link employee job objectives to the agency’s strategic goals, mission, and desired outcomes. As we noted in our 2008 report, DOD’s automated tool for employee self-assessments and appraisals—the Performance Appraisal Application—provides a designated area for the mission of the employee’s command to be inserted as a guide while employees compose their job objectives and self-assessments. In May 2009, DOD published its evaluation of NSPS for 2008, entitled National Security Personnel System (NSPS) – 2008 Evaluation Report, which included an evaluation of employee performance plans to determine the extent to which employee job objectives are aligned with higher-level organizational goals and thus ensure that employee performance contributes to the achievement of organizational success. The evaluation included 240 employee performance plans encompassing a range of job series, pay schedules, pay bands, and organizations within each of the four components. The evaluation found that 95 percent of the objectives were strongly aligned to higher-level goals and demonstrated a clear, direct, and strong linkage to the organizational mission or relevant strategic goals. During our site visits, we found that the organizations were taking steps to ensure that employees understood how their job objectives link to the organization’s strategic goals. 
This was generally accomplished through documentation requirements in the Performance Appraisal Application and reinforced during employees’ discussions with their supervisors. Some organizations have taken additional steps to ensure that employee job objectives link to the organization’s strategic goals. For example, at one location we visited, the commanding general issued a memorandum to managers emphasizing the importance of ensuring that employee objectives are linked to the command’s mission and objectives and that employees understand their roles in achieving those objectives. However, officials at five locations also reported that employees do not always understand this linkage. DOD continues to take steps to provide employees with required and other training on the implementation and operation of the NSPS performance management system, but has not yet evaluated the effectiveness of the training that it provides. In our September 2008 report, we noted that DOD encouraged employees who were transitioning to NSPS to receive training that covered skills and behaviors necessary to implement and sustain NSPS; foster support and confidence in the system; and facilitate the transition to a performance-based, results-oriented culture. DOD and each of the components continue to take such steps to provide employees with required and other training on the system, including introductory training for employees converting to NSPS and sustainment training for employees already under the system. While the components are responsible for providing employees with training on the NSPS performance management system, the PEO supports their efforts by offering a variety of departmentwide training courses and other materials. For example, Web-based training modules that the PEO has developed, such as its NSPS 101 and iSuccess courses (see fig. 
2 for sample screen shots), provide employees with basic knowledge about NSPS and performance management principles in general, and are used by employees to develop their job objectives. As another example, the PEO developed training guides to educate employees on changes to the NSPS classroom materials resulting from the revised NSPS regulations and implementing issuances. The PEO also developed a Web site for accessing NSPS learning materials, resources, and other tools. In addition, we found that the Air Force has begun incorporating training on NSPS as a normal part of its operations and is working to embed NSPS topics in the regular training provided to Air Force civilians and servicemembers. Although DOD and the components continue to take steps to provide employees with training on NSPS, the department has not yet evaluated the effectiveness of the training provided. We previously reported that it is increasingly important for agencies to measure the real impact of training and thus evaluate their training—for example, by establishing clear goals about what the training is expected to achieve along with agreed-upon measures or performance indicators to ascertain progress toward achieving those goals. DOD has outlined the fundamental requirements, or goals, of the training that it provides to employees on NSPS. Specifically, DOD has noted that for NSPS, a training program must be implemented that enables employees to understand better how to succeed, and enables supervisors to communicate performance expectations to their employees, provide feedback to them on their performance against these expectations, and tell them what steps they can take to improve their performance and competencies and manage their careers. However, while DOD has undertaken efforts to understand employees’ perceptions of its training, the department has not yet evaluated the effectiveness of the training that it provides. 
For example, in early 2009 the PEO conducted what PEO officials describe as an ad hoc study of training needs. The PEO’s study consisted of conducting sensing sessions with 120 human resource practitioners at 11 locations across the department. According to the PEO, the purpose of these sessions included obtaining feedback on existing NSPS learning products and support and exploring options for next generation products. While the PEO’s study identified some needed improvements, it does not constitute an evaluation of the department’s training—for example, because it did not assess the department’s progress toward achieving the goals specified for the training. As another example, DOD’s 2008 evaluation report also highlighted deficiencies with the department’s training on NSPS. Specifically, the report notes that without effective communication and training, NSPS cannot achieve its goal of being a credible and trusted system. Further, three of the report’s six key recommendations focus on the need to enhance the effectiveness of the training provided to employees on NSPS: (1) provide more training on the performance management system; (2) provide high-level training for employees and supervisors that explains the pay pool process; and (3) hold mock pay pool panels, which serve as refreshers for continuing panel members and as training for new members. However, like the PEO’s study, DOD’s 2008 evaluation report does not constitute an evaluation of the department’s training—for example, because it did not include an in-depth assessment of DOD’s training and also did not assess the department’s progress toward achieving the goals for the training, per agreed-upon measures or performance indicators. Of the components, we found that only the Army has taken some steps to evaluate the training it provides to employees on the system. 
Specifically, the Army assesses the adequacy of NSPS training during on-site reviews that it conducts as part of its implementation of the system. The Army conducted three such assessments during 2008 and an additional four during 2009. With respect to our site visits, although we found that each of the eight locations provided training on NSPS to employees, officials at each location also expressed concerns over the effectiveness of the training provided. For example, officials at seven locations told us that additional training was needed on writing job objectives or employee self-assessments under the system, while other officials noted that additional training was needed on the pay pool process, use of the system’s automated tools, and how the reconsideration process works. Similarly, officials at two locations raised concerns that the training they received did not provide them with the skills they needed to use the Performance Appraisal Application. For example, officials told us that they received training too early and had either forgotten it or the training had become outdated by the time they actually used the Performance Appraisal Application. Further, some program officials raised concerns about their organizations’ ability to provide employees with adequate training on the system when the employees are located outside the continental United States. For example, program officials at one location in Germany reported challenges providing employees located in the field with training on NSPS because they lack the resources to send NSPS trainers to those locations. However, until DOD evaluates its training, it will not be able to determine whether the training meets the needs of its employees or whether the department is making progress toward achieving the goals it specified for the training.
DOD continues to take steps to ensure that employees receive timely performance feedback that is meaningful, constructive, and in accordance with the department’s requirements. In our first assessment of NSPS we noted that DOD’s implementing issuances required at least one documented interim performance review and an annual performance appraisal and that the Performance Appraisal Application allowed supervisors and employees to document these feedback sessions. Since then, DOD has taken additional steps to enhance the Performance Appraisal Application by modifying the tool to allow supervisors and employees to identify where they are in the performance appraisal process and help them accomplish required actions in a timely manner. During our site visits, officials at all eight locations told us that NSPS helps ensure the occurrence of performance feedback between employees and supervisors. For example, officials noted that use of the Performance Appraisal Application encourages employee feedback by allowing employees to document and track feedback sessions, and that NSPS encourages direct discussions about performance-related issues, such as developing effective job objectives and establishing performance expectations. DOD continues to take steps to better link individual pay to performance as well. As we noted in our 2008 report, the NSPS performance management system uses a multirating system of five rating categories—of which the lowest rating is “1” (unacceptable performance) and the highest rating is a “5” (role model performance)—that allows distinctions to be made in employee performance and therefore compensation. Since then, DOD added details to the NSPS regulations to facilitate uniform, equitable practices across the department that accord with merit system principles. 
These include specifying share assignment ranges, rounding rules for converting raw performance scores to ratings, and formulas for determining share value and calculating performance payouts under NSPS. DOD also clarified that a common share value should apply throughout an entire pay pool. According to the PEO, these changes, in addition to the higher-level review of performance expectations, recommendations for ratings of record, share assignment, and payout distribution, have helped to promote a more equitable method for appraising and compensating employees. However, during our site visits, officials at seven of the eight locations told us that they saw the potential for factors other than performance to influence employees’ ratings, such as the quality of employees’ and supervisors’ writing skills. For example, rating officials at one location commented that NSPS does not reward employees based on their performance but rather on how well employees and supervisors can communicate in writing what the employee achieved during the performance management cycle. Similarly, at another location, a pay pool panel member told us that the paperwork submitted to the panel by both the employee and the supervisor must be of very high quality because it can be difficult to defend a high rating recommended for an employee if the assessments are poorly written. DOD’s 2008 evaluation report also highlighted concerns from employees and managers over the department’s success in linking pay to performance under NSPS. For example, DOD’s report noted that while some employees believed that they saw some level of pay for performance under NSPS, others were uncertain. Further, DOD’s report also noted that some managers observed that the quality of written assessments contributed significantly to the way in which ratings were substantiated. We found that DOD also continues to take steps to ensure that adequate agency resources are allocated to NSPS.
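The share-and-payout mechanics described above (share assignments tied to ratings, a common share value applied throughout a pay pool, and payouts tied to base salary) can be illustrated with a minimal sketch. The function names, record fields, and the assumption that the share value is derived by dividing the pool's payout fund by share-weighted salaries are ours for illustration; they are not the official NSPS formulas.

```python
# Hedged sketch of share-based payout arithmetic in a system like NSPS.
# Assumption: share value is the fraction of base salary paid per share,
# chosen so that the pay pool's performance payout fund is fully distributed.

def common_share_value(pool_fund, employees):
    """One share value for the whole pay pool: fund dollars divided by
    the sum of each employee's shares times base salary."""
    weighted_shares = sum(e["shares"] * e["salary"] for e in employees)
    return pool_fund / weighted_shares

def performance_payout(employee, share_value):
    """Payout in dollars: shares x share value x base salary."""
    return employee["shares"] * share_value * employee["salary"]
```

Under this construction, the individual payouts always sum back to the pool's fund, which is one way a common share value promotes equity across a pay pool.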
According to 5 U.S.C. § 9902(e)(4), to the maximum extent practicable, for fiscal years 2004 through 2012, the overall amount of money allocated for compensation of civilian employees in organizations under NSPS shall not be less than the amount that would have been allocated under the GS system. To meet the requirements of 5 U.S.C. § 9902(e)(4), DOD’s implementing issuances require that the components certify in writing to the Deputy Secretary of Defense through the Under Secretary of Defense for Personnel and Readiness that the amount expended for NSPS performance-based pay increases is no less than what would have been expended had these positions not been converted into NSPS. Following the 2008 NSPS performance management cycle, each of the components certified that it met this requirement. DOD also continues to capture NSPS implementation costs, and for fiscal year 2008, the PEO reported that NSPS implementation costs were about $61.8 million. According to the PEO, continuing implementation of NSPS will result in some additional program implementation costs, such as providing training on NSPS, conducting outreach to employees and others, and improving the system’s automated tools. However, the PEO estimates that once the conversion of all non-bargaining unit employees is complete, the system’s implementation costs will decrease significantly unless there is a decision to convert bargaining unit employees. Thereafter, according to the PEO, the cost of administering NSPS will be no different than that of any other personnel system. In our 2008 report, we recommended that DOD take steps to better ensure the consistency and equity of the performance management process by requiring a third party to perform predecisional demographic and other analysis of the pay pool results. DOD did not concur with our recommendation, stating that its postdecisional analysis of the rating results was useful for identifying barriers and any needed corrective action.
DOD also stated that if the information obtained from a postdecisional demographic analysis demonstrated that the results were not fair or equitable, for any reason, the process used to achieve those results could be examined with the intent to identify and eliminate barriers to a fair and equitable outcome. However, we reviewed the postdecisional analyses that the PEO and each of the components completed following the 2007 NSPS performance management cycle, as well as those that each of the eight organizations we visited completed following the most recent performance management cycle in 2008. We found that these analyses were inconsistent, did not always include an analysis of the rating results by demographics, and were generally conducted at too high a level to provide decision makers with sufficient information to identify potential barriers or corrective actions. For example, we found that following the 2007 performance management cycle, the PEO analyzed the rating results for more than 100,000 employees by select demographic groups, such as race, gender, ethnicity, age, veteran status, and targeted disability, but limited its analysis to the aggregate data from its pay pools. That is, the PEO did not analyze the rating results at the level where decisions are made—in the case of NSPS ratings and payouts, the pay pool level. Similarly, in reviewing the postdecisional analyses that the components conducted following the 2007 performance management cycle, we found inconsistencies in their approaches, primarily because the components were allowed to develop their own approaches for conducting this analysis. For example, only the Army and Fourth Estate included an analysis of the rating results by demographics as part of their respective postdecisional analyses.
However, we also found that neither the Army’s nor the Fourth Estate’s demographic analysis of the ratings provided decision makers with sufficient information to identify possible barriers or corrective actions that could be taken to address such barriers. As with the PEO, this problem arises because neither the Army nor the Fourth Estate conducted its analysis at the pay pool level. Of the eight locations we visited for our review, we found that only one organization’s postdecisional analysis following the 2008 performance management cycle included an analysis of its rating results by demographics. Since we issued our 2008 report, DOD has taken steps to promote a degree of consistency in its postdecisional analysis of NSPS ratings and payout data. Specifically, in December 2008, DOD revised its implementing issuance to require the heads of DOD components to conduct an annual analysis of NSPS performance ratings and payouts for subordinate elements, and issue guidance to lower echelons and otherwise act to identify, examine, and remove barriers to similar rating and payout potential for demographic and other groups in the workforce. Further, in May 2009, the PEO issued guidance, entitled Guidance for Conducting Annual Analysis of NSPS Performance Ratings and Payouts, in order to provide the components with a framework and suggested approaches for conducting their annual analysis and to serve as a starting point for identifying and examining barriers. For example, the guidance notes that the NSPS performance management system’s rating and payout process has four main outcomes—the rating of record, number of shares awarded, payout, and the distribution of the payout—and that each outcome should be reviewed. The guidance also notes that analysis is best done at the level where decisions are made—in the case of NSPS ratings and payouts, the pay pool level.
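A pay-pool-level analysis of the kind the guidance describes might be sketched as follows. The record fields and the sample demographic attribute are hypothetical, and a real analysis would also apply statistical tests chosen in consultation with experts, as the guidance encourages; this sketch only shows why aggregating at the pay pool level, rather than across all pools, lets reviewers compare outcomes where rating decisions are actually made.

```python
# Minimal sketch: mean rating of record within each pay pool, broken out
# by a demographic attribute. Field names ("pay_pool", "rating", and the
# chosen group_key) are illustrative assumptions, not DOD's data schema.
from collections import defaultdict

def mean_rating_by_pool_and_group(records, group_key):
    """Return {(pay_pool, group): mean rating} so that comparisons are
    made at the level where rating and payout decisions occur."""
    totals = defaultdict(lambda: [0, 0])  # key -> [sum of ratings, count]
    for r in records:
        key = (r["pay_pool"], r[group_key])
        totals[key][0] += r["rating"]
        totals[key][1] += 1
    return {k: s / n for k, (s, n) in totals.items()}
```

An aggregate-only analysis would collapse all pools into one number; keying the results by `(pay_pool, group)` preserves the pool-level differences that can signal a potential barrier.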
Further, the guidance expresses the expectation that changes and improvements to the guidance will be made as the components conduct their analyses. It is careful to ensure that the components understand the base parameters for conducting the analysis so that it is conducted in a methodologically sound manner. It encourages consultation with experts, such as statisticians and human resources researchers, to help determine the most suitable analytical models to employ, the statistical tools to use, and the standards to adopt for understanding, measuring, and reporting significant findings. Finally, it makes conducting the analysis a shared responsibility among various offices, including the components’ legal, equal employment opportunity, and human resources offices, but notes that the components should consider tasking their Office of General Counsel or Office of the Judge Advocate General with primary responsibility for conducting the analysis and ensuring that adequate resources are provided in support of the function, because those staff are well positioned to ensure that the components are in compliance with applicable statutes, regulations, and policies. While issuance of the May 2009 guidance represents a noteworthy step, the guidance does not address all steps suggested by the Equal Employment Opportunity Commission for identifying and addressing potential barriers to fair, consistent, and equitable ratings. The Equal Employment Opportunity Commission’s Instructions to Federal Agencies for EEO Management Directive 715 provides guidance that agencies can use in identifying and addressing potential barriers.
The instructions state that “barrier identification and elimination is the process by which agencies uncover, examine, and remove barriers to equal participation at all levels of the workforce.” Management Directive 715 further states that “where it is determined that an identified barrier serves no legitimate purpose with respect to the operation of an agency, this Directive requires that agencies take immediate steps to eliminate the barrier.” In conducting their analysis, the components’ data may uncover barriers or other potential problems. However, understanding why the barrier or problem exists, or what to do to address it, may require that the components take additional steps. To identify and eliminate potential barriers, the directive outlines a four-step process: (1) analyzing workforce data to identify potential barriers; (2) investigating actual barriers and their causes; (3) eliminating barriers, which includes devising a plan for improvement and developing overall objectives for barrier elimination, with corresponding action items, responsible personnel, and target dates; and (4) assessing the success of the plan. The PEO’s guidance aims to promote a degree of uniformity and standardization in conducting postdecisional analyses. However, the PEO’s guidance does not specify what process the components should follow to investigate potential barriers and their causes, nor does it specify a process for eliminating barriers that are found. By not specifying such steps in its guidance, the components may not follow a consistent approach when investigating barriers, which could hinder their efforts to eliminate them. While not predecisional, we recognize that DOD’s approach does provide some benefits, some of which are similar to those of a predecisional analysis. For example, DOD’s approach lays out a method of analyzing ratings that would address some of the merit principles in 5 U.S.C. 
§ 2301—for example, that employees should receive fair and equitable treatment in all aspects of personnel management and that employees should be protected against arbitrary action, personal favoritism, or coercion for partisan political purposes. However, as stated previously, because DOD does not specify what process the components should follow to investigate and eliminate potential barriers, the components may not follow a consistent approach, which could hinder their efforts to ensure fair, consistent, and equitable ratings. While DOD continues to take steps to ensure a reasonable amount of transparency in its implementation of NSPS, concerns about the overall transparency of the system continue to exist. To improve the transparency of the NSPS performance management system, our September 2008 report recommended that DOD require commands to publish the final rating results to employees. DOD concurred with our recommendation and, in November 2008, amended its NSPS regulations and implementing issuances to require commands to publish the final rating results to employees. Under DOD’s revised guidance, performance review authorities are required to communicate the general pay pool results to the NSPS workforce in writing. At a minimum, this includes the number of pay pools (if aggregate pay pool results are necessary), the number of employees rated, the rating and share distributions, the average rating, the average share assignment, the share value (or average share value), and the average payout expressed as a percentage of base salary. At the eight locations we visited, we found that each of the pay pools shared this information with employees following the 2008 NSPS performance management cycle. DOD continues to take other steps to ensure a reasonable amount of transparency of the NSPS performance management system.
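The minimum results that performance review authorities must publish could be computed along the lines of the following sketch. The field names and output format are illustrative assumptions on our part, not the format DOD prescribes.

```python
# Hedged sketch of the minimum pay pool results communicated to employees:
# employees rated, rating distribution, averages, and the average payout
# expressed as a percentage of base salary. Record fields are hypothetical.

def pay_pool_results(employees):
    n = len(employees)
    ratings = [e["rating"] for e in employees]
    shares = [e["shares"] for e in employees]
    return {
        "employees_rated": n,
        # Count of employees at each rating of record, e.g. {3: 12, 4: 5}.
        "rating_distribution": {r: ratings.count(r) for r in sorted(set(ratings))},
        "average_rating": sum(ratings) / n,
        "average_shares": sum(shares) / n,
        # Mean of each employee's payout as a percentage of base salary.
        "average_payout_pct": 100 * sum(e["payout"] / e["salary"] for e in employees) / n,
    }
```

Publishing summary statistics like these, rather than individual results, lets a pay pool show its workforce how ratings and payouts were distributed without disclosing any one employee's outcome.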
In May 2009, the PEO launched “NSPS Connect,” a centralized Web portal for employees, managers, and others to access NSPS products, such as online training courses, fact sheets, tips sheets, and information on the system’s automated tools. The PEO also continues to take steps to collect and share lessons learned on the department’s experiences implementing NSPS and facilitate lessons learned briefings with the components following each performance management cycle. Further, the PEO continues to report periodically on internal assessments and employee survey results relating to the NSPS performance management system. For example, in May 2009, the PEO published the results of its 2008 evaluation of NSPS. Similarly, as we previously reported, DOD posts the results of its survey of civilian employees on a Web site that is accessible to DOD employees, supervisors, and managers. Officials at each of the eight locations we visited told us that efforts were being made to help ensure transparency of the NSPS performance management system and the rating process. According to officials, among the steps being taken to help ensure transparency, for example, were establishing multiple communities of practice, disseminating business rules and other guidance on NSPS to employees and managers under the system, and publishing monthly newsletters on NSPS. At seven of the locations, officials told us that town hall meetings were held to keep employees informed of NSPS-related developments, and officials at six locations told us that mock pay pool panels were held to show employees how the pay pool process works. However, our site visits revealed some concerns about the overall transparency of the system. For example, at three locations officials expressed concerns over a lack of transparency with regard to their pay pools’ business rules, indicating that their business rules either had not been published or were published well after the performance management cycle had begun. 
At one location, pay pool panel members told us that though it was 6 months into the current performance management cycle, they did not yet have copies of their business rules. Similarly, rating officials at four locations told us that they did not understand what constituted a rating of “4” because neither their pay pools nor business rules provided clear criteria. Although our site visits revealed concerns over the extent to which meaningful distinctions in individual employee performance are being made under NSPS, DOD has taken some steps that include addressing a recommendation we made in our 2008 report aimed at encouraging the use of all available rating categories. Specifically, at all eight locations we visited, officials told us that they did not believe NSPS was being implemented in a manner that encouraged meaningful distinctions in individual employee performance. For example, officials at four locations told us that they were hesitant to give ratings lower than a “3,” and officials at six locations told us that they believed that there was a forced distribution of the ratings or manipulation of the ratings in order to achieve a predetermined quota. Further, at one location we found that management stressed the importance of maintaining employee share value at above 2.0, which would result in a higher payout, regardless of the employee’s rating. According to the PEO, any effort to limit share value is a roundabout way to establish preset limits on ratings and would constitute forced distribution, which the NSPS regulations prohibit. However, in response to concerns about the potential for the forced distribution of performance ratings under NSPS, in April 2009 the department posted to its NSPS Web site a fact sheet emphasizing that the forced distribution of ratings is prohibited under NSPS and describing how meaningful distinctions in performance are made under the system. 
DOD’s fact sheet provides guidance specifying what constitutes the forced distribution of ratings, why the forced distribution of ratings is prohibited, how use of standard performance indicators minimizes the potential for individual bias or favoritism, and how organizations can best apply this information when rating and rewarding employee performance under NSPS. DOD has also noted that if employees believe their ratings did not result from meaningful distinctions or are unfair, the system affords them the opportunity to challenge their ratings through a formal process known as reconsideration. See appendix II for further information on the NSPS reconsideration process. Within DOD, both the PEO and the components are responsible for monitoring the implementation of NSPS. As part of its efforts to monitor the implementation of NSPS, the PEO conducts broad, annual evaluations of NSPS to determine whether the system is on track to achieve certain goals, or key performance parameters, by, among other things, monitoring employee perceptions of the system using DOD’s survey of civilian employees. In May 2009, the PEO published its first evaluation of NSPS, which focused on determining whether NSPS, as implemented in spiral 1 organizations, was on track to achieve some of the goals specified by DOD and if any improvements were needed. While some of DOD’s goals for the system lend themselves to comparisons to the safeguards—for example, one of its goals is ensuring a credible and trusted system (which could align with the transparency safeguard)—PEO officials stated that to date, their focus has been limited to understanding how the components have generally implemented NSPS and not on monitoring or assessing the components’ implementation of the safeguards.
With respect to the components, DOD’s implementing issuances state that the heads of DOD components are accountable for the manner in which management in their organizations carries out NSPS policy, procedures, and guidance. However, we found that only the Army has taken steps similar to the PEO to assess whether it is on track to achieve DOD’s goals for the system—the other components have not done so. Furthermore, none of the components monitors how the safeguards specifically are implemented within their organizations because there is no requirement to do so. We have previously reported that transitioning to a more performance-oriented pay system is a huge undertaking that requires constant monitoring. Further, in our 2008 assessment of NSPS, we noted that continued monitoring of the safeguards was needed to ensure that DOD’s actions were effective as implementation of NSPS proceeded. While DOD’s efforts to date have helped provide decision makers with some information on how the department is implementing NSPS, including potential areas for changes or improvements, they do not provide decision makers in DOD and the Congress with information to determine whether the safeguards specifically have been implemented effectively. Without monitoring the safeguards’ implementation, decision makers in DOD and the Congress lack information that could be used to determine whether the department’s actions are effective and whether the system is being implemented in a fair, equitable, and credible manner. For example, in conducting our review, we identified some issues related to the implementation of the safeguards that illustrate the need to monitor their implementation. These include the following:

Ensuring that adequate agency resources are allocated for the system’s design, implementation, and administration. We found that each of the components generally lacks visibility over the reasons why organizations have supplemented their pay pool funds.
For example, Air Force NSPS program officials told us that for the 2009 payout, while they knew that 8 of the Air Force’s 18 major commands supplemented their pay pool funds, they did not know the specific reasons why. According to PEO officials, organizations might elect to supplement their pay pool funds for a variety of reasons—for example, to recruit or retain employees or to compete with other organizations for talent. However, because decision makers do not understand the reasons why pay pools supplement their funding—which could help DOD and the components gauge the extent to which adequate resources have been allocated to the system—they cannot identify or assess any trends in these practices. Indeed, DOD’s 2008 evaluation report notes that some employees in organizations that supplemented their pay pools’ funding questioned whether the higher funding levels could be sustained over the long term.

Ensuring reasonable transparency of the system and its operation. We found evidence that three of the pay pools we visited deviated from their business rules during the last performance management cycle, indicating a lack of transparency of the performance management process in some instances. DOD’s guidance states that a pay pool’s business rules are the guiding principles or ground rules that are used throughout the pay pool process, that pay pool panels should establish these principles and hold one another accountable for following them, and that a pay pool’s policies—which would include its business rules—will be made available to employees before the end of the performance cycle. At three of the locations we visited, however, we found evidence of such deviations from the business rules during the course of the last performance management cycle.
For example, at one location the pay pool’s business rules required all recommended ratings be reviewed, noting that the pay pool panel will ensure that all employees receive appropriate consideration and that ratings are fair and consistent. However, officials we spoke with at that location told us that they did not review all recommended ratings in accordance with their business rules. Rather, only the recommended ratings of “4” or “5” were reviewed. As another example, we found evidence that a pay pool at another location used different criteria than what was specified in its business rules for allocating the number of shares to employees. According to component-level NSPS program officials, in order to ensure transparency of the system, pay pools should not deviate from their business rules once those rules are published. However, none of the components requires its pay pools to notify it when such an event occurs, or of the reasons why, though doing so could help provide decision makers with information on the extent to which pay pools are implementing the system in a manner that is transparent to employees. DOD civilian personnel have mixed perceptions about NSPS, and although DOD has taken some steps toward addressing employees’ concerns, it has not yet developed and implemented an action plan to address areas where employees express negative perceptions of the system. DOD’s most recent survey of civilian employees reveals that NSPS employees have mixed perceptions about NSPS. The responses to questions specifically asking about NSPS show positive perceptions about some aspects of performance management under NSPS, including connecting pay to performance, but show negative perceptions about other aspects of performance management, such as the appraisal process. 
Further, the most recent data indicate that the perceptions of those employees who have worked under NSPS the longest appear to have remained largely unchanged from the negative perceptions we reported in 2008. Moreover, during discussion groups we held at eight locations outside the continental United States, civilian employees and supervisors expressed consistent concerns and negative perceptions of NSPS that are similar to those identified in our 2008 report, although they also identified positive aspects of the system. DOD has taken some steps to address employees’ negative perceptions of the system; however, the department has yet to develop and implement an action plan that meets the intent of our prior recommendation because it does not specify such things as the actions DOD intends to take, who will be responsible for taking the action, and the timelines for doing so. We continue to believe that implementing such an action plan is important and note that doing so would be a step that DOD could take to involve employees in the system’s implementation—which is one of the safeguards we previously discussed. According to DOD’s most recent survey data, some NSPS employees recognize that positive aspects of performance management, such as connecting pay to performance, exist under the system. For example, as shown in table 2, DOD’s survey data for 2008 indicate that an estimated 38 percent of NSPS employees agree that differences in their performance are recognized in meaningful ways, as compared with an estimated 33 percent of non-NSPS employees. Further, an estimated 42 percent of NSPS employees agree that pay raises depend on how well employees perform their jobs, as compared with an estimated 25 percent of non-NSPS employees. When asked about how poor performers are dealt with, an estimated 34 percent of NSPS employees, versus an estimated 27 percent of non-NSPS employees, agreed that steps are taken to deal with poor performers. 
In comparison, an estimated 47 percent of non-NSPS employees, as compared with an estimated 44 percent of NSPS employees, agreed that their current performance appraisal system motivates them to perform well. Further, an estimated 34 percent of non-NSPS employees, as compared with an estimated 29 percent of NSPS employees, agreed that their performance appraisal system improves organizational performance. Table 3 shows additional comparisons between NSPS and non-NSPS employee responses to questions about performance appraisals. In our first assessment of NSPS, we reported that the results of DOD’s Status of Forces Survey of Civilian Employees indicated that the perceptions of employees who had been under the system the longest had become more negative on questions related to performance management. However, the results of DOD’s most recent administration of the survey in 2008 indicate that spiral 1.1 employee perceptions are about the same as in the May 2007 survey, as shown in table 4. For example, from the November 2006 through February 2008 administrations of DOD’s survey, the percentage of spiral 1.1 employees who agreed that they understood what they had to do to be rated at a different performance level declined from an estimated 59 percent in November 2006 to an estimated 53 percent in May 2007, then remained largely unchanged in February 2008 at an estimated 54 percent. In addition, when asked about the overall impact that NSPS will have on personnel practices in DOD, spiral 1.1 employees’ perceptions have become significantly more negative since first converting to NSPS in 2006, but showed little change between the May 2007 and February 2008 surveys. Specifically, the results of the 2008 survey indicate that an estimated 22 percent of spiral 1.1 employees responded that the overall impact of NSPS on the department’s personnel practices would be positive, as compared with an estimated 23 percent in May 2007 and an estimated 25 percent in November 2006.
Table 5 shows a comparison of spiral 1.1 employee responses over time about the overall impact of NSPS on personnel practices in DOD. As with our first review of NSPS, DOD civilians in our discussion groups at locations outside the continental United States continue to express wide-ranging but consistent concerns about the NSPS performance management system. Although the results of our discussion groups are not generalizable to the entire population of DOD civilian employees, the themes that emerged provide valuable insight into employees' perceptions about the implementation of NSPS thus far. Specifically, during these discussion groups, participants at six locations told us that they were initially optimistic about the intent of NSPS and the concept of pay for performance. Further, some participants indicated that they remain positive about the amount of performance pay and flexibilities afforded to supervisors to rate their employees under the system. However, as with our first review, discussion group participants at all eight locations we visited primarily expressed frustration with and concern about certain aspects of NSPS implementation and the system. The prevalent themes that emerged during our discussion groups include concerns over (1) the negative impact of NSPS on employee motivation and morale, (2) the excessive amount of time spent navigating the performance management process, (3) challenges with job objectives, (4) factors undermining employee confidence in the system, and (5) factors unrelated to job performance affecting employees' final performance ratings. As we noted in 2008, the themes that emerged during our discussion group sessions are not surprising. Our prior work, as well as that of OPM, has recognized that organizational transformations, such as the adoption of a new performance management system, often entail fundamental and radical changes that require an adjustment period to gain employees' trust and acceptance. 
As a result, we expect major change management initiatives in large-scale organizations to take several years to be fully successful. A prevalent theme from our discussions with both employees and supervisors was that several aspects of NSPS have had a negative impact on employee motivation and morale—consistent with our first assessment of NSPS. Specifically, discussion group participants at all eight locations discussed how various aspects of NSPS—for example, their perception that a rating of “3” is only average—have negatively affected their motivation and morale. Discussion group participants at six of the eight locations also told us they have negative perceptions of what a rating of “3” means. At five of those locations, discussion group participants told us that they continue to believe that a rating of “3” means only “average,” in contrast to “valued performer,” as it was initially defined to the workforce by DOD. Discussion group participants at five locations also discussed how achieving a rating higher than a “3” seemed to be an unattainable goal. For example, employees at four locations told us that they felt NSPS either does not provide incentives for high performance or encourages only mediocre performance from employees under the system because of the high number of employees receiving “3”-level ratings each year. As another example, supervisors at one location noted that across the installation there is a general feeling that everyone receives a rating of “3,” and therefore such a rating is considered average, no matter how DOD defines it. Similarly, discussion group participants at seven locations told us that they felt it was difficult for employees in certain positions to receive a rating of “5” because of the nature of their work or the perceived value their management placed on those positions. 
At one of those locations, supervisors told us that they felt such things as how the pay pool’s business rules were structured affected whether an employee could receive a high rating. At that location, the pay pool’s business rules specified that an employee must receive a higher-level award, such as a command or agencywide award, to receive a rating of “5.” However, discussion group participants told us that they felt some employees were not in a position to receive such awards because of their positions or the type of work they did. In addition, discussion group participants at all eight locations we visited expressed frustration over the amount of the annual performance payout provided under NSPS. For example, discussion group participants noted that they felt the payout was not significant enough to encourage anything other than average performance. Discussion group participants at six of the eight locations also told us that they felt NSPS discourages employees from seeking additional responsibilities and opportunities that fall outside the scope of their objectives because their payout may not reflect their additional work. In addition, discussion group participants at six locations told us that because supervisory positions under NSPS require such a significant increase in responsibility and effort, and because the maximum allowable pay increase for reassignments is capped at 5 percent, some employees may not seek promotion opportunities. Similarly, a discussion group participant at another location expressed frustration that some employees only received their payout in the form of a bonus and not an increase in salary. Discussion group participants at three of the eight locations also expressed concerns that they felt performance payouts under NSPS tended to benefit higher-paid employees at the expense of lower-paid employees. 
For example, employees at one location expressed concerns that in their pay pool, the higher payouts under NSPS seemed to go to employees at the top of the pay bands. Another prevalent theme at seven of the eight locations and also highlighted in our first assessment was that employees spend an excessive amount of time navigating the performance management process. While the discussion group participants' complaints about the time- and labor-intensive nature of the system were not limited to any one part of the process, discussion group participants at seven locations pointed out that the time and effort required to complete the steps of the NSPS performance management process were significantly greater than what was required of them under previous systems. For example, one supervisor we spoke with speculated that his supervisory duties under NSPS took him six times as long to perform as they had under the GS system, while another supervisor told us that he may have spent from 45 to 50 hours assessing the performance of three employees, a task he could have completed in 10 hours under the GS system. At five of the locations we visited, employees expressed concerns about NSPS potentially affecting their ability to complete their jobs or affecting the mission because of what they perceived as an excessive amount of time required of employees and supervisors in navigating the NSPS performance management process. In some instances, employees spoke of impacts NSPS was having on their supervisors, while others spoke of their own experiences navigating the NSPS performance management process. At three locations, discussion group participants described how what they perceived as an excessive amount of time navigating the NSPS performance management process affected their ability to complete their job-related duties, requiring completion of some NSPS tasks, such as self-assessments and employee ratings, after work hours or on weekends. 
One employee described feeling inundated with information on NSPS and that it was difficult to stay on top of things while simultaneously performing his job, while another employee estimated that she spent about 2 hours per week on NSPS-related tasks. In some instances, discussion group participants told us that they saw the potential for the excessive time commitment required by NSPS to affect the missions of their organizations. According to one supervisor, any task that takes employees away from their daily work affects the mission, and any task that takes the time and patience of the command's leadership detracts from the mission. Further, during a site visit, in discussing the potential for NSPS to affect the organization's mission, one general officer we spoke with described NSPS as "mission ineffective." Another prevalent theme that emerged from our discussions with both supervisors and employees at all eight locations was that there are challenges with employee job objectives under NSPS. According to DOD, the NSPS performance management system is designed to provide a fair and equitable method for appraising and evaluating performance. As part of the system, DOD established the concept of "job objectives," which are the required tasks of a given job as determined by managers and supervisors, and directed that job objectives be developed and used as the standards for evaluating employee performance. However, supervisors and employees at each of the eight locations discussed challenges they experienced developing their job objectives under NSPS. Specifically: Although DOD guidance encourages employees to develop job objectives that are specific, measurable, aligned, realistic, and timed—an approach summarized by the acronym S.M.A.R.T.—employees and supervisors we met at six of eight locations discussed how they found it challenging to develop job objectives that are measurable or that follow the S.M.A.R.T. approach. 
Supervisors at one location objected to the S.M.A.R.T. approach, particularly the "specific" portion, because they felt that job objectives needed to be broad enough to allow employees to discuss any accomplishments they make if they complete additional job activities or other tasks that might arise during the year. Supervisors at two locations discussed how the work they did was nebulous and unpredictable, which made it challenging to develop job objectives that not only reflected the nature of their work but that they could exceed. Similarly, supervisors at another location expressed concerns that employees' job objectives may not reflect the work they do by the end of the performance management cycle because of constant changes within their organization. According to discussion group participants at four locations, guidance for developing job objectives is either limited or nonexistent, which may result in different approaches to developing job objectives across an organization. At one of these locations, employees told us that their management had not established consistent ground rules for developing job objectives and that as a result some employees' job objectives were based on out-of-date position descriptions. One organization we visited used a mixture of mandatory and employee-specific job objectives; but, according to one employee, little guidance exists to help employees and supervisors when they need to develop personalized job objectives. Employees at another location told us that there were significant differences in the amount of involvement they had in developing their job objectives. For example, one individual told us that employees in her office develop their own objectives, while another said employees in her office are assigned mandatory objectives and were thus unable to provide input into their objectives. 
Discussion group participants at six locations expressed concerns that it can be difficult to achieve a high rating for some job objectives. Some locations we visited used mandatory job objectives, which left employees concerned that their job objectives did not accurately capture the full responsibilities of the work they performed. For example, at one location, a uniform, mandatory supervisory objective accounted for half of supervisors’ ratings, which, according to one supervisor, diminished the value of the other responsibilities they had. The supervisor expressed further concern that some mandatory job objectives, such as those assigned to government purchase card holders, require a pass-fail evaluation, making it difficult, if not impossible, for the employee to receive a high rating. In one instance, a location we visited required all employees to be rated against a mandatory safety objective. However, according to some supervisors, it did not make sense for everyone to have the mandatory safety objective because for many employees, safety issues were out of their control. During our discussion groups, participants at all eight locations also discussed how various factors undermine employees’ confidence in the system and its implementation thus far. For example, discussion group participants at six locations commented that they do not believe that the NSPS performance management system has the ability to rate employees fairly. At the locations we visited, discussion of these concerns centered on such things as the perception of subjectivity and the potential for favoritism under NSPS; a lack of transparency surrounding the pay pool panel process, including a lack of understanding of what employees needed to do to receive higher ratings; and the perception of inconsistencies in interpretations of the standards used to determine employee ratings. 
One prevalent theme at all eight locations involved perceptions of subjectivity, such as the potential for favoritism under NSPS during the rating and pay pool panel processes. At five locations, participants discussed their frustration with how NSPS takes the responsibility for rating employees out of the hands of supervisors and places it in the hands of the pay pool panel members, who may or may not have any direct knowledge of individual employees' performance. One supervisor told us that NSPS may inadvertently favor employees who work closely or are in direct contact with members of the pay pool panel because those individuals have direct knowledge of the employees and, sometimes, their performance. Similarly, supervisors at another location told us that they did not feel that their pay pool panel understood their jobs and what they do and expressed frustration that the pay pool panel did not seem to be reaching out to their supervisors and higher-level reviewers for additional input on their performance. At five of the eight locations, discussion group participants also told us that they saw the potential for the employee-supervisor relationship to affect an employee's rating—either to the benefit or detriment of the employee. Another prevalent theme at six of the eight locations—a theme also highlighted in our first assessment of NSPS—was a lack of transparency and understanding of the pay pool panel process. Specifically, supervisors at two locations commented that their organizations' pay pool panel processes were not transparent. A supervisor at one location commented that everything "goes into a black vacuum," while another likened the process to a "black box." Employees at that same location referred to the organization's pay pool panel process as a "star chamber," where decisions are made but are not explained to employees. 
Employees and supervisors at five locations expressed concerns about the amount of information they received from their pay pools and about the process itself; some desired further information to help them better understand the pay pool panel process. In addition, at six of the eight locations, discussion group participants told us that they did not understand what they needed to do to receive a higher rating. For example, an employee at one location told us that she was told by her supervisor that all employees had to receive a rating of “3” because they would have had to “walk on water” to receive a higher rating. Discussion group participants at two other locations also discussed how “walking on water” was a perceived standard for receiving a high rating under NSPS. At three locations, supervisors commented that they were unclear about what they could do to help their employees receive better ratings, while employees at four locations were unclear about what they could do to achieve higher ratings. Discussion group participants at six locations also raised concerns about inconsistent interpretation of the standards used when evaluating civilian employees under NSPS. Discussion group participants reported concerns that military supervisors may rate employees using more stringent standards than their civilian counterparts. Discussion group participants also reported concerns that some military supervisors may not value the NSPS performance management process and sometimes devote less time and effort to the process, which could affect employees’ ratings. One civilian supervisor told us that some military supervisors with whom he attended NSPS training had a much harsher perspective of employee performance than their civilian counterparts. 
For example, he noted that the military supervisors indicated that giving a rating of “1” or “2” was acceptable, whereas he believed civilian supervisors would be more inclined to give an employee a rating of “3.” Employees also told us that they do not believe some military supervisors value the work of employees who perform certain job functions, such as providing child care on an installation. A prevalent theme expressed by discussion group participants at all eight locations we visited is that factors unrelated to performance may affect employees’ final performance ratings. Such factors include the existence of a forced distribution or quota of ratings, the writing ability of employees and supervisors, and pay pool panel members’ knowledge of employees. For example: Discussion group participants at all eight locations expressed concerns that their pay pool panels used a forced distribution or quota for ratings, which dictated the number of ratings in each category that could be awarded. Employees at one location told us that they were aware of their management’s attempts to artificially preserve a higher share value for employees by primarily awarding ratings of “3,” regardless of the employees’ performance. Further, at three locations discussion group participants told us that their management told them that all employees should expect to receive a rating of “3.” Moreover, some discussion group participants told us that they doubted that their actual performance had the bearing it was supposed to have on their final ratings, while others felt the use of a forced distribution or quotas was in direct conflict with the principles of pay for performance under NSPS. 
While no discussion group participants we met with were aware of any explicit guidance provided to pay pool panels or supervisors that limited the number of certain ratings they assigned employees, employees and supervisors from at least three locations believed that informal guidelines existed or that pay pool panels or supervisors were encouraged to limit the number of certain ratings they could assign. Discussion group participants at all eight locations also expressed concerns that the writing ability of employees and supervisors may affect ratings—a theme also highlighted in our first assessment of NSPS. Supervisors at one location likened the process of developing employees’ assessments under NSPS to a writing contest. Moreover, supervisors told us that they felt their writing ability could unintentionally affect their employees’ ratings, noting, for example, that a supervisor’s ability to articulate an employee’s achievements in writing plays a significant role in supporting a higher rating for that employee. Employees shared the supervisors’ concerns, noting that they believed that succeeding under NSPS depended on the quality of their written assessments, rather than their job performance, and that their ratings could suffer if their supervisors did not provide the pay pool panel with well-written assessments. In discussing the potential influence that employees’ and supervisors’ writing skills may have on a pay pool panel’s assessment of an employee, officials at seven of the eight locations acknowledged that in some instances writing skills had affected employees’ ratings and could overshadow employees’ performance. Discussion group participants at seven locations also expressed frustration that employee ratings were potentially affected by the extent to which pay pool panel members have personal knowledge of employees or understand the nature of their work in general. 
Some discussion group participants felt that pay pool panel members' personal knowledge of employees helped some employees receive higher ratings, while others told us that they felt that members of the pay pool panel were too far removed from them and lacked direct knowledge of the work they performed. One employee believed that individuals who were involved in implementing NSPS, who worked closely with pay pool panel members, or who were senior managers were more likely to receive higher ratings under NSPS than others. Other employees told us that they were concerned about the potential for pay pool panel members to advocate in some way for employees they personally know—for example, by encouraging the pay pool panel to contact a specific employee's supervisor to seek additional information or justification for a rating. As a result, they felt that pay pool panel members' personal knowledge of employees could benefit some employees, but not others. In our first assessment of NSPS, we recommended that DOD develop and implement a specific action plan to address employees' perceptions of NSPS, based on guidance published by OPM for conducting annual employee surveys and providing feedback to employees on the results. The guidance suggests that after an agency's survey results have been reviewed, the agency has a responsibility to provide feedback to employees on the results of the survey, as well as to let employees know the intended actions to address the results and the progress made on these actions. Further, the guidance suggests that agencies consider the following when developing action plans: who will be responsible for taking action; who will be responsible for providing oversight; whether the individuals taking the action have the necessary authority to make things happen; what coordination, if any, is required, and how it will be accomplished; and how agencies will adjust given any changes or delays in their actions. 
Since then, in June 2009, the PEO issued a departmentwide memorandum entitled “Addressing Key NSPS Workforce Concerns”; however, issuance of this memorandum does not fully meet the intent of our 2008 recommendation. Specifically, the PEO’s June 2009 memorandum summarizes key concerns from the department’s 2008 evaluation of NSPS, summarizes departmentwide actions that had been taken to date to address employees’ concerns about the system, and suggests approaches to enhance local efforts to address workforce concerns. The PEO identified five key areas of concern, which are similar to those identified in our own discussion group sessions with DOD employees and supervisors: (1) performance communication and feedback, (2) understanding of performance management and the pay pool process, (3) trust in the system and its processes, (4) training and information, and (5) the amount of time needed to fulfill performance management responsibilities. The PEO’s memorandum urged the components to leverage information from the department’s 2008 evaluation of NSPS and focus on the five areas discussed above as they plan their own actions. Further, the PEO’s memorandum noted that DOD has taken some steps to address employees’ concerns about NSPS—for example, developing and fielding a pay pool training course for employees and rating officials, modifying its implementing issuances to require all performance review authorities to review pay pool panel results on an annual basis, and providing guidance to employees on the prohibition against the forced distribution of ratings. Issuance of the PEO’s memorandum represents an important first step. However, because the memorandum does not specify actions the department intends to take, who will be responsible for taking the action, and timelines for addressing areas where employees express negative perceptions of the system, it does not fully meet the intent of our 2008 recommendation. 
In developing an action plan, we note that OPM recently issued guidance that agencies can use in developing action plans for improving employee satisfaction. According to OPM, action plans should clearly (1) state the objectives, (2) identify actions to be taken, (3) provide outcome measures and improvement targets, and (4) describe how progress will be tracked. In addition to identifying the specific actions that will be taken to achieve improvements, OPM’s guidance also suggests that agencies specify time frames for accomplishing the actions, who will be responsible for implementing the actions, who will be affected by the actions, the resources required, and a plan to communicate these actions to managers and employees. We continue to believe that developing and implementing a plan to address employees’ perceptions of NSPS could help DOD make changes to the system that could lead to greater employee acceptance and, ultimately, the system’s successful implementation. Further, we note that having such a plan is an approach that DOD could take to involve employees in the system’s implementation—which is one of the safeguards we previously discussed. As we noted in our first assessment, DOD’s implementation of NSPS placed the department at the forefront of a significant transition facing the federal government. However, toward the end of this review, the future of NSPS became uncertain, given the proposed legislation that, if enacted, would terminate the system and require any future system created by DOD to use safeguards similar to those discussed in our report, including ensuring employee involvement in the system and providing adequate training and retraining. In light of the contingent nature surrounding NSPS and the possibility of implementing a different system, sustained and committed leadership will be imperative to provide focused attention necessary to implement any pay-for-performance system within DOD. 
Key to implementing a fair, effective, and credible system is including safeguards early on in the design of the system. Since we issued our first assessment of NSPS in 2008, we note that DOD has continued to take steps to meet the intent of each of the safeguards. However, with this latest assessment, we note that the department has not implemented the safeguards systematically; for example, it has not ensured that the training provided to employees on the system’s operations is effective. Further, DOD has not monitored how the safeguards specifically are implemented by lower-level organizations across the department. As a result, decision makers in DOD lack information that could be used to determine whether the department’s actions are effective and whether the system is being implemented in a fair, equitable, and credible manner. Additionally, while DOD has gained experience operating under NSPS, at the time of our review it had not yet developed an action plan for addressing employees’ perceptions of the system, as we recommended in 2008. As DOD moves forward with implementing a pay-for-performance system—whether NSPS or another—we believe that it is important for the department to improve upon its implementation of the safeguards and address employees’ concerns. Left unchecked, these issues could undermine any future human capital reform efforts within DOD. To help implement a fair, effective, and credible performance management system for its civilian employees—whether NSPS or another—we recommend that the Secretary of Defense take the following three actions: Review and evaluate the effectiveness of the department’s training. Ensure that guidance is in place for conducting a postdecisional analysis that specifies what process the components should follow to investigate and eliminate potential barriers to fair and equitable ratings. 
Include, as part of the department's monitoring of the implementation of its system, efforts to monitor and evaluate how the safeguards specifically are implemented by lower-level organizations across the department. In September 2009, we provided DOD with a draft of this report that included three recommendations to better address the safeguards and improve implementation of the NSPS performance management system. Specifically, we recommended that DOD (1) evaluate NSPS training, (2) review and revise its guidance for conducting postdecisional analysis of NSPS ratings, and (3) monitor how the safeguards specifically are implemented. In commenting on a draft of our report, DOD partially concurred with our three recommendations; DOD's comments are reprinted in appendix III. In partially concurring, DOD noted the expectation that the Congress would require the department to terminate NSPS by January 1, 2012, and that this action, in turn, would require the department to focus on drawing down NSPS in an orderly manner. DOD further stated that it would consider acting on our recommendations to the extent they are relevant as the department moves forward with any future performance management system. We believe that this is a reasonable approach. As discussed above, we recognize the contingent nature surrounding NSPS as a result of provisions in the proposed National Defense Authorization Act for Fiscal Year 2010, which recently passed both Houses of Congress. Accordingly, we revised our recommendations to apply to any future performance management system for the department's civilian employees—whether NSPS or another system. However, we also note that provisions of the proposed legislation would require DOD to implement certain safeguards and issue regulations for that system to provide a fair, credible, and transparent performance appraisal system. We therefore continue to believe that our recommendations have merit. 
We are sending copies of this report to the appropriate congressional committees. We will make copies available to others upon request. The report also is available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to the report are listed in appendix IV. As with our first assessment of the National Security Personnel System (NSPS) in 2008, we limited our scope in conducting this review to the performance management aspect of NSPS. Therefore, we addressed neither performance management of the Senior Executive Service at the Department of Defense (DOD) nor other aspects of NSPS, such as classification and pay. To determine the extent to which DOD has implemented safeguards as part of the NSPS performance management system and monitored the implementation of the safeguards, we used the following safeguards, which we also reported on in our 2008 review: Involve employees, their representatives, and other stakeholders in the design of the system, to include employees directly involved in validating any related implementation of the system. Assure that the agency’s performance management systems link employee objectives to the agency’s strategic plan, related goals, and desired outcomes. Provide adequate training and retraining for supervisors, managers, and employees in the implementation and operation of the performance management system. Provide a process for ensuring ongoing performance feedback and dialogue between supervisors, managers, and employees throughout the appraisal period, and for setting timetables for review. 
- Implement a pay-for-performance evaluation system to better link individual pay to performance, and provide an equitable method for appraising and compensating employees.
- Assure that certain predecisional internal safeguards exist to help achieve consistency, equity, nondiscrimination, and nonpoliticization of the performance management process (e.g., independent reasonableness reviews by a third party or reviews of performance rating decisions, pay determinations, and promotions before they are finalized to ensure that they are merit-based, as well as pay panels who consider the results of the performance appraisal process and other information in connection with final pay decisions).
- Assure that there are reasonable transparency and appropriate accountability mechanisms in connection with the results of the performance management process, to include reporting periodically on internal assessments and employee survey results relating to performance management and individual pay decisions while protecting individual confidentiality.
- Assure that performance management results in meaningful distinctions in individual employee performance.
- Provide a means for ensuring that adequate agency resources are allocated for the design, implementation, and administration of the performance management system.

To assess implementation of the safeguards, we reviewed the legislative requirements and obtained and analyzed regulations and other guidance for implementing the NSPS performance management system. We also obtained and analyzed other documents, such as DOD’s rating results and reconsideration statistics, for the 2007 and 2008 NSPS performance management cycles.
We also interviewed knowledgeable officials in DOD’s NSPS Program Executive Office and the NSPS program offices of the four components—the Army, the Air Force, the Navy, and the Fourth Estate—to obtain a comprehensive understanding of their efforts to implement NSPS and each of the safeguards, as well as the processes, procedures, and controls used for monitoring and overseeing implementation of the system. In addition, we conducted site visits to select organizations located outside the continental United States to assess implementation of the safeguards. To allow for appropriate representation by each component, we visited two organizations per component, or eight organizations in total. The organizations we visited were selected based on a number of factors, such as the presence of a large number or concentrated group of civilian employees under NSPS and, when possible, the presence of employees who had converted to NSPS under both spirals 1 and 2. We focused our efforts for this review on visiting organizations located outside the continental United States because our 2008 review focused on assessing implementation of NSPS and the safeguards at locations that were geographically distributed throughout the United States. We elected to focus our site visits in Germany and Hawaii because, of the civilian employees located outside the continental United States who had converted to NSPS at the time we initiated our review, more than half were located in either Germany or Hawaii. Also, we wanted to determine whether civilian employees located outside the continental United States were experiencing any unique problems or challenges with the system. In Germany, the organizations we visited were the 5th Signal Command; the 435th Air Base Wing; the Defense Finance and Accounting Service; and the George C. Marshall European Center for Security Studies, part of the Defense Security Cooperation Agency.
In Hawaii, the organizations we visited were the Commander, Navy Region Hawaii; Headquarters, Pacific Air Force; the Naval Facilities Engineering Command, Hawaii; and the U.S. Army Corps of Engineers, Honolulu District. For each of the organizations we visited, we met with or interviewed the performance review authority, pay pool managers, pay pool panel members, rating officials, and the NSPS program manager or transition team, among others, to discuss the steps they have taken to implement the safeguards or otherwise ensure the fairness, effectiveness, and credibility of NSPS. To assess the organizations’ implementation of the safeguards, we compared and contrasted the information obtained during our interviews and supplemented this testimonial evidence with the other relevant documentation we obtained, such as the organizations’ pay pool business rules, lessons learned, and training materials. To determine how DOD civilian employees perceive NSPS, we analyzed two sources of employee perceptions or attitudes. First, we analyzed the results of DOD’s survey of civilian employees to identify employee perceptions of NSPS and examine whether and how these perceptions may be changing over time. Second, we conducted small group discussions with civilian employees who had converted to NSPS and administered a short questionnaire to the participants at each of the eight organizations we visited. As with our first assessment of NSPS, our overall objective in using the discussion group approach was to obtain employees’ perceptions about NSPS and its implementation thus far. We analyzed the results of the Defense Manpower Data Center’s (DMDC) Status of Forces Survey of Civilian Employees—including the May 2006, November 2006, May 2007, and February 2008 administrations—to gauge employee attitudes toward NSPS and performance management in general and identify indications of movement or trends in employee perceptions. 
As we reported in September 2008, we have reviewed the results of prior administrations of DMDC surveys and found the survey results, including the results of the Status of Forces Survey of Civilian Employees, sufficiently reliable to use for several GAO engagements. However, to understand the nature of any changes DMDC made to its survey methods for the 2008 administration as compared with previous administrations, we obtained responses to written questions from DMDC officials and discussed these data with them. Based on these responses and discussions, we determined that DMDC’s survey data remain sufficiently reliable for the purpose of our reports on DOD civilian employees’ perceptions of NSPS. We also conducted small group discussions with DOD civilian employees and administered a short questionnaire during site visits in February and March 2009. Specifically, we conducted two discussion groups—one with nonsupervisory employees and another with supervisory employees—at each of the eight locations we visited, for a total of 16 discussion groups. As with our first assessment of NSPS in 2008, our objective in using this approach was to obtain employees’ perceptions about NSPS and its implementation thus far because discussion groups are intended to provide in-depth information about participants’ reasons for holding certain attitudes about specific topics and to offer insights into the range of concerns about and support for an issue. Further, in conducting our discussion groups, our intent was to achieve saturation—the point at which we were no longer hearing new information. As we previously reported, our discussion groups were not designed to (1) demonstrate the extent of a problem or to generalize the results to a larger population, (2) develop a consensus to arrive at an agreed-upon plan or make decisions about what actions to take, or (3) provide statistically representative samples or reliable quantitative estimates.
Instead, our discussion groups provide in-depth information about participants’ reasons for holding certain attitudes about specific topics and offer insights into the range of concerns about and support for an issue. Although the results of our discussion sessions are not generalizable to the entire NSPS civilian population, the composition of our discussion groups was designed to ensure that we spoke with employees from each of the four components at locations outside the continental United States. Because supervisory and nonsupervisory employees have distinct roles with respect to NSPS, we held separate discussion sessions for these groups. To select the discussion group participants, we requested that the organizations we visited provide us with lists of employees who had converted to NSPS. From the lists provided, we selected participants based on their supervisory and nonsupervisory status. To ensure maximum participation of the selected employees, we randomly selected up to 20 participants from each group with the goal of meeting with 8 to 12 individuals in each discussion group and provided the employee names and a standard invitation to GAO’s points of contact to disseminate to the employees. At the majority of locations, we reached our goal of meeting with 8 to 12 individuals in each discussion group; however, since participation was not compulsory, in two instances we did not reach our goal of 8 participants per discussion group. Table 6 provides information on the composition of our discussion groups. To facilitate our discussion groups, we developed a discussion guide to help the moderator in addressing several topics related to employees’ perceptions of the NSPS performance management system. 
These topics include employees’ overall perception of NSPS and the rating process, the training they received on NSPS, the communication they have had with their supervisors, positive aspects they perceive of NSPS, and any changes they would make to the system, among others. Each discussion group was scheduled for a 2-hour period and began with the GAO moderator greeting the participants, describing the purpose of the study, and explaining the procedures for the discussion group. Participants were assured that all of their comments would be discussed in the aggregate or as part of larger themes that emerged. The GAO moderator asked participants open-ended questions related to NSPS, while at least one other GAO analyst observed the discussion group and took notes. Following the conclusion of all our discussion group sessions, we performed content analysis of the sessions in order to identify the themes that emerged and to summarize the participants’ perceptions of NSPS. We reviewed responses from several of the discussion groups and created a list of themes and subtheme categories. We then reviewed the comments from each of the 16 discussion groups and assigned each comment to the appropriate category, which was agreed upon by two analysts. If agreement was not reached on a comment’s placement in a category, another analyst reconciled the issue by placing the comment in one or more of the categories. The responses in each category were then used in our evaluation and discussion of how civilian employees perceive NSPS. Following each discussion group we administered a questionnaire to the participants to obtain further information on their background, tenure with the federal government and DOD, and attitudes toward NSPS. We received questionnaires from 164 discussion group participants.
In addition to collecting demographic data from participants for the purpose of reporting with whom we spoke (see table 7), the purpose of our questionnaire was to (1) collect information from participants that could not easily be obtained through discussion, for example, information participants may have been uncomfortable sharing in a group setting, and (2) collect some of the same data found in past DOD surveys. Specifically, the questionnaire included questions designed to obtain employees’ perceptions of NSPS as compared to their previous personnel system, the accuracy with which they felt their ratings reflected their performance, and management’s methods for conveying overall rating information. Since the questionnaire was used to collect supplemental information and was administered solely to the participants of our discussion groups, the results represent the opinions of only those employees and cannot be projected across DOD, a component, or any single pay pool we visited.

Commander, Navy Region Hawaii, Pearl Harbor, Hawaii
Naval Facilities Engineering Command, Hawaii, Pearl Harbor, Hawaii
NSPS Program Office, Navy Yard, Washington, D.C.
Office of Civilian Human Resources, Navy Yard, Washington, D.C.

We conducted this performance audit from November 2008 through September 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient and appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Department of Defense (DOD) civilian employees who receive performance ratings under the National Security Personnel System (NSPS) have the option of challenging their ratings through a formal process known as reconsideration.
The reconsideration process is the sole and exclusive agency administrative process for nonbargaining unit employees to challenge their ratings. However, DOD’s NSPS regulations also allow for alternative dispute resolution techniques, such as mediation or interest-based problem solving, to be pursued at any time during the reconsideration process consistent with component policies and procedures. Under the reconsideration process, employees may challenge their ratings of record or individual job objective ratings; employees cannot challenge their performance payout, the number of shares assigned, the share value, or the distribution of their performance payout between salary increase and bonus, nor can they challenge their recommended ratings of record, interim reviews, or applicable closeout assessments. In addition, employees who allege that their performance ratings are based on prohibited discrimination or reprisal may not use the reconsideration process; rather, such allegations are to be processed through the department’s equal employment opportunity discrimination complaint procedure. Employees who wish to challenge their rating have 10 calendar days from the receipt of their ratings of record to submit written requests for reconsideration to their pay pool managers. Within 15 calendar days of the pay pool manager’s receipt of an employee’s request for reconsideration, the pay pool manager must render a written decision that includes a brief explanation of the basis of the decision. The pay pool manager’s decision is final, unless the employee seeks further reconsideration from the performance review authority. Specifically, if the employee is dissatisfied with the pay pool manager’s decision, or if none is provided within the prescribed time frames, the employee may submit a written request for final review by the performance review authority or his or her designee. 
This request must be submitted within 5 calendar days of receipt of the pay pool manager’s decision or within 5 calendar days of the date the decision should have been rendered. The performance review authority then is allotted 15 calendar days from receipt of the written request from the employee to make a decision, which is final. If the final decision is to change the rating of record or job objective rating, the revised rating takes the place of the original one, and a revised performance appraisal is prepared for the employee. According to DOD’s 2008 evaluation report, for the 2007 NSPS performance management cycle, 2,302 civilian employees out of the 100,465 employees who were rated under NSPS elected to file a request for reconsideration of their ratings, and of these, about 33 percent of the requests were granted. For the 2008 NSPS performance management cycle, according to the NSPS Program Executive Office, as of June 2009, 4,296 civilian employees out of the 170,149 employees who were rated under NSPS elected to file requests for reconsideration of their ratings, and of these, about 52 percent of the requests were granted. In addition to the contact named above, Ron Fecso, Chief Statistician; Marion Gatling, Assistant Director; Margaret G. Braley; Virginia A. Chanley; William Colwell; Emily Gruenwald; K. Nicole Harms; Cynthia Heckmann; Wesley A. Johnson; Lonnie McAllister; Carolyn Taylor; John W. Van Schaik; Jennifer L. Weber; Cheryl A. Weissman; and Gregory H. Wilmoth made key contributions to the report.

Human Capital: Continued Monitoring of Internal Safeguards and an Action Plan to Address Employee Concerns Could Improve Implementation of the National Security Personnel System. GAO-09-840. Washington, D.C.: June 25, 2009.
Questions for the Record Related to the Implementation of the Department of Defense’s National Security Personnel System. GAO-09-669R. Washington, D.C.: May 18, 2009.
Human Capital: Improved Implementation of Safeguards and an Action Plan to Address Employee Concerns Could Increase Employee Acceptance of the National Security Personnel System. GAO-09-464T. Washington, D.C.: April 1, 2009.
Human Capital: Opportunities Exist to Build on Recent Progress to Strengthen DOD’s Civilian Human Capital Strategic Plan. GAO-09-235. Washington, D.C.: February 10, 2009.
High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 2009.
Human Capital: DOD Needs to Improve Implementation of and Address Employee Concerns about Its National Security Personnel System. GAO-08-773. Washington, D.C.: September 10, 2008.
The Department of Defense’s Civilian Human Capital Strategic Plan Does Not Meet Most Statutory Requirements. GAO-08-439R. Washington, D.C.: February 6, 2008.
Human Capital: DOD Needs Better Internal Controls and Visibility over Costs for Implementing Its National Security Personnel System. GAO-07-851. Washington, D.C.: July 16, 2007.
Human Capital: Federal Workforce Challenges in the 21st Century. GAO-07-556T. Washington, D.C.: March 6, 2007.
Post-Hearing Questions for the Record Related to the Department of Defense’s National Security Personnel System (NSPS). GAO-06-582R. Washington, D.C.: March 24, 2006.
Human Capital: Observations on Final Regulations for DOD’s National Security Personnel System. GAO-06-227T. Washington, D.C.: November 17, 2005.
Human Capital: Designing and Managing Market-Based and More Performance-Oriented Pay Systems. GAO-05-1048T. Washington, D.C.: September 27, 2005.
Human Capital: Symposium on Designing and Managing Market-Based and More Performance-Oriented Pay Systems. GAO-05-832SP. Washington, D.C.: July 27, 2005.
Human Capital: DOD’s National Security Personnel System Faces Implementation Challenges. GAO-05-730. Washington, D.C.: July 14, 2005.
Questions for the Record Related to the Department of Defense’s National Security Personnel System. GAO-05-771R. Washington, D.C.: June 14, 2005.
Questions for the Record Regarding the Department of Defense’s National Security Personnel System. GAO-05-770R. Washington, D.C.: May 31, 2005.
Post-hearing Questions Related to the Department of Defense’s National Security Personnel System. GAO-05-641R. Washington, D.C.: April 29, 2005.
Human Capital: Agencies Need Leadership and the Supporting Infrastructure to Take Advantage of New Flexibilities. GAO-05-616T. Washington, D.C.: April 21, 2005.
Human Capital: Selected Agencies’ Statutory Authorities Could Offer Options in Developing a Framework for Governmentwide Reform. GAO-05-398R. Washington, D.C.: April 21, 2005.
Human Capital: Preliminary Observations on Proposed Regulations for DOD’s National Security Personnel System. GAO-05-559T. Washington, D.C.: April 14, 2005.
Human Capital: Preliminary Observations on Proposed Department of Defense National Security Personnel System Regulations. GAO-05-517T. Washington, D.C.: April 12, 2005.
Human Capital: Preliminary Observations on Proposed DOD National Security Personnel System Regulations. GAO-05-432T. Washington, D.C.: March 15, 2005.
Human Capital: Principles, Criteria, and Processes for Governmentwide Federal Human Capital Reform. GAO-05-69SP. Washington, D.C.: December 1, 2004.
Human Capital: Building on the Current Momentum to Transform the Federal Government. GAO-04-976T. Washington, D.C.: July 20, 2004.
DOD Civilian Personnel: Comprehensive Strategic Workforce Plans Needed. GAO-04-753. Washington, D.C.: June 30, 2004.
Human Capital: Implementing Pay for Performance at Selected Personnel Demonstration Projects. GAO-04-83. Washington, D.C.: January 23, 2004.
Human Capital: Key Principles for Effective Strategic Workforce Planning. GAO-04-39. Washington, D.C.: December 11, 2003.
DOD Personnel: Documentation of the Army’s Civilian Workforce-Planning Model Needed to Enhance Credibility. GAO-03-1046. Washington, D.C.: August 22, 2003.
Posthearing Questions Related to Proposed Department of Defense (DOD) Human Capital Reform. GAO-03-965R. Washington, D.C.: July 3, 2003.
Human Capital: Building on DOD’s Reform Effort to Foster Governmentwide Improvements. GAO-03-851T. Washington, D.C.: June 4, 2003.
Posthearing Questions Related to Strategic Human Capital Management. GAO-03-779R. Washington, D.C.: May 22, 2003.
Human Capital: DOD’s Civilian Personnel Strategic Management and the Proposed National Security Personnel System. GAO-03-493T. Washington, D.C.: May 12, 2003.
Defense Transformation: DOD’s Proposed Civilian Personnel System and Governmentwide Human Capital Reform. GAO-03-741T. Washington, D.C.: May 1, 2003.
Defense Transformation: Preliminary Observations on DOD’s Proposed Civilian Personnel Reforms. GAO-03-717T. Washington, D.C.: April 29, 2003.
High-Risk Series: Strategic Human Capital Management. GAO-03-120. Washington, D.C.: January 2003.
Acquisition Workforce: Status of Agency Efforts to Address Future Needs. GAO-03-55. Washington, D.C.: December 18, 2002.
Military Personnel: Oversight Process Needed to Help Maintain Momentum of DOD’s Strategic Human Capital Planning. GAO-03-237. Washington, D.C.: December 5, 2002.
Managing for Results: Building on the Momentum for Strategic Human Capital Reform. GAO-02-528T. Washington, D.C.: March 18, 2002.
A Model of Strategic Human Capital Management. GAO-02-373SP. Washington, D.C.: March 15, 2002.
Human Capital: Taking Steps to Meet Current and Emerging Human Capital Challenges. GAO-01-965T. Washington, D.C.: July 17, 2001.
Human Capital: Major Human Capital Challenges at the Departments of Defense and State. GAO-01-565T. Washington, D.C.: March 29, 2001.
High-Risk Series: An Update. GAO-01-263. Washington, D.C.: January 2001.
In 2004, the Department of Defense (DOD) began implementing the National Security Personnel System (NSPS)--a human capital system for DOD civilians. NSPS significantly redesigned the way DOD civilians are hired, compensated, and promoted. Pub. L. No. 110-181 mandated that GAO conduct reviews of the NSPS performance management system in calendar years 2008, 2009, and 2010. In this report, GAO assessed (1) the extent to which DOD has implemented certain internal safeguards to ensure the fairness, effectiveness, and credibility of NSPS, and monitored their implementation, and (2) how DOD civilian personnel perceive NSPS, and the actions DOD has taken to address those perceptions. GAO analyzed relevant documents and employee survey results, interviewed DOD officials, and conducted discussion groups with DOD employees at eight locations outside of the continental United States. Toward the end of GAO's review, both Houses of Congress passed proposed legislation that, if enacted, would terminate NSPS and require any future performance management system for DOD civilians to include certain internal safeguards.

DOD continues to take steps to implement internal safeguards as part of NSPS, but implementation of some safeguards could still be improved, and continued monitoring of all safeguards' implementation is needed. In general, DOD has taken some steps to meet the intent of each of the safeguards, and it has implemented some of the recommendations from GAO's September 2008 report. However, opportunities exist for DOD to improve implementation of some safeguards. For example, DOD has not yet evaluated the effectiveness of the training employees receive, although doing so could help DOD measure the impact of its training and its progress toward achieving agency goals. In addition, DOD has not specified in its guidance what process the components should follow to investigate and eliminate potential barriers to fair and equitable ratings.
Consequently, the components may not follow a consistent approach when investigating potential barriers, which could hinder their efforts to eliminate them. Further, GAO previously noted that continued monitoring of the safeguards was needed to ensure that DOD's actions were effective. While DOD monitors some aspects of the system's implementation, it does not monitor how or the extent to which the safeguards specifically are implemented across the department. As a result, decision makers lack information that could be used to determine whether the department's actions are effective and whether the system is being implemented in a fair, equitable, and credible manner. DOD civilian personnel have mixed perceptions about NSPS, and while the department has taken some steps toward addressing employee concerns, it has not yet developed and implemented an action plan to address areas where employees express negative perceptions of the system, as GAO recommended in 2008. DOD's survey data from 2008 revealed that overall, NSPS employees responded positively about some aspects of performance management, such as connecting pay to performance, and negatively about others, such as the performance appraisal process. According to the most recent survey data, the negative perceptions of employees who worked under NSPS the longest remain largely unchanged from what was reported by GAO in 2008. Further, as GAO reported in 2008, employees and supervisors continue to express negative perceptions in discussion groups about NSPS--for example, voicing concerns about the negative impact of NSPS on employees' motivation and morale, and about the excessive amount of time spent navigating the performance management process. Such negative perceptions are not surprising given that large-scale organizational transformations often require an adjustment period to gain employees' trust and acceptance. 
DOD has taken some steps to address employees' perceptions of NSPS--for example, by issuing a memorandum with suggested actions the components could take to address employee concerns. However, DOD has not yet developed and implemented an action plan that fully meets the intent of GAO's 2008 recommendation. Specifically, DOD has not yet specified such things as its intended actions, who will be responsible, and the time frames for these actions. GAO continues to believe that implementing such a plan has merit.
According to HHS, widespread use of health information technology could improve the quality of care received by patients and reduce health care costs. One such technology, electronic prescribing, can be used, for example, to electronically transmit a prescription or prescription-related information between a health care provider and a pharmacy or to provide other technological capabilities, such as alerting a provider to a potential interaction between a drug and the patient’s existing medications. In traditional, or paper-based, prescribing, health care providers that are licensed to issue prescriptions for drugs (e.g., physicians or others licensed by the state) write a prescription and either call it in to a dispenser (e.g., a pharmacy) or have the patient take the prescription to the dispenser to be filled. In contrast, use of an electronic prescribing system consists of a licensed health care provider using a computer or hand-held device to write and transmit a prescription directly to the dispenser. Before doing so, the health care provider can request the beneficiary’s eligibility, formulary, benefits, and medication history. Figure 1 illustrates an example of the flow of information during the electronic prescribing process. In order to transmit a prescription electronically, multiple entities need to have access to an individual’s identifiable health information in an electronic format. Federal laws and regulations dictate the acceptable use and disclosure activities that can be performed with individually identifiable health information, defined as protected health information (PHI). These activities include treatment, payment, health care operations, and—provided certain conditions are met—public health or research purposes. For example, electronic health information can be held by covered entities that perform treatment functions for directly providing clinical care to a patient through electronic prescribing.
These covered entities and business associates, such as medical professionals, pharmacies, health information networks, and pharmacy benefit managers, work together to gather and confirm patients’ electronic health information for prescribing, such as a beneficiary’s eligibility, formulary, benefits, and medication history. To electronically transmit prescription drug data between a health care provider and a pharmacy, an electronic health record can be used to obtain information about the health of an individual or the care provided by a health practitioner. In both paper-based and electronic prescribing, information is also provided to the individual’s health plan for payment, which would include the identification of the beneficiary, the pharmacy, and the drug cost information. In the case of Medicare beneficiaries’ prescription drug data, the information is provided to CMS for Part D payment calculations. Every time a beneficiary fills a prescription under Medicare Part D, a prescription drug plan sponsor must submit a summary record called prescription drug event data to CMS. The prescription drug event data record contains PHI, such as date of birth, the pharmacy that filled the prescription, and the drug dispensed, that enables CMS to make payments to plans. Appendix II provides a summary of the permitted uses and disclosures of PHI. Under certain circumstances, PHI, including prescription drug use information, can be used for purposes not related to directly providing clinical care to an individual. For example, CMS makes Medicare beneficiaries’ prescription drug event data available for use in research studies. Release of these elements outside of CMS must be in accordance with its policies and data-sharing procedures. For example, in order to obtain access to this information interested parties must send in an application and submit a user agreement. 
Table 1 provides other examples of using prescription drug use data for purposes other than directly providing clinical care. Depending on the nature of the use, the prescription drug use information is used and transmitted in identifiable form or in de-identified format, which involves the removal of PHI (e.g., name, date of birth, and Social Security number) that can be used to identify an individual. Key privacy and security protections associated with individually identifiable health information, including prescription drug information used for purposes other than directly providing clinical care, are established in two federal laws, HIPAA and the HITECH Act. Recognizing that benefits and efficiencies could be gained by the use of information technology in health care, as well as the importance of protecting the privacy of health information, Congress passed HIPAA in 1996. Under HIPAA, the Secretary of HHS is authorized to promulgate regulations that establish standards to protect the privacy of certain health information and is also required to establish security standards that require covered entities that maintain or transmit health information to maintain reasonable and appropriate safeguards. HIPAA’s Administrative Simplification Provisions provided for the establishment of national privacy and security standards, as well as the establishment of civil money and criminal penalties for HIPAA violations. HHS promulgated regulations implementing the act’s provisions through its issuance of the HIPAA rules: the Privacy Rule, the Security Rule, and the Enforcement Rule. The rules cover PHI and require that covered entities only use or disclose the information in a manner permitted by the Privacy Rule, and take certain measures to ensure the confidentiality and integrity of the information and to protect it against reasonably anticipated unauthorized use or disclosure and threats or hazards to its security.
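The de-identification described above, that is, stripping identifiers such as name, date of birth, and Social Security number from a record before it is released for research or other secondary uses, can be sketched in simplified form. In the illustration below, the record structure and field names are hypothetical assumptions for illustration only, not CMS's actual prescription drug event data layout; note also that full HIPAA de-identification (for example, the Privacy Rule's Safe Harbor method, which requires removal of 18 categories of identifiers) involves considerably more than this minimal sketch.

```python
# Hypothetical illustration of de-identifying a prescription drug
# record by removing direct identifiers before secondary use.
# Field names are illustrative assumptions, not an actual CMS layout.

# Direct identifiers to strip before release (a simplified subset)
DIRECT_IDENTIFIERS = {"name", "date_of_birth", "ssn"}

def de_identify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

# A hypothetical prescription drug event record
event = {
    "name": "Jane Doe",
    "date_of_birth": "1947-03-12",
    "ssn": "123-45-6789",
    "pharmacy_id": "PH-0042",
    "drug_dispensed": "lisinopril 10 mg",
    "drug_cost": 14.75,
}

released = de_identify(event)
print(sorted(released))  # direct identifiers removed; other fields remain
```

The remaining fields (pharmacy, drug, and cost) would still support analyses such as payment calculations or research, while the direct identifiers are no longer present in the released record.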
HIPAA provides authority to the Secretary to enforce these standards. The Enforcement Rule provides rules governing HHS’s investigation of compliance by covered entities, both through the investigation of complaints and the conduct of compliance reviews, and also establishes rules governing the process and grounds for establishing the amount of a civil money penalty for a HIPAA violation. The Secretary has delegated administration and enforcement of privacy and security standards to the department’s Office for Civil Rights (OCR). The HITECH Act, enacted as part of the American Recovery and Reinvestment Act of 2009 (Recovery Act), is intended to promote the adoption and meaningful use of health information technology to help improve health care delivery and patient care. The act adopts amendments designed to strengthen the privacy and security protections of health information established by HIPAA and also adopts provisions designed to strengthen and expand HIPAA’s enforcement provisions. Table 2 below provides a brief overview of the HITECH Act’s key provisions for strengthening HIPAA privacy and security protection requirements. Under the HITECH Act, the Secretary of HHS has significant responsibilities for enhancing existing enforcement efforts, providing public education related to HIPAA protections, and providing for periodic audits to ensure HIPAA compliance. In implementing the act’s requirements, OCR’s oversight and enforcement efforts are to be documented and reported annually to Congress. These annual reports provide information regarding complaints of alleged HIPAA violations and the measures taken to resolve the complaints. These reports and other related information are required by the HITECH Act to be made publicly available on HHS’s website. 
In response to requirements set forth in HIPAA and the HITECH Act, HHS, through OCR, has established a framework for protecting the privacy and security of individually identifiable health information, including Medicare beneficiaries’ prescription drug use information used for purposes other than directly providing clinical care. This framework includes (1) establishing regulatory requirements, (2) issuing guidance and performing outreach efforts, and (3) conducting enforcement activities to ensure compliance with the rules. However, OCR has not issued required guidance to assist entities in de-identifying individually identifiable health information due to—according to officials—competing priorities for resources and internal and external reviews. Furthermore, although it has recently initiated a pilot audit program, the office has not implemented periodic compliance audits as required by the HITECH Act. Until these requirements are fulfilled, OCR will have limited assurance that covered entities and business associates are complying with HIPAA regulations. The Secretary of HHS issued regulations, such as the HIPAA rules, that implement HIPAA requirements and amendments required by the HITECH Act to govern the privacy and security of individually identifiable health information, known as PHI. These rules establish the required protections and acceptable uses and disclosures of individually identifiable health information, including Medicare beneficiaries’ prescription drug use information. HIPAA provided for the Secretary of HHS to, among other things, (1) issue privacy regulations governing the use and disclosure of PHI and (2) adopt security regulations requiring covered entities to maintain reasonable and appropriate technical, administrative, and physical safeguards to protect the information. In December 2000, to address the privacy regulation requirement, HHS issued the Privacy Rule. The Privacy Rule regulates covered entities’ use and disclosure of PHI. 
Under the Privacy Rule, a covered entity may not use or disclose an individual’s PHI without the individual’s written authorization, except in certain circumstances expressly permitted by the Privacy Rule. The Privacy Rule reflects basic privacy principles for ensuring the protection of personal health information, as summarized in table 3. The Privacy Rule generally requires that a covered entity make reasonable efforts to use, disclose, or request only the minimum necessary PHI to accomplish the intended purpose. Further, the Privacy Rule establishes methods for de-identifying PHI. Under the rule, once identifiers are removed from a data set, it is no longer considered individually identifiable health information and the HIPAA protections no longer apply. De-identification provides a mechanism for reducing the amount of PHI used and disclosed. The Privacy Rule establishes two ways in which PHI can be de-identified. The Safe Harbor method requires the removal of 18 unique types of identifiers from a data set, coupled with no actual knowledge that the remaining data could be used to re-identify an individual, either alone or in combination with other information. The expert determination method requires a qualified statistician or other appropriate expert, using generally accepted statistical and scientific principles, to determine that the risk is very small that an individual could be identified from the information when used alone or in combination with other reasonably available information. In February 2003, to implement HIPAA security requirements for protecting PHI, HHS issued the HIPAA Security Rule.
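In practical terms, the Safe Harbor method amounts to stripping the Privacy Rule's enumerated identifier types from each record before release. The following minimal Python sketch illustrates that idea only; the field names are hypothetical, and the set below loosely groups, rather than faithfully reproduces, the rule's actual 18 identifier categories (which include, for example, all geographic subdivisions smaller than a state and all date elements more specific than year).

```python
# Illustrative sketch of Safe Harbor-style de-identification.
# Field names are hypothetical; the set below is a loose grouping of
# the Privacy Rule's 18 identifier categories, not the legal list.

SAFE_HARBOR_IDENTIFIERS = {
    "name", "street_address", "city", "zip_code", "birth_date",
    "phone", "fax", "email", "ssn", "medical_record_number",
    "health_plan_id", "account_number", "license_number",
    "vehicle_id", "device_id", "url", "ip_address",
    "biometric_id", "photo",
}

def de_identify(record: dict) -> dict:
    """Return a copy of the record with identifier fields removed."""
    return {k: v for k, v in record.items() if k not in SAFE_HARBOR_IDENTIFIERS}

record = {
    "name": "Jane Doe",
    "ssn": "000-00-0000",
    "birth_date": "1948-03-12",
    "drug_dispensed": "atorvastatin 20 mg",
    "pharmacy_state": "VA",
}
print(de_identify(record))
# Identifier fields are stripped; clinical fields such as the drug
# dispensed and the pharmacy's state (a state-level geographic unit,
# which Safe Harbor permits) remain.
```

Note that under the rule, removing the listed fields is necessary but not sufficient: the covered entity must also have no actual knowledge that the remaining data could re-identify an individual.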
To ensure that reasonable safeguards are in place to protect electronic PHI, including Medicare beneficiaries’ health information, from unauthorized access or disclosure, the Security Rule specifies a series of administrative, technical, and physical safeguards for covered entities to implement to ensure the confidentiality, integrity, and availability of electronic PHI. Table 4 summarizes these security safeguards. The Security Rule, which applies only to PHI in electronic form, states that covered entities have the flexibility to use any security measures that allow them to reasonably and appropriately implement specified standards. Specifically, the rule states that in deciding what security measures are appropriate, the covered entity must take into account elements such as its size, complexity, technical infrastructure, cost of security measures, and the probability and criticality of potential risks to its PHI. The HITECH Act set additional requirements for the Secretary of HHS and expanded and strengthened certain privacy and security requirements mandated under HIPAA and the HIPAA rules. Specifically, to implement provisions of the HITECH Act, the Secretary was required to (1) issue breach notification regulations to require covered entities and business associates under HIPAA to provide notification to affected individuals and the Secretary concerning the unauthorized use and disclosure of unsecured PHI; (2) establish enforcement provisions for imposing an increased tiered structure for civil money penalties for violations of the Privacy and Security Rules; and (3) extend certain Privacy and Security Rule requirements to business associates of covered entities. Such required activities are intended to strengthen protections for PHI, including Medicare beneficiaries’ prescription drug use information. 
To implement these provisions of the act, OCR issued two interim final rules—the Breach Notification for Unsecured Protected Health Information Rule, known as the “Breach Notification Rule,” and the HITECH Act Enforcement Rule—and has developed a draft rule intended to, among other things, extend the applicability of certain requirements of the Privacy and Security Rules to business associates. OCR issued the Breach Notification Rule in August 2009. This rule contains detailed requirements for HIPAA-covered entities and business associates to notify affected individuals and the Secretary following the discovery of a breach of unsecured PHI. In October 2009, OCR issued the HITECH Enforcement Rule, which amends the HIPAA rules to incorporate HITECH Act provisions establishing categories of violations based on increasing levels of culpability and correspondingly increased tiered ranges of civil money penalty amounts. Subsequently, in July 2010, OCR issued a notice of proposed rulemaking to modify the HIPAA Privacy, Security, and Enforcement Rules to implement other provisions of the HITECH Act. According to the OCR website, the proposed rule is intended to, among other things, extend the applicability of certain Privacy and Security Rule requirements to the business associates of covered entities; strengthen limitations on the use or disclosure of PHI for marketing and fundraising and prohibit the sale of PHI; and expand individuals’ rights to access their information and obtain restrictions on certain disclosures of protected health information to health plans. According to OCR officials, the proposed rule is currently under review by the Office of Management and Budget (OMB), and OCR officials have not determined an estimated time frame for its issuance.
The HITECH Act also requires HHS to educate members of the public about how their PHI, which may include Medicare beneficiaries’ prescription drug use information, may be used. In addition, the HITECH Act requires HHS to provide guidance for covered entities on implementing HIPAA requirements for de-identifying data—that is, taking steps to ensure the data cannot be linked to a specific individual. Specifically, the act requires HHS to provide information to educate individuals about the potential uses of PHI, the effects of such uses, and the rights of individuals with respect to such uses. In addition—to clarify the de-identification methods established in the HIPAA Privacy Rule—the HITECH Act required OCR to produce guidance by February 2010 on how best to implement the HIPAA Privacy Rule requirements for the de-identification of protected health information. Since the rules were issued, OCR has undertaken an array of efforts to implement the HITECH Act’s requirements and to promote awareness of the general uses of PHI and the privacy and security protections afforded to identifiable information. For example, the office has made various types of information resources publicly available. Through its website, the office provides a central hub of resources related to HIPAA regulations, ranging from guidance to consumers on their rights and protections under the HIPAA rules to compliance guidance for covered entities. More specifically, the office has developed resources to guide covered entities and business associates in implementing the provisions of the Privacy and Security Rules, including, among other things, examples of business associate contract provisions for sharing PHI, answers to commonly asked questions, summaries of the HIPAA rules, and information on regional privacy officers designated to offer guidance and education assistance to entities and individuals on rights and responsibilities related to the Privacy and Security Rules.
Table 5 below provides a brief overview of OCR’s guidance and education outreach activities in regard to their target audience, purpose, and guidance materials. In another effort to promote awareness, OCR, in conjunction with the Office of the National Coordinator for Health Information Technology, established a Privacy and Security Toolkit to provide guidance on privacy and security practices for covered entities that electronically exchange health information in a network environment. The toolkit was developed to implement the Nationwide Privacy and Security Framework for Electronic Exchange of Individually Identifiable Health Information, also known as the Privacy and Security Framework, and includes tools to facilitate the implementation of these practices to protect PHI. The toolkit’s guidance includes, among other things, security guidelines to assist small health care practices as they become more reliant on health information technology, as well as facts and template examples for developing notices that inform consumers about a company’s privacy and security policies in a web-based environment. Although OCR has initiated these efforts to fulfill its responsibilities to promote awareness of allowable uses and provide guidance for complying with required protections under the HITECH Act, it has yet to publish HITECH Act guidance on implementing HIPAA de-identification methods, which was to be issued by February 2010. OCR officials stated that they have developed a draft of the de-identification guidance, but have not set an estimated issuance date. According to the officials, the draft guidance was developed based on the office’s solicitation of best practices and guidelines from multiple venues and forums, including a workshop panel discussion with industry experts in March 2010 that included discussions on best practices and risks associated with de-identifying PHI.
The officials stated that the guidance will explain and answer questions about de-identification methods as well as clarify guidelines for conducting the expert determination method of de-identification to reduce entities’ reliance on the Safe Harbor method. The issuance of such implementation guidance could provide covered entities—including those that rely on de-identified prescription drug use information for purposes other than directly providing clinical care—with guidelines and leading practices for properly de-identifying PHI in accordance with Privacy Rule requirements. According to OCR officials, competing priorities for resources and internal reviews have delayed the issuance of the guidance. Officials stated that the draft is currently under governmentwide review. Although officials stated that the guidance will be issued upon completion of the review, no estimated time frame has been set. Until this guidance is issued, increased risk exists that covered entities are not properly implementing the standards set by the HIPAA Privacy Rule and that identifiers are not properly removed from PHI. Federal laws authorize HHS to take steps to ensure that covered entities comply with HIPAA privacy and security requirements targeted toward protecting patient data, including Medicare beneficiaries’ prescription drug use information. Specifically, HHS has authority to enforce compliance with the Privacy and Security Rules in response to, among other things, (1) complaints reporting potential privacy and security violations and (2) data breach notifications submitted by covered entities. Furthermore, the HITECH Act increased HHS’s oversight responsibilities by requiring the department to perform periodic audits to ensure covered entities and business associates are complying with the Privacy and Security Rules and breach notification standards.
OCR has developed and implemented an enforcement process that is focused on conducting investigations in response to actions that potentially violate the Privacy and Security Rules. According to OCR officials, the office opens investigations in response to submitted complaints and data breach notifications, and also conducts compliance reviews based on other reports of potential violations of which the department becomes aware. If necessary, it then requires covered entities to make changes to their privacy and security practices. OCR receives thousands of complaints and breach notifications each year. Officials stated that these complaints and notifications are reviewed to determine if they are eligible for enforcement and require an OCR investigation. According to information provided by OCR, from 2006 to 2010 the office received, on average, about 8,000 Privacy and Security Rule complaints each year. OCR officials reported that as of February 2012, the office had conducted investigations of approximately 24,000 complaints alleging compliance violations of the Privacy or Security Rule, resulting in corrective actions by covered entities in 66 percent of the cases. Corrective actions have included training or sanctioning employees, revising policies and procedures, and mitigating any alleged harm. According to OCR’s annual report to Congress on HIPAA Privacy and Security Rule compliance, in instances where an investigation resulted in a determination that a violation of the Privacy or Security Rule occurred, the office first attempted to resolve the case informally by obtaining voluntary compliance through corrective action. The compliance issues investigated most often include impermissible uses and disclosures of PHI, lack of safeguards for PHI, and lack of patient access to PHI. As of May 2012, OCR investigations had resulted in the issuance of a resolution agreement in eight cases.
According to OCR officials, a resolution agreement is a formal agreement between OCR and the investigated entity and is used to settle investigations with more serious outcomes. A resolution agreement is a contract signed by HHS and a covered entity in which the covered entity agrees to perform corrective actions (e.g., staff training), submit progress reports to HHS (generally for a period of 3 years), and—in some cases—pay a monetary fine. The eight resolution agreements entered into with the investigated entities all included payment of a resolution amount and the development or revision of policies and procedures. In six of these cases, further submission of compliance reports or compliance monitoring was required for 2 to 3 years. For example, in response to complaints that several patients’ electronic PHI was viewed without permission by university health system employees, OCR initiated an investigation, which revealed that unauthorized employees repeatedly looked at the electronic PHI of numerous patients. The university health system agreed to settle potential violations of the Privacy and Security Rules by committing to a corrective action plan and paying approximately $865,000. When a covered entity does not cooperate with an OCR investigation or take action to resolve a violation, the office also has the authority to impose a civil money penalty. OCR can levy civil money penalties for failure to comply with the requirements of the Privacy Rule, Security Rule, and Breach Notification Rule. Penalties are tiered across four categories based on the level of culpability; within each category, the maximum penalty is $50,000 per violation, and for multiple violations of an identical provision in a calendar year, the maximum penalty in each category is $1.5 million. As of May 2012, OCR had issued one civil money penalty for noncompliance, in the amount of $4.3 million.
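The penalty structure just described reduces to a per-violation maximum combined with an annual cap for identical-provision violations. A small sketch of that arithmetic follows; it is illustrative only, since the actual amount OCR assesses within each culpability tier is determined case by case, and the per-violation figure passed in here is a hypothetical input.

```python
# Hedged sketch of the HITECH civil money penalty caps described above:
# within each culpability category, up to $50,000 per violation, capped
# at $1.5 million for identical-provision violations in a calendar year.

PER_VIOLATION_MAX = 50_000
ANNUAL_IDENTICAL_CAP = 1_500_000

def max_annual_penalty(violation_count: int,
                       per_violation: int = PER_VIOLATION_MAX) -> int:
    """Upper bound on penalties for identical-provision violations
    of one HIPAA provision within a single calendar year."""
    per_violation = min(per_violation, PER_VIOLATION_MAX)
    return min(violation_count * per_violation, ANNUAL_IDENTICAL_CAP)

print(max_annual_penalty(10))   # 500000
print(max_annual_penalty(100))  # 1500000 (annual cap reached)
```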
Since February 2010, pursuant to the HITECH Act, OCR has received and used the money from settlement amounts and civil money penalties for enforcement of the HIPAA rules. In June 2011, OCR initiated efforts to conduct pilot audits of 150 covered entities by the end of December 2012. The office contracted with a private firm to identify the population of covered entities from which to select audit candidates. Additionally, the office contracted with a private audit firm to develop the initial audit procedures for covered entities. These procedures—which OCR documentation asserts are to be in accordance with generally accepted government auditing standards—are composed of the requirements from the Privacy, Security, and Breach Notification Rules, which include protections afforded to prescription drug use information and uses of it for purposes other than directly providing clinical care. In January 2012, OCR officials stated that the target number of audits to be completed was revised to 115. According to OCR documentation, during the pilot each audit is conducted based on the following steps:

1. An audit is initiated with the selected covered entity being informed by OCR of its selection and asked to provide documentation of its privacy, security, and breach notification compliance efforts to the contracted auditors.

2. Contracted auditors use the audit procedures developed to assess the compliance activities of the covered entity. According to officials and documentation provided, these procedures correspond to the requirements of the Privacy, Security, and Breach Notification Rules. In this pilot phase, every audit will include a documentation review and site visit.

3. Contracted auditors will provide the audited covered entity the draft findings within 30 days after conclusion of the field work.

4. Audited entities will have 10 days to provide the audit contractor with comments and outline corrective actions planned or taken.

5. Contracted auditors will develop a final audit report to submit to OCR within 30 days of receipt of the comments. The final report will describe how the audit was conducted, what the findings were, and what actions the covered entity is taking in response to those findings, as well as describe any best practices of the entity.

According to OCR officials, an initial set of 20 pilot audits was completed by March 2012. Officials stated that these initial audits resulted in the identification of both privacy and security issues at covered entities, such as potential impermissible uses and disclosures and failures to appropriately review audit logs and other reports monitoring activity on information systems. OCR officials stated that the remaining 95 pilot audits, 25 of which were initiated in April 2012, will be completed by the end of December 2012. However, OCR has yet to establish plans for (1) continuing the audit program once the audit pilot finishes in December 2012 and (2) auditing business associates for privacy and security compliance. According to OCR officials, the dedicated Recovery Act funding for the office’s audit effort will expire at the end of December 2012, and officials stated that they have not yet finalized a decision on the future of the program, including the manner in which an audit process will need to be designed to address compliance by business associates. OCR officials stated that the office plans to award a contract in 2012 for a review of the pilot program, including a sample of audits completed during the pilot. OCR officials anticipate that this review will help determine how the office can fully implement an audit function. Implementing a sustained audit program could allow OCR to help covered entities and business associates identify and mitigate risks and vulnerabilities that may not be identified through OCR’s current reactive processes.
Furthermore, inclusion of business associates in such a program is important because, according to OCR data, more than 20 percent of data breaches affecting over 500 individuals that were reported to OCR involved business associates. Without a plan for deploying a sustained audit capability on an ongoing basis, OCR will lack the ability to ensure that covered entities and business associates are complying with HIPAA regulations, including properly de-identifying PHI when data on prescription drug use are used for purposes other than directly providing clinical care. Through its issuance of regulations, outreach, and enforcement activities, HHS has established a framework for protecting the privacy and security of Medicare beneficiaries’ prescription drug use information when used for purposes other than directly providing clinical care. It has also promoted public awareness of the uses and disclosures of PHI through its education and outreach activities. Further, OCR has established and implemented a process to enforce provisions of the HIPAA Privacy and Security Rules through investigations. However, it has not issued required implementation guidance to assist entities in de-identifying PHI. By not issuing the guidance, increased risk exists that covered entities are not properly implementing the standards set by the HIPAA Privacy Rule and that PHI is not properly stripped of all identifiers that would identify an individual. In addition, OCR has not fully established a capability to proactively monitor covered entities’ compliance through the use of periodic audits as required by the HITECH Act. Specifically, OCR has yet to establish plans for a sustained audit capability upon completion of its pilot program at the end of calendar year 2012 and has yet to determine how to include business associates in its audits.
Without a plan for deploying a sustained audit capability on an ongoing basis, OCR will have limited assurance that covered entities and business associates are complying with HIPAA regulations, including whether Medicare beneficiaries’ prescription drug use information, when used for purposes other than directly providing clinical care, is being appropriately safeguarded from compromise. To improve the department’s guidance and oversight efforts for ensuring the privacy and security of protected health information, including Medicare beneficiaries’ prescription drug use information, we recommend that the Secretary of HHS direct the Director of the Office for Civil Rights to take the following two actions:

• Issue guidance on properly implementing the HIPAA Privacy Rule requirements for the de-identification of protected health information.

• Establish plans for conducting periodic audits to ensure covered entities and business associates are complying with the HIPAA Privacy and Security Rules and breach notification standards.

In written comments on a draft of the report, the HHS Assistant Secretary for Legislation agreed with our two recommendations, but provided qualifying comments for both. HHS’s comments are reprinted in appendix III. Regarding our recommendation that OCR issue guidance on properly implementing the HIPAA Privacy Rule requirements for the de-identification of protected health information, the Assistant Secretary stated that while the department agrees that issuing the guidance will be helpful to covered entities, the department does not agree that without the guidance, covered entities will have limited assurance that they are complying with the HIPAA Privacy Rule de-identification standards.
The Assistant Secretary noted that covered entities have been operating under these existing de-identification standards for almost 10 years and that OCR has not found that the standards have been the subject of significant or frequent compliance issues by covered entities. The Assistant Secretary noted that OCR’s purpose in issuing the de-identification guidance was to provide covered entities with the current options and approaches available for de-identifying health information. We agree that the existing agency information on the de-identification standards provides a level of assurance that covered entities have the parameters and requirements needed to properly remove identifiers from PHI, and we have clarified this in our report. However, the HITECH Act requires HHS to issue de-identification implementation guidance that addresses how covered entities should implement the de-identification standards. OCR officials stated that the planned guidance will explain and answer questions about de-identification methods as well as clarify guidelines for conducting the expert determination method of de-identification to reduce entities’ reliance on the Safe Harbor method. Such information could assist covered entities in determining how to properly implement the de-identification methods. Until such implementation guidance is issued, increased risk exists that covered entities are not properly adhering to the standards set by the HIPAA Privacy Rule and that PHI is not properly stripped of all identifiers that would identify an individual.
Regarding our recommendation that OCR establish plans for conducting periodic audits to ensure covered entities and business associates are complying with the HIPAA Privacy and Security Rules and breach notification standards, the Assistant Secretary stated the department did not agree with our report’s conclusion that without such a plan, OCR will lack the ability to ensure that covered entities and business associates are complying with the HIPAA rules. Specifically, he stated that our conclusion did not adequately take into account the considerable impact of the thousands of complaint investigations, compliance reviews, and other enforcement activities OCR conducts annually to ensure covered entities are complying with the rules. He noted that although the audit function is a critical compliance tool for identifying vulnerabilities, the importance of the audit function should not be understood to diminish the effectiveness of OCR’s other enforcement activities for bringing about and enforcing compliance with the HIPAA rules. As our report highlighted, OCR has developed and implemented an enforcement process that is focused on responding to actions that potentially violate the Privacy and Security Rules. OCR conducts this reactive process through processing complaints and conducting thousands of investigations each year. An audit program is an important addition to OCR’s compliance program as it is a tool to identify vulnerabilities before they cause breaches and other incidents. Without the addition of a proactive process, such as an audit capability, OCR will have limited assurance that covered entities are complying with HIPAA regulations. HHS also provided technical comments on the report draft, which we addressed in the final report as appropriate. We will send copies of this report to other interested congressional committees and the Secretary of Health and Human Services. 
The report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-6244 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Our objective was to determine the extent to which the Department of Health and Human Services (HHS) has established a framework to ensure the privacy and security of Medicare beneficiaries’ protected health information (PHI) when data on prescription drug use are used for purposes other than their direct clinical care. To address our objective, we identified HHS’s and its Office for Civil Rights’ (OCR) responsibilities for protecting the privacy and security of PHI by reviewing and analyzing the Health Insurance Portability and Accountability Act (HIPAA), including the HIPAA Privacy and Security Rules; the Health Information Technology for Economic and Clinical Health (HITECH) Act; and applicable privacy best practices, such as the Fair Information Practices. To obtain information on OCR’s efforts in implementing HIPAA’s and the HITECH Act’s requirements, we reviewed and analyzed documentation related to the office’s public outreach and guidance efforts, enforcement practices, and regulations for covered entity and business associate compliance, provided by the office and through the department’s website, and compared those documents to the office’s statutory requirements. To obtain information on the office’s enforcement through complaint and breach notice investigations, we interviewed officials, reviewed agency-provided and public information, and analyzed agency documentation.
We conducted interviews with OCR officials to discuss the department’s approaches and future plans for addressing the protection and enforcement requirements of the HIPAA Privacy and Security Rules that applied to covered entities and business associates. We also analyzed plans and documentation provided by OCR officials that described enforcement and compliance activities for developing an audit mechanism and compared them with requirements for the audit program established in the HITECH Act. To describe the uses of prescription drug use data for purposes other than directly providing clinical care, we interviewed representatives from several covered entities, business associates, and medical associations, and reviewed the HIPAA Privacy Rule and academic publications. We conducted this performance audit at the Department of Health and Human Services in Washington, D.C., from August 2011 through June 2012, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Permitted uses and disclosures of PHI under the HIPAA Privacy Rule:
Treatment
Description: The provision, coordination, or management of health care and related services among health care providers or by a health care provider with a third party, consultation between health care providers regarding a patient, or the referral of a patient from one health care provider to another.
Example: A hospital may use protected health information about an individual to provide health care to the individual and may consult with other health care providers about the individual’s treatment. A hospital may send a patient’s health care instructions to a nursing home to which the patient is transferred.
Payment
Description: The various activities of health care providers to obtain payment or be reimbursed for their services and of a health plan to obtain premiums, to fulfill their coverage responsibilities and provide benefits under the plan, and to obtain or provide reimbursement for the provision of health care.
Example: A hospital emergency department may give a patient’s payment information to an ambulance service provider that transported the patient to the hospital in order for the ambulance provider to bill for its service.
Health care operations
Description: Certain administrative, financial, legal, and quality improvement activities of a covered entity, as defined in the Privacy Rule, that are necessary to run its business and to support the core functions of treatment and payment.
Example: Conducting quality assessment and improvement activities, and case management and care coordination. Business management and general administrative activities, including those related to implementing and complying with the Privacy Rule.
Marketing
Description: With certain exceptions, to make a communication about a product or service that encourages recipients of the communication to purchase or use the product or service. Marketing includes an arrangement between a covered entity and any other entity, whereby the covered entity discloses PHI to the other entity in exchange for direct or indirect remuneration, for the other entity or its affiliate to make a communication about a product or service that encourages recipients of the communication to purchase or use the product or service. With limited exceptions, such as for face-to-face communications, the Privacy Rule requires an individual’s written authorization before a use or disclosure of his or her PHI can be made for marketing.
Example: Needing an individual’s authorization: A health plan sells a list of its members to a company that sells blood glucose monitors, which intends to send the plan’s members brochures on the benefits of purchasing and using the monitors. Not needing an individual’s authorization: An insurance agent sells a health insurance policy in person to a customer and proceeds to also market a casualty and life insurance policy as well.
Public health activities
Description: Covered entities may disclose protected health information, without authorization, to public health authorities who are legally authorized to receive such reports for the purpose of preventing or controlling disease, injury, or disability.
Example: The social services department of a local government might have legal authority to receive reports of child abuse or neglect, in which case the Privacy Rule would permit a covered entity to report such cases to that authority without obtaining individual authorization.
Research
Description: A systematic investigation, including research development, testing, and evaluation, designed to develop or contribute to generalizable knowledge. To use or disclose protected health information without authorization by the research participant, a covered entity must obtain either: (1) institutional review board or privacy board waiver of authorization; (2) representations for a preparatory to research activity; (3) representations that the research is on the protected health information of decedents; or (4) a data use agreement with the recipient where only a limited data set is shared.
Example: Approval of a waiver of authorization by an Institutional Review Board or Privacy Board for research, such as for certain records research, when the Board has determined that the use or disclosure of protected health information involves no more than a minimal risk to the privacy of individuals, and the research could not practicably be conducted without the waiver and without access to the protected health information.
In addition to the contact above, John de Ferrari, Assistant Director; Nick Marinos, Assistant Director; Sherrie Bacon; Marisol Cruz; Wilfred Holloway; Lee McCracken; Monica Perez-Nelson; Matthew Snyder; Daniel Swartz; and Jeffrey Woodward made key contributions to this report.
Prescribing medications and filling those prescriptions increasingly relies on the electronic collection of individuals’ health information and its exchange among health care providers, pharmacies, and other parties. While this can enhance efficiency and accuracy, it also raises privacy and security concerns. Federal law establishes the authority for the Secretary of HHS to develop standards for protecting individuals’ health information (including Medicare beneficiaries’) and to ensure that covered entities (such as health care providers and pharmacies) and their business associates comply with these requirements. The Medicare Improvements for Patients and Providers Act of 2008 required GAO to report on prescription drug use data protections. GAO’s specific objective for this review was to determine the extent to which HHS has established a framework to ensure the privacy and security of Medicare beneficiaries’ protected health information when data on prescription drug use are used for purposes other than direct clinical care. To do this, GAO reviewed HHS policies and other related documentation and interviewed agency officials. While the Department of Health and Human Services (HHS) has established a framework for protecting the privacy and security of Medicare beneficiaries’ prescription drug use information when used for purposes other than direct clinical care through its issuance of regulations, outreach, and enforcement activities, it has not issued all required guidance or fully implemented required oversight capabilities. HHS has issued regulations, including the Health Insurance Portability and Accountability Act (HIPAA) Privacy and Security Rules, to safeguard protected health information from unauthorized use and disclosure. Through its Office for Civil Rights (OCR), HHS has undertaken a variety of outreach and educational efforts to inform members of the public and covered entities about the uses of protected health information. 
Specifically, OCR has made available on its website guidance and other materials informing the public about the uses to which their personal information may be put and the protections afforded to that information by federal laws. It has also made available guidance to covered entities and their business associates that is intended to promote compliance with the HIPAA Privacy and Security Rules. However, HHS has not issued required implementation guidance to assist entities in de-identifying personal health information, including when it is used for purposes other than directly providing clinical care to an individual. De-identification means ensuring that data cannot be linked to a particular individual, either by removing certain unique identifiers or by applying a statistical method to ensure that the risk is very small that an individual could be identified. According to OCR officials, the completion of the guidance, required by statute to be issued by February 2010, was delayed due to competing priorities for resources and internal reviews. Until the guidance is issued, increased risk exists that covered entities are not properly implementing the standards set forth by federal regulations for de-identifying protected health information. Additionally, in enforcing compliance with the HIPAA Privacy and Security Rules, OCR has established an investigations process for responding to reported violations of the rules. Specifically, the office annually receives thousands of complaints from individuals and notices of data breaches from covered entities, and initiates investigations as appropriate. If it finds that a violation has occurred, the office can require covered entities to take corrective action and pay fines and penalties. 
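The identifier-removal approach described above can be sketched in a few lines of code. This is an illustrative sketch only: the record fields and the small set of identifiers below are hypothetical, and the Privacy Rule's actual identifier-removal standard enumerates 18 categories of identifiers plus additional conditions.

```python
# Illustrative sketch of identifier-removal de-identification (hypothetical
# record layout; the Privacy Rule's actual standard enumerates 18 identifier
# categories and additional conditions).

DIRECT_IDENTIFIERS = {"name", "ssn", "phone", "email", "medical_record_number"}

def deidentify(record):
    """Return a copy of the record with direct identifiers removed and
    quasi-identifiers generalized (dates truncated to year, ZIP to 3 digits)."""
    out = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS:
            continue  # drop direct identifiers entirely
        if field == "birth_date":
            out["birth_year"] = value[:4]  # keep only the year
        elif field == "zip":
            out["zip3"] = value[:3]  # generalize ZIP code to first 3 digits
        else:
            out[field] = value
    return out

record = {"name": "Jane Doe", "ssn": "123-45-6789", "birth_date": "1948-07-12",
          "zip": "20548", "drug_code": "NDC-0002-8215"}
print(deidentify(record))
# → {'birth_year': '1948', 'zip3': '205', 'drug_code': 'NDC-0002-8215'}
```

The alternative path the rule describes, applying a statistical method, would instead require an expert determination that the re-identification risk is very small; that analysis is not captured by a simple field-dropping sketch like this one.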
HHS was also required by law to implement periodic audits of covered entities’ compliance with HHS privacy and security requirements; however, while it has initiated a pilot program for conducting such audits, it does not have plans for establishing a sustained audit capability. According to OCR officials, the office has completed 20 audits and plans to complete 95 more by the end of December 2012, but it has not established plans for continuing the audit program after the completion of the pilots or for auditing covered entities’ business associates. Without a plan for establishing an ongoing audit capability, OCR will have limited assurance that covered entities and business associates are complying with requirements for protecting the privacy and security of individuals’ personal health information. GAO recommends that HHS issue de-identification guidance and establish a plan for a sustained audit capability. HHS generally agreed with both recommendations but disagreed with GAO’s assessment of the impacts of the missing guidance and lack of an audit capability. In finalizing its report, GAO qualified these statements as appropriate.
DOD has traditionally approached the acquisition of services differently than the acquisition of products, focusing its attention, policies, and procedures on managing major weapon systems, which it typically does by using the cost of the weapon system as a proxy for risk. For example, DOD classifies its acquisition programs, including research and development efforts related to weapon systems, in categories based upon estimated dollar value or designation as a special interest. The largest programs generally fall under the responsibility of the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)), while less complex and risky programs are overseen by the service or component acquisition executive. As of December 2015, DOD managed 78 major defense acquisition programs on which it planned to invest $1.46 trillion over the life of these programs. These 78 programs will require just over one quarter of all DOD’s development and procurement funding over the next 5 years. Conversely, we previously reported that DOD’s approach to buying services is largely fragmented and uncoordinated, as responsibility for acquiring services is spread among individual military commands, weapon system program offices, or functional units on military installations, with little visibility or control at the DOD or military department level. DOD’s January 2016 instruction reiterates that the acquisition of contracted services is a command responsibility. As such, the instruction notes that unit, organization, and installation commanders are responsible for the appropriate, efficient, and effective acquisition of contracted services by their organizations. Services differ from products in several respects and can pose challenges in defining requirements, establishing measurable and performance-based outcomes, and assessing contractor performance. 
For example, it can easily take over 10 years to define requirements and develop a product like a weapon system before it can be delivered for field use. Individual service acquisitions generally proceed through requirements, solution, and delivery more rapidly. Further, delivery of services generally begins immediately or very shortly after the contract is finalized. Over the past 15 years, Congress and DOD have identified actions intended to improve, among other things, service acquisition planning, tracking, and oversight (see figure 1). Since 2013, DOD has taken several additional actions to help improve the acquisition and management of services. For example, in April 2013, the USD(AT&L) appointed the Principal Deputy Under Secretary of Defense for Acquisition, Technology, and Logistics as the Senior DOD Manager for Services Acquisition. Subsequently, in May 2013, DPAP established a Services Acquisition directorate, which is responsible for DOD-level oversight of services as part of its responsibilities; DPAP-SA was the principal author of DOD’s January 2016 instruction. DPAP-SA also leads the Services Acquisition Functional Integrated Product Team, which creates services-acquisition training and tools and provides a forum to share best practices and lessons learned. The Services Acquisition Functional Integrated Product Team comprises representatives from DPAP-SA, the Defense Acquisition University, SSMs, and others from the military departments and defense agencies. The January 2016 instruction calls for the strategic management of the acquisition of contracted services. The instruction establishes policy, assigns responsibilities, provides direction for the acquisition of contracted services, and establishes and implements a hierarchical management structure for the acquisition of contracted services, including service categories, thresholds and decision authorities, and an SRRB framework. 
The instruction identifies three key leadership positions—FDEs, CLLs, and SSMs—to strategically manage and oversee services. These actions were driven by evidence that DOD was increasingly reliant on contracted services, including complex services such as engineering support, and was obligating more of its contracting dollars on services than on products. As we noted in February 2016, DOD’s obligations in fiscal year 2014 on its three largest services—knowledge-based, research and development, and facility-related services—were more than double the amount DOD obligated for aircraft, land vehicles, and ships, the three largest product categories DOD acquired. GAO has issued a series of reports that assessed leading commercial practices and DOD’s efforts to improve how it acquires contracted services. In our January 2002 report on commercial practices, for example, we reported that leading companies had examined alternative ways to manage their service spending to stay competitive, respond to market and stockholder pressures, and deal with economic downturns in key overseas markets. In looking at their service acquisitions, these companies discovered that they did not have a good grasp of how much was actually being spent and where these dollars were going. These companies also found that responsibility for acquiring services resided largely with individual business units or functions—such as finance, human resources, manufacturing, engineering, or maintenance—which hindered efforts to coordinate purchases across the company. The companies realized that they lacked the tools needed to make sure that the services they purchased met their business needs at the best overall value. 
We reported that such challenges were similar to those being experienced by DOD at the time—responsibility for acquiring services was spread among individual military commands, weapon system program offices, or functional units on military bases, with little visibility or control at the DOD or military department level over these acquisitions. The companies we reviewed instituted a series of structural, process, and role changes aimed at moving away from a fragmented acquisition process to a more efficient and effective enterprise-wide process. For example, they often established or expanded the role of corporate procurement organizations to help business managers acquire key services and made extensive use of cross-functional teams to help the companies better identify service needs, select providers, and manage contractor performance. Some companies found that, in establishing new procurement processes, they needed to overcome resistance from individual business units reluctant to share decision-making responsibility and to involve staff that traditionally did not communicate with each other. To do so, the companies found they needed to have sustained commitment from their senior leadership; to clearly communicate the rationale, goals, and expected results from the reengineering efforts; and to measure whether the changes were having their intended effects. We concluded that the strategic approach taken by the leading firms we reviewed could serve as a general framework to guide DOD’s service contracting initiatives. We noted, however, that DOD might find that a “one-size-fits-all” approach would not work for all services and that it would need to tailor its approach to meet its specific needs and requirements. DOD officials acknowledged that some services were acquired department-wide, while other services (such as ship support and maintenance) were unique to specific commands, units, or geographic locations. 
DOD officials agreed that they would need, as a first step, to obtain and analyze data on DOD’s service spending to identify and prioritize specific services where a more coordinated acquisition approach may be appropriate. Since that January 2002 report, we have issued several reports that examined DOD’s efforts to implement a management structure and address other issues affecting service acquisitions, as illustrated by the following examples: In September 2003, we reported that while DOD and the military departments each had a management structure in place for reviewing individual service acquisitions valued at $500 million or more, that approach did not provide a department-wide assessment of how spending for services could be more effective. In November 2006, we reported that DOD’s approach to managing service acquisitions tended to be reactive and had not fully addressed the key factors for success at either the strategic or transactional level. At the strategic level, DOD had not set the direction or vision for what it needed, determined how to go about meeting those needs, captured the knowledge to enable more informed decisions, or assessed the resources it had to ensure department-wide goals and objectives were achieved. In June 2013, we reported that USD(AT&L) and military department leadership had demonstrated a commitment to improving management of service acquisition, but that they faced challenges in developing goals and metrics to assess outcomes due to limitations with corroborating data between their contracting and financial data systems. We recommended that DOD establish baseline data, specific goals for improving service acquisitions, and associated metrics to assess its progress. DOD concurred with the three recommendations. 
Most recently, in our February 2016 report, we found, among other things, that DOD program offices we reviewed generally maintained data on current and estimated future spending needs for contracted service requirements, but did not identify spending needs beyond the budget year, since there was no requirement to do so. This limited DOD leadership’s insight into future spending on contracted services. We recommended that the Secretaries of the Army, Navy, and Air Force revise their programming guidance to collect information on how contracted services will be used to meet requirements beyond the budget year. We also recommended that the Secretary of Defense establish a mechanism, such as a working group, to ensure the military departments’ efforts to integrate services into the programming process and to develop forecasts on service contract spending provided the department with consistent data. DOD partially concurred with both recommendations but did not indicate any planned actions to implement them. DOD has not fully implemented the three key leadership positions—FDEs, CLLs, and SSMs—that were identified in DOD’s January 2016 instruction and that were to enable DOD to more strategically manage service acquisitions. DPAP-SA officials noted that the officials appointed to be FDEs had multiple responsibilities and considered their FDE roles as secondary. Additionally, CLLs largely existed in name only. Consequently, FDEs and CLLs have had a minimal effect on how DOD manages services. More importantly, we found that SSMs were unsure about the value of FDEs and CLLs and how these positions were to influence decisions made by the commands. In particular, SSM officials cited cultural barriers to implementing the hierarchical approach to service acquisition envisioned in DOD’s January 2016 instruction, in part because each military department has traditionally taken a decentralized approach to managing services. 
Our analysis of DOD fiscal year 2016 service contract obligations found that DOD could improve the management of services by better targeting individual military commands that were responsible for awarding the majority of their department’s contract obligations for service portfolios. DPAP-SA officials responsible for services were aware of the implementation challenges and have efforts underway to revise the January 2016 instruction, in part to further clarify position authorities and responsibilities. We found that FDEs and CLLs have not been effective in improving DOD’s ability to strategically manage service acquisitions. DOD’s January 2016 instruction formalized a hierarchical approach to more strategically manage service acquisitions by portfolio within both OSD—through the use of FDEs—and the components—through the use of CLLs. Specifically, the January 2016 instruction stated that portfolio management enables a framework for strategic oversight by OSD, coupled with decentralized execution by the DOD components to improve the transparency of requirements across DOD, reduce redundant business arrangements, and increase awareness of alternatives. These positions, which were initially established in 2013 as part of the Better Buying Power initiative, were assigned a broad range of responsibilities and were to coordinate their efforts with the military departments’ SSMs, who are responsible for strategic planning, sourcing, execution, and management of services within each military department (see table 1). Rather than creating new positions within OSD or the military departments to fill these leadership positions, DOD added services acquisitions-related responsibilities to existing positions. For example, see table 2 for the existing positions held by FDEs. 
DPAP-SA officials explained that the appointment of senior OSD officials was intended to give the positions the necessary visibility to carry out their responsibilities to provide strategic portfolio leadership to achieve greater efficiencies and reduce costs in services acquisition. DPAP-SA officials acknowledged, however, that implementation of the FDE positions has been beset by challenges. The senior OSD officials already had broad departmental management responsibilities and were assigned additional FDE responsibilities that were not within their control. For example, FDEs were tasked with forecasting and budgeting services requirements and developing policies to help prioritize requirements. In this regard, as noted in the DOD January 2016 instruction, establishing and budgeting for service acquisitions are the responsibility of officials within the military commands and installations under DOD’s Planning, Programming, Budgeting, and Execution process. Neither DOD’s October 2013 letter that appointed the FDEs nor DOD’s January 2016 instruction provided specific guidance on how to accomplish these responsibilities. These senior OSD officials also considered their FDE responsibilities as secondary, “other duties as assigned,” and in some cases were assigned multiple portfolios. For example, the Principal Deputy Assistant Secretary of Defense for Logistics and Materiel Readiness—who served as the FDE for three portfolios that comprised $22.9 billion in obligations in fiscal year 2015—serves primarily as the principal advisor to the USD(AT&L) in the oversight of logistics policies, practices, operations, and efficiencies. Similarly, the Deputy Director for DPAP-SA—who is responsible for the technical and programmatic evaluation and functional oversight of all aspects of DOD service acquisitions—was named FDE for two of the six knowledge-based services portfolio categories identified in table 2. 
DPAP-SA officials told us that, given their other duties, FDEs devoted only minimal time to fulfilling their FDE responsibilities. Similarly, we found that the CLLs were generally appointed by the military departments, but were not actively engaged in the strategic management of specific services portfolios, as called for in the January 2016 instruction. For example, Air Force officials said that a CLL-like position had been tried unsuccessfully in the past and therefore they were reluctant to establish new CLL positions. Army officials identified staff to serve as CLLs, but acknowledged that the CLLs were not active because it was not a management priority. Navy SSM officials established Portfolio Managers within the SSM’s office to carry out CLL responsibilities, but these positions had not actively managed services at the Navy’s major commands. As a result, CLLs had a minimal effect on how DOD strategically manages and oversees services. Similar to the approach taken to create FDEs and CLLs, each of the three military departments created SSMs by appointing senior officials within their respective acquisition or contract policy offices (see table 3). However, SSMs identified challenges in executing their SSM responsibilities, including the lack of responsibility for developing or approving requirements or related funding requests and difficulties in identifying data or metrics to support strategic management. Further, while SSMs recognize the need to further improve management of services in their respective military departments, they were not convinced that the hierarchical, portfolio-based approach outlined in the January 2016 instruction would achieve the intended benefits. The three SSMs we interviewed were unsure about the value of FDEs and CLLs and how these positions were to influence decisions made by the commands. 
Further, SSM officials noted cultural barriers to implementation, in that commanders are reluctant to give up responsibility for determining how and which services are needed to meet their missions. In addition, the January 2016 instruction underscores that the execution of services is a commander’s responsibility. For example, each of the SSMs told us that commanders are responsible for fulfilling services requirements needed to accomplish missions within their allocated resources. Consistent with this perspective, SSMs have not implemented a hierarchical, portfolio-based approach to services within their departments. The January 2016 instruction requires SSMs to strategically manage each service portfolio group with CLLs as appropriate to develop metrics, best practices, and data to achieve effective execution of the service contract requirements within each portfolio. SSMs told us, however, that they viewed their appropriate role as helping commands improve existing processes to better acquire and manage services. For example, each SSM conducts an annual services health assessment at each command to provide a qualitative picture of programs’ processes and management. In 2015, for instance, the Air Force asked each command to self-assess six qualitative performance areas, such as program management and fiscal responsibility. In turn, SSMs are to use this and other information to influence and educate the service acquisition community through working groups, training, and sharing best practices. DPAP-SA officials acknowledged that implementation of the hierarchical approach envisioned in the January 2016 instruction is not working as intended, in part because the approach does not fully address concerns that a more top-down approach to service acquisitions may adversely affect commanders’ ability to meet their missions. 
In that regard, our analysis of DOD fiscal year 2016 service contract obligations found that depending on the organization’s structure and mission, specific commands within the military departments award the majority of contract obligations for particular portfolios of services (see table 4). For example, the Army Materiel Command and Air Force Materiel Command obligated almost all of their respective military department’s dollars for logistics management and equipment-related service contracts. Conversely, the Naval Air Systems Command was responsible for a much smaller percentage of obligations for these and other services. Other Navy commands had the vast majority of service contract obligations for particular portfolios. For example, the Naval Facilities Engineering Command obligated 84 percent of the Navy’s dollars for facility-related service contracts in fiscal year 2016. In February 2017, DPAP-SA held an initial meeting with the key stakeholders in the services management structure—for example, FDEs and SSMs—to discuss revising the instruction. This effort includes providing clearer definitions of terms such as service acquisition, revising service acquisition category review thresholds, and determining whether FDEs are needed in light of federal category management efforts. Federal internal control standards state that management should establish an organizational structure, assign responsibilities, and delegate authorities to achieve its objectives. That structure should allow the organization to plan, execute, control, and assess progress toward achieving its objectives. 
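The command-level concentration described above amounts to computing each command's percentage share of a portfolio's total obligations. A minimal sketch follows; the dollar figures are hypothetical, chosen only for illustration, and are not actual contract obligation data.

```python
# Compute each command's share of obligations within a service portfolio.
# The commands and dollar figures below are hypothetical, for illustration only.

obligations = [  # (command, portfolio, obligations in $ millions)
    ("Naval Facilities Engineering Command", "facility-related", 8400),
    ("Naval Air Systems Command", "facility-related", 600),
    ("Other Navy commands", "facility-related", 1000),
]

def shares_by_command(rows, portfolio):
    """Return {command: percent of the portfolio's total obligations}."""
    in_portfolio = [(cmd, amt) for cmd, p, amt in rows if p == portfolio]
    total = sum(amt for _, amt in in_portfolio)
    return {cmd: round(100 * amt / total) for cmd, amt in in_portfolio}

print(shares_by_command(obligations, "facility-related"))
# → {'Naval Facilities Engineering Command': 84, 'Naval Air Systems Command': 6,
#    'Other Navy commands': 10}
```

The same aggregation, run over an entire fiscal year of obligation records, is the kind of analysis that identifies which commands account for the bulk of a department's spending in each portfolio.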
Further, management should periodically review its policies, procedures, and related control activities for continued relevance and effectiveness in achieving the organization’s objectives, and if there is a significant change in its process, management should review the process in a timely manner after the change to determine that control activities are designed and implemented appropriately. DOD’s ongoing effort to revise the January 2016 instruction provides the department the opportunity to reassess whether the hierarchical approach currently in place would, if fully implemented and resourced, enable the department to achieve its goal of strategically managing service acquisitions, or conversely, whether an approach that focuses on strategically managing services at the military department or command level may fare better. DOD’s January 2016 instruction formalized the requirement to hold SRRBs to validate, prioritize, and approve service requirements from a holistic viewpoint—an approach that comprehensively considers service requirements within and across portfolios. We found, however, that the three military commands we reviewed did not implement SRRBs that approved service requirements from a holistic perspective, but instead leveraged their existing contract review boards, which focus their efforts on ensuring that proposed contract solicitations and awards comply with federal acquisition regulations and DOD guidance. As a result, SRRBs had a minimal effect on supporting trade-off decisions in the service portfolios or assessing opportunities for efficiencies and eliminating duplicative requirements. The January 2016 instruction requires DOD organizations and components to establish a process for senior leaders to review, prioritize, validate, and approve each service requirement with a value of $10 million or greater. 
DOD guidance for implementing the instruction notes that an SRRB is a structured process that, among other things, is to inform, assess, and support trade-off decisions by senior leaders regarding the cost, schedule, and performance of service requirements; identify opportunities for efficiencies, such as realignment of requirements to better align to mission, identification and elimination of duplicative capabilities, and identification of strategic sourcing capabilities; be holistic and requirement-focused rather than contract-focused; produce a prioritized list of both funded and non-funded existing and anticipated requirements; be established and managed by, and held at, the requiring command or organizational unit, because that is where the requirement owner and funding are located; be held at least annually, but may be held more often as determined by the requiring organization; and validate a service requirement before approval of an acquisition strategy. According to the Deputy Director of DPAP-SA, the SRRB process is intended to provide senior leaders more visibility over contracted services and requirements, and to provide opportunities to collect data and assess lessons learned and best practices from contracting, not only at individual command levels but across the military departments and DOD. However, the instruction did not specify when boards should occur or how the results of the SRRBs would be captured or used to inform programming and budget decisions. Further, the instruction required commands to ensure, prior to contract award, that more tactical contracting elements were considered, such as workforce needs and the sufficiency of market research. The instruction also provided the military departments with flexibility in how they achieved these objectives. 
As a result, military department SSMs noted that, rather than creating a new SRRB process, they leveraged existing processes for reviewing and approving proposed service contract actions to meet the intent of the January 2016 instruction. For example, pursuant to Air Force Instruction 63-138, the Air Force Materiel Command utilized its Requirements Approval Document and database as its SRRB process. Air Force officials noted that they have used this process since 2008. Similarly, pursuant to Army Regulation 70-13, the Army Materiel Command used its Service Requirements Review Board or SR2B—established in 2010—as its SRRB process, while the Naval Air Systems Command used its Workload and Force Planning process—established in 2004—as its SRRB process. The Navy’s approval process is governed by its 2012 SRRB guidance. While each of the processes varied in certain regards, these processes are designed to ensure that requirements for individual services acquisitions have been validated; sufficient funding is available for the proposed action; appropriate acquisition planning and market research have been conducted; and the proposed solicitation and proposal evaluation criteria are consistent with the requirement. Consequently, we found that the SRRB processes at the commands we reviewed did not holistically assess requirements within specific service portfolios as outlined in the January 2016 instruction. Further, since command SRRB processes were centered on approving individual contract actions, we found that SRRBs were held throughout the year and did not identify or document resulting savings or other efficiencies. As a result, SRRBs at the three commands we reviewed had a minimal effect on supporting trade-off decisions in the service portfolios or assessing opportunities for efficiencies and eliminating duplicative requirements that could inform the command’s program objective memorandum (POM) submissions. 
In contrast, we recently reported that non-military department DOD organizations, in accordance with the instruction, conducted SRRBs that holistically assessed service requirements, which led to the identification of hundreds of millions of dollars in cost savings for the period fiscal years 2017-2019 and were incorporated in the department’s fiscal years 2018-2022 POM. These organizations included the Defense Logistics Agency and the Defense Threat Reduction Agency, among others. To accomplish these savings, the Deputy Chief Management Officer (DCMO) convened SRRBs that required each of the defense agencies and components they reviewed to identify service contracts by portfolio from a holistic perspective and make trade-off decisions based on risk assessment, timelines, and requirements that could be reduced or eliminated to generate efficiencies. In turn, a Senior Review Panel composed of DCMO (chair), the Principal Deputy Under Secretary of Defense for Acquisition, Technology, and Logistics, and Principal Staff Assistants approved the proposed savings or directed alternative reductions. Some military departments are exploring options to expand the role of SRRBs in the future to integrate service contract requirements into their POM process to allow them to better identify or forecast service contracts spending and trends. For example, the Army SSM noted that the Army plans to direct all Army commands to identify all service requirements and their associated contracts in the fiscal years 2018-2022 POM, and that this effort is intended to improve insight into future service contract requirements and to better control spending on service contracts. DPAP-SA and SSM officials also told us that the SRRB process would be more effective if it were better aligned with the POM, but DPAP-SA officials have not yet decided whether to include this element as part of the instruction update. 
Federal internal control standards call for agency management to identify, analyze, and respond to risks related to achieving defined objectives. Our work found that DOD and military department officials did not implement a portfolio-based approach when conducting SRRBs, and given that SRRBs were held throughout the year, it was unclear whether efficiencies were achieved or how the SRRB process helped inform command POM submissions. In February 2016, we recommended that military departments integrate services into the programming process and update programming guidance to collect budget information on how contracted services will be used to meet requirements. Similarly, moving the SRRB to align with the POM process could help the military departments better identify, prioritize, and validate service requirements to support programming and budget decisions. Until DOD clarifies the purpose and timing of the SRRB process, DOD components may not be achieving the expected benefits of DOD’s SRRB process. DOD’s experience in implementing the January 2016 instruction highlights a number of deeply embedded institutional challenges that must be overcome before DOD can achieve a more strategic and portfolio-based approach to managing services. The January 2016 instruction sought to balance the benefits of a more hierarchical and strategic approach, such as identifying efficiencies and developing useful metrics tailored to portfolios through FDEs and CLLs, while retaining the ability of commanders to meet mission needs. This effort, simply stated, has not been successful. In practice, FDE and CLL positions generally have not produced tangible results or benefits, and SSMs have questioned their overall value. Moreover, this concept has faced strong cultural resistance, as it required a change to DOD’s traditional decentralized approach to managing services. 
As DOD works to update the instruction, it has an opportunity to either reaffirm and empower FDEs and CLLs and then hold them accountable for results, or more broadly reassess and rethink how best to tailor its approach to services. Our past work cautioned that a top-down, one-size-fits-all approach may not work. Our current analysis shows that certain commands already manage or award the majority of a particular service and are more closely aligned to the commanders that are responsible for executing the mission. This raises the question of whether they would be in a better position to strategically manage specific service portfolios. Complementing this approach would be to provide clarity on the purpose and timing of the SRRBs to help commanders make better trade-off and resource decisions and inform DOD’s programming and budget processes. Until DOD takes action to address the implementation challenges with the FDEs, CLLs, and SSMs, and clarifies the purpose and timing of SRRBs, the expected benefits of its efforts to better manage service acquisitions will not be realized. To help foster strategic decision making and improvements in the acquisition of services, we recommend that the Under Secretary of Defense for Acquisition, Technology, and Logistics take the following two actions as part of its effort to update the January 2016 instruction: Reassess the roles, responsibilities, authorities, and organizational placement of key leadership positions, including functional domain experts, senior services managers, and component level leads; and Clarify the purpose and timing of the SRRB process to better align it with DOD’s programming and budgeting processes. We provided a draft of this report to DOD for review and comment. In its written comments, reproduced in appendix I, DOD concurred with our two recommendations. Regarding our first recommendation, DOD concurred with the need to reassess key leadership positions’ roles and responsibilities. 
DOD indicated that an internal review of the January 2016 instruction found that portfolio oversight of services through FDEs was not providing the desired benefits, and as such, DOD is considering alternatives. Regarding our second recommendation that DOD clarify the purpose and timing of the SRRB process, DOD concurred, noting that lessons learned from implementation of SRRBs in non-military department organizations showed benefits. DOD stated that a rewrite of the January 2016 instruction will include additional clarifying policy. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; and the Under Secretary of Defense for Acquisition, Technology, and Logistics. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In addition to the contact named above, W. William Russell (Assistant Director), Joe E. Hunter (Analyst in Charge), Stephanie Gustafson, Julia Kennon, Jonathan Munetz, Claudia A. Rodriguez, Sylvia Schatz, and Roxanna T. Sun made significant contributions to this review.
In fiscal year 2016, DOD obligated about $150 billion, or just over half of its total contract spending, on contracted services. In January 2016, DOD issued an instruction on services that identified three key leadership positions, clarified their roles and responsibilities, and called for Services Requirements Review Boards to holistically approve service requirements above $10 million. The House Armed Services Committee report accompanying the National Defense Authorization Act for Fiscal Year 2015 included a provision for GAO to report on DOD's acquisition of contracted services. This report assesses implementation of (1) key services acquisitions leadership positions and (2) Services Requirements Review Boards. GAO reviewed the roles and responsibilities of the three key leadership positions identified in DOD's January 2016 instruction. GAO also selected three military commands with large fiscal year 2015 contracted services obligations based on analysis of federal procurement spending; reviewed Review Board implementation for the selected commands; and interviewed responsible DOD, military department, and command officials. The Department of Defense (DOD) has not fully implemented the three key leadership positions—functional domain experts (FDE), component level leads (CLL), and senior services managers (SSM)—that were identified in DOD's January 2016 instruction and which were to enable DOD to more strategically manage service acquisitions (see table). Defense Procurement and Acquisition Policy officials noted that the officials appointed to be FDEs had multiple responsibilities, and considered their FDE roles as secondary. Additionally, CLLs largely existed in name only. Consequently, FDEs and CLLs had a minimal effect on how DOD manages services. 
GAO also found that SSMs—who are responsible for implementing the January 2016 instruction within their military departments—were unsure about the value of FDEs and CLLs and how these positions should influence decisions made by the commands. Moreover, the SSMs GAO interviewed cited cultural barriers to implementing the hierarchical, portfolio-management approach to service acquisition envisioned in DOD's January 2016 instruction, in part because each military department has traditionally taken a decentralized approach to managing services. Defense Procurement and Acquisition Policy officials responsible for services were aware of these challenges and have begun efforts to revise the January 2016 instruction, in part to further clarify position authorities and responsibilities. Federal internal control standards state that management should establish an organizational structure, assign responsibilities, and delegate authorities to achieve its objectives. Services Requirements Review Boards were intended to prioritize and approve services in a comprehensive portfolio-based manner in order to achieve efficiencies, but the military commands GAO reviewed did not do so. Instead, commands largely leveraged existing contract review boards that occurred throughout the year and focused on approving individual contracts. As a result, the Services Requirements Review Boards at these commands had minimal effect on supporting trade-off decisions within and across service portfolios or capturing efficiencies that could inform the command's programming and budgeting decisions. Federal internal control standards call for management to identify, analyze, and respond to risks related to achieving defined objectives. Until DOD clarifies the purpose and timing of the Services Requirements Review Boards process, DOD components will not achieve the expected benefits as anticipated in the January 2016 instruction. 
GAO recommends that DOD reassess the roles, responsibilities, authorities, and organizational placement of the three key leadership positions; and clarify policies concerning the purpose and timing of the Review Board process. DOD concurred with the recommendations.
The primary mortgage market features a variety of loan products and relies, in part, on the process of securitization to provide funds for mortgage lending. Over the years, a number of federal and state laws and regulations were implemented to protect mortgage borrowers. In 2007, a bill (H.R. 3915) was introduced in Congress to strengthen consumer protections and included provisions that would have created a safe harbor for loans that met certain requirements. The primary mortgage market has several segments and offers a range of loan products: The prime market serves borrowers with strong credit histories and provides the most attractive interest rates and mortgage terms. The Alt-A market generally serves borrowers whose credit histories are close to prime, but the loans often have one or more higher-risk features, such as limited documentation of income or assets. The subprime market generally serves borrowers with blemished credit and features higher interest rates and fees than the prime market. Finally, the government-insured or -guaranteed market primarily serves borrowers who may have difficulty qualifying for prime mortgages but features interest rates competitive with prime loans in return for payment of insurance premiums or guarantee fees. HUD’s Federal Housing Administration (FHA) and the Department of Veterans Affairs (VA) operate the two main federal programs that insure or guarantee mortgages. Across all of these market segments, two types of loans are common: fixed-rate mortgages, which have interest rates that do not change over the life of the loans, and adjustable-rate mortgages (ARMs), which have interest rates that change periodically based on changes in a specified index. Other, less conventional loan products, referred to as nontraditional mortgage products, grew in popularity over the last decade (see table 1). Hybrid ARMs—which are fixed for a given period and then reset to an adjustable rate—also became popular in recent years, especially in the subprime market. 
In particular, a significant portion of subprime loans originated from 2003 through 2006 were 2/28 or 3/27 hybrid ARMs—that is, they were fixed for the first 2 or 3 years before resetting to often much higher interest rates and correspondingly higher mortgage payments. Other nontraditional mortgage products included interest-only or payment-option loans, which allowed borrowers to defer repayment of principal and possibly part of the interest for the first few years of the loan. A number of loan features also became more common over the past decade. While these features potentially expanded access to mortgage credit, they are often associated with higher default rates. These features included the following: Low and no-documentation loans. Originally intended for borrowers who had difficulty documenting income, such as the self-employed, these loans were made with little or no verification of a borrower’s income or assets. High loan-to-value (LTV) ratios. As homebuyers made smaller down payments, the ratio of loan amount to home value increased. Prepayment penalties. Some loans contained built-in penalties for repaying part or all of a loan in advance of the regular schedule. Many loans were originated with a number of these features, a practice known as risk layering. The secondary mortgage market and the process of securitization play important roles in providing liquidity for mortgage lending. Mortgage lenders originate and then sell their loans to third parties, freeing up funds to originate more loans. Securitization, in this context, is the bundling of mortgage loans into investment products called residential MBS that are bought and sold by investors. 
The secondary market consists of (1) Ginnie Mae-guaranteed MBS, which are backed by cash flows from federally-insured or -guaranteed mortgages; (2) government-sponsored enterprise (GSE) MBS, which are backed by mortgages that meet the criteria for purchase by Fannie Mae and Freddie Mac; and (3) private label MBS, which are backed by mortgages that do not conform to GSE purchase requirements because they are too large or do not meet GSE underwriting criteria. Investment banks have traditionally bundled most subprime and Alt-A loans into private label MBS, although since 2007, the market has slowed dramatically. The Truth in Lending Act (TILA), which was enacted in 1968, and the Home Ownership and Equity Protection Act of 1994 (HOEPA), which amended TILA in 1994, are among the primary federal laws governing mortgage lending. TILA was designed to provide consumers with accurate information about the cost of credit. Among other things, TILA requires lenders to disclose information about the terms of loans— including the amount financed, the finance charge, and the annual percentage rate (APR)—that can help borrowers understand the overall costs of their loans. Congress enacted HOEPA to amend TILA, in response to concerns about predatory lending. HOEPA regulates and restricts the terms and characteristics of certain kinds of high-cost mortgage loans that exceed certain thresholds in their APRs or fees (often referred to as “rate and fee triggers”). The Board of Governors of the Federal Reserve System (Federal Reserve) implements TILA and HOEPA through Regulation Z, which was amended in 2001 and 2008 with respect to high-cost lending. 
As a result of the most recent rulemaking in 2008, Regulation Z will restrict mortgage lending in the following ways, as of October 1, 2009: Higher-priced loans: First-lien loans with APRs that equal or exceed an index of average prime offer rates by 1.5 percentage points—a category meant to include virtually all loans in the subprime market, but generally exclude loans in the prime market—are called “higher-priced mortgage loans.” Creditors are prohibited from making these loans without regard to the borrower’s ability to repay from income and assets other than the home’s value, and creditors must verify the income and assets they rely upon to determine a borrower’s repayment ability. Also, prepayment penalties are prohibited for these loans if the payment can change in the first 4 years of the loan; for loans where the payment is fixed for at least the first 4 years, prepayment penalties are limited to 2 years. In addition, creditors must establish escrow accounts for this category of loans for property taxes and homeowners’ insurance. High-cost HOEPA loans: First-lien loans with APRs that exceed the yield on Treasury securities of comparable maturity by more than 8 percentage points or with total points and fees that exceed the greater of 8 percent of the loan amount or $583, are called “high-cost HOEPA loans.” For these loans, the law restricts prepayment penalties, prohibits balloon payments (i.e., a large balance due at maturity of the loan term) for loans with terms of less than 5 years, prohibits negative amortization, and contains certain other restrictions on loan terms or payments. 
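Because the two Regulation Z categories above turn on simple numeric thresholds, they can be illustrated with a short sketch. The function below is purely illustrative and is not an implementation of the regulation: it models only the first-lien rate and fee triggers described in this section (1.5 percentage points over the average prime offer rate index; 8 percentage points over the comparable-maturity Treasury yield; points and fees above the greater of 8 percent of the loan amount or $583), and the function and parameter names are our own.

```python
def classify_first_lien(apr, avg_prime_offer_rate, treasury_yield,
                        points_and_fees, loan_amount):
    """Illustrative sketch of the first-lien rate and fee triggers
    described above. Rates are in percent; dollar amounts in dollars.
    A real Regulation Z determination involves many additional rules
    (loan purpose, lien position details, exemptions) not modeled here."""
    categories = []
    # Higher-priced: APR equals or exceeds the average prime offer
    # rate index by 1.5 percentage points or more.
    if apr >= avg_prime_offer_rate + 1.5:
        categories.append("higher-priced")
    # High-cost HOEPA: APR exceeds the comparable-maturity Treasury
    # yield by more than 8 percentage points, or points and fees exceed
    # the greater of 8 percent of the loan amount or $583.
    if (apr > treasury_yield + 8.0 or
            points_and_fees > max(0.08 * loan_amount, 583)):
        categories.append("high-cost HOEPA")
    return categories or ["neither"]
```

For example, with an average prime offer rate of 5.5 percent, a 7.5 percent APR loan would fall into the higher-priced category only, while a loan whose APR also exceeds the Treasury yield by more than 8 points would fall into both.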
General provisions: For all loans, regardless of whether they fall into one of the above categories, Regulation Z includes a number of basic disclosure requirements and prohibits certain activities considered to be unfair, deceptive, misleading, abusive, or otherwise problematic, such as coercing a real estate appraiser to misstate a home’s value, and abusive collection practices by loan servicers. Each federal banking regulator is charged with enforcing TILA and HOEPA with respect to the depository institutions it regulates, and the FTC has responsibility for enforcing the statutes for mortgage brokers and most financial entities other than banks, thrifts, and federal credit unions. The Federal Reserve has concurrent authority to enforce TILA and HOEPA for non-bank subsidiaries of bank holding companies. In addition to TILA and HOEPA, some other federal laws govern aspects of mortgage lending. For example, the Real Estate Settlement Procedures Act (RESPA), passed in 1974, seeks to protect consumers from unnecessarily high charges in the settlement of residential mortgages by requiring lenders to disclose details of the costs of settling a loan and by prohibiting kickbacks (payments made in exchange for referring a settlement service) and other costs. HUD has primary rule-writing authority and is responsible for enforcing RESPA. HUD coordinates on RESPA issues, as it deems appropriate, with federal banking regulators and other federal agencies, such as the FTC and the Department of Justice. In addition, the federal banking agencies, under section 8 of the Federal Deposit Insurance Act, examine for and enforce compliance with RESPA’s requirements with respect to the institutions they supervise. 
Finally, the Federal Deposit Insurance Act and Federal Credit Union Act allow federal banking regulators to use their supervisory and enforcement authorities to ensure that an institution’s conduct with respect to consumer protection laws does not affect its safety and soundness or that of an affiliated institution. In conjunction with enforcing federal statutes, federal banking regulators have issued guidance to their institutions—including federally-regulated banks, thrifts, credit unions, holding companies and their subsidiaries— about nontraditional and subprime lending. In September 2006, banking regulators issued final guidance clarifying how institutions can offer nontraditional mortgage products in a safe and sound manner, and in a way that clearly discloses the risks that borrowers may assume. The guidance provides specific steps institutions should take to help ensure that loan terms and underwriting standards are consistent with prudent lending practices, including considering a borrower’s repayment capacity; ensuring strong risk management standards, including capital levels; and ensuring that consumers have sufficient information to clearly understand loan terms and associated risks. In June 2007, banking regulators issued a final statement on subprime lending, in response to concerns about certain types of loans that could result in payment shock to borrowers. The statement warned institutions about risks associated with subprime loans with adjustable rates with low initial payments, based on fixed introductory rates that expire after a short period, limited or no documentation of income, prepayment penalties that were very high or that extended beyond the initial fixed rate period, and other product features likely to result in frequent refinancing to maintain an affordable monthly payment. 
In response to concerns about the growth of predatory lending over the past decade, many states have enacted laws to restrict the terms or provisions of certain types of mortgage loans. According to the Congressional Research Service, at least 30 states and the District of Columbia had enacted a wide array of such laws, as of November 2008. Many of these state laws are similar to HOEPA in that they regulate and restrict the terms and characteristics of certain kinds of high-cost mortgages exceeding certain interest rate or fee thresholds that require enhanced protections. Like HOEPA, these laws often restrict certain loan features that can, in certain cases, be abusive—such as prepayment penalties, balloon payments, negative amortization, and loan flipping—and many laws also require enhanced disclosures and credit counseling. While some laws are only minimally different than HOEPA, others are more comprehensive. Significant debate has taken place as to the advantages and disadvantages of state predatory lending laws. In several cases, regulators of federally supervised financial institutions have determined that federal laws preempt state predatory lending laws for the institutions they regulate. In making these determinations, two regulators—the Office of the Comptroller of the Currency (OCC) and Office of Thrift Supervision (OTS)—have cited federal law that provides for uniform regulation of federally chartered institutions and have noted the potential harm that state predatory lending laws can do to legitimate lending. Many state officials and consumer advocates are opposed to federal preemption of state predatory lending laws. They maintain that federal laws related to predatory lending are insufficient, and that preemption, therefore, interferes with their ability to protect consumers in their states. 
The first state predatory lending law, the North Carolina Anti-Predatory Lending Law of 1999, has been the subject of particular attention by researchers and policymakers. The law was more restrictive than HOEPA was at the time. Among other things, it banned prepayment penalties on all home loans with a principal amount of $150,000 or less, and prohibited loan flipping (refinancings of consumer home loans that do not provide a reasonable, net tangible benefit to the borrower). It included more restrictions for a category of high-cost loans, which were defined to include lower points and fee triggers than HOEPA, as well as a third trigger that included any loan with a prepayment penalty that could be collected more than 30 months after closing or that was greater than 2 percent of the amount paid. The U.S. House of Representatives passed H.R. 3915—the Mortgage Reform and Anti-Predatory Lending Act of 2007—on November 15, 2007, in response to significant increases in mortgage defaults and foreclosures, especially among subprime borrowers. However, the bill was not enacted into law before the end of the 110th Congress. The bill would have reformed mortgage lending by, among other things, setting minimum standards for residential mortgage loans (see fig. 1). The two standards included: Reasonable ability to repay. The bill would have created a “reasonable ability to repay” standard by prohibiting a creditor from making a residential mortgage loan without making a determination based on verified and documented information that a consumer was likely to be able to repay the loan, including all applicable taxes, insurance, and assessments. Such a determination was to be based on the consumer’s credit history, current and expected income, obligations, debt-service-to-income (DTI) ratio, employment status, and financial resources other than any equity in the real property securing the loan. 
Additionally, the bill would have required lenders making ARMs to qualify borrowers at the fully indexed rate. However, the actual standard was to be prescribed in regulation by the federal banking agencies, in consultation with the FTC. Net tangible benefit. The bill would have created a “net tangible benefit” standard by prohibiting a creditor from refinancing a loan without making a reasonable good faith determination that the loan would provide a net tangible benefit to the consumer. The bill stated that a loan would not meet the standard if the loan’s costs exceeded the amount of newly advanced principal, without any corresponding changes in the terms of the refinanced loan that were advantageous to the consumer. However, the term “net tangible benefit” was to be defined in regulation by the federal banking agencies. The specific responsibilities of lenders to meet the standards, and the rights of consumers to take action against lenders to claim standards had not been met, depended on the category of the loan. Under the bill, loans would have been classified into three basic categories: Qualified mortgages would have had relatively low APRs, been insured by FHA, or been made or guaranteed by VA. This category was intended to include most prime loans. Specifically, a loan would have been considered a qualified loan if either the APR was less than 3 percent above the yield on comparable Treasury securities, or less than 1.75 percent above the most recent conventional mortgage rate (a term that would have been more explicitly defined in regulation). For second-lien loans, the limits would have been 5 and 3.75 percent, respectively. Qualified mortgages would have been presumed under the law to meet the “ability to repay” and “net tangible benefit” standards, and for these loans, the creditor’s presumption could not be rebutted by borrowers. 
Qualified safe harbor mortgages would have fallen outside of the definition of qualified mortgages (i.e., would not have met this standard), but would have met certain underwriting requirements. This category was intended to include subprime loans that did not contain certain high-risk features. Specifically, these mortgages were required to (1) have full documentation, (2) be underwritten to the fully indexed rate, (3) not negatively amortize, and (4) have a fixed rate for at least 5 years, have a variable rate with an APR less than 3 percentage points over a generally accepted interest rate index, or meet a DTI ratio to be established in regulation. Qualified safe harbor mortgages, like qualified mortgages, would have been presumed under the law to meet the “ability to repay” and “net tangible benefit” standards. Unlike borrowers with qualified mortgages, however, borrowers with these mortgages would have had the right to challenge a creditor’s presumption that these loans met the “ability to repay” and “net tangible benefit” standards. Nonqualified mortgages would have fallen outside of the two definitions above (i.e., would not have met either standard). This category was intended to include subprime loans with high-risk features. For these loans, the law would have required lenders to meet the reasonable ability to repay and net tangible benefit standards, as well as provide borrowers with the ability to challenge such determinations by creditors and assignees. As shown in figure 1, the bill would also have imposed restrictions on specific loan terms, depending on the loan category. First, the bill would have prohibited prepayment penalties for loans that were not qualified mortgages and would have required the penalties on all qualified mortgages with an adjustable interest rate to expire 3 months before the initial interest rate adjustment. 
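Because the three categories turn on numeric APR spreads plus a handful of underwriting conditions, the classification logic described above can be sketched in a few lines. This is an illustrative simplification only: the FHA/VA insurance route to qualified status and the DTI test the bill left to regulation are not modeled (the DTI and rate-structure tests are folded into a single flag), and the function and parameter names are our own, not the bill's.

```python
def bill_category(apr, treasury_yield, conventional_rate, second_lien=False,
                  full_doc=False, fully_indexed_underwriting=False,
                  negative_amortization=False, rate_structure_ok=False):
    """Sketch of the three loan categories in H.R. 3915 as described
    above; the actual statutory tests included further detail left to
    regulation. Rates are in percent."""
    # Qualified mortgage: APR under the Treasury-based threshold or the
    # conventional-rate-based threshold (higher limits for second liens).
    # The FHA-insured / VA-guaranteed route to qualified status is omitted.
    t_spread, c_spread = (5.0, 3.75) if second_lien else (3.0, 1.75)
    if apr < treasury_yield + t_spread or apr < conventional_rate + c_spread:
        return "qualified"
    # Qualified safe harbor: meets the underwriting requirements --
    # full documentation, underwritten to the fully indexed rate, no
    # negative amortization, and an acceptable rate structure (fixed for
    # at least 5 years, a capped variable spread, or a qualifying DTI).
    if (full_doc and fully_indexed_underwriting and
            not negative_amortization and rate_structure_ok):
        return "qualified safe harbor"
    # Everything else: subprime loans with high-risk features.
    return "nonqualified"
```

For instance, with a 4 percent Treasury yield and a 5 percent conventional rate, a 6 percent APR first-lien loan would be qualified; a 9 percent APR loan would be a qualified safe harbor mortgage only if it met all of the underwriting conditions, and nonqualified otherwise.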
Second, negative amortization loans to first-time borrowers would have been prohibited, unless the creditor made certain disclosures to the consumer and the consumer had received homeownership counseling from a HUD-certified organization or counselor. Finally, single-premium credit insurance and mandatory arbitration on mortgage loans would have been prohibited for all loans. The bill would have established additional liability for creditors of qualified safe harbor and nonqualified mortgages (see fig. 1). In addition, it would have established limited liability for assignees of nonqualified mortgages. Borrowers would have been able to bring civil actions against creditors or assignees if loans violated the “reasonable ability to repay” or “net tangible benefit” standards. Creditors would have been liable for the rescission (i.e., cancellation) of a loan and the borrower’s costs associated with the rescission unless they could make the loan conform to minimum standards within 90 days. In addition, assignees would have been liable for the rescission of a loan and for borrower costs associated with the rescission unless the loan could be made to conform to the minimum standards within 90 days, or unless the assignee (1) had a policy against buying loans that were not qualified loans or qualified safe harbor loans, (2) exercised reasonable due diligence, as defined in regulation by the federal banking agencies and the SEC, and (3) had agreements with the seller or assignees of loans requiring that certain standards be met and certain steps be taken. The bill included additional provisions to resolve situations in which the parties could not agree on loan changes and set certain time frames for addressing challenges to these changes. Liability would not have been extended to pools of loans, including the securitization vehicles, or investors in pools of loans. 
According to the House Committee Report on the bill, it was not intended to apply to trustees or titleholders who held loans solely for the benefit of the securitization vehicle. The bill would also have expanded the definition of "high-cost" loans under HOEPA. Specifically, the bill would have included home purchase loans in the definition, reduced the points and fees trigger from 8 to 5 percent (the APR trigger would have remained at 8 percentage points), and expanded the definition of points and fees for high-cost mortgages. The bill would have also added a third high-cost trigger for loans with prepayment penalties that applied for more than 3 years or exceeded 2 percent of the prepaid amount. Further, the bill would have enhanced existing HOEPA restrictions on lending without repayment ability by presuming that creditors who engaged in a pattern or practice of making high-cost mortgages without verifying or documenting consumers' repayment ability had violated HOEPA. Finally, the bill would have established a federal duty of care for mortgage originators; prohibited steering of consumers eligible for qualified mortgages to nonqualified mortgages; established a licensing and registration regime for loan originators; established an Office of Housing Counseling within HUD and imposed additional counseling requirements; made changes to mortgage servicing and appraisal requirements; and provided protections for renters in foreclosed properties. We estimate that almost three-quarters of securitized nonprime mortgages originated from 2000 through 2007 would not have been safe harbor loans. The extent to which mortgages would have met the individual safe harbor requirements varied substantially by origination year, reflecting changes in market conditions and lending practices over the 8-year period. We also found that the proportions of safe harbor and non-safe harbor loans varied across different census tract and borrower groupings. 
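The expanded HOEPA "high-cost" triggers described above reduce to three numeric tests, any one of which would classify a loan as high cost. The following is an illustrative sketch only, assuming the thresholds stated in the text (an 8-percentage-point APR spread, a 5 percent points-and-fees cap, and a prepayment penalty lasting more than 3 years or exceeding 2 percent); the function and parameter names are invented for illustration and do not appear in the bill.

```python
# Hypothetical sketch of the expanded HOEPA "high-cost" triggers described
# above. Thresholds follow the text; names and structure are illustrative.

def is_high_cost(apr: float, treasury_yield: float,
                 points_and_fees: float, loan_amount: float,
                 prepay_penalty_years: float = 0.0,
                 prepay_penalty_pct: float = 0.0) -> bool:
    """Return True if any of the three high-cost triggers is met."""
    # APR more than 8 percentage points over the comparable Treasury yield
    apr_trigger = (apr - treasury_yield) > 8.0
    # Points and fees exceeding 5 percent of the loan amount
    fees_trigger = (points_and_fees / loan_amount) > 0.05
    # Prepayment penalty lasting more than 3 years or over 2 percent
    # of the prepaid amount (expressed here as a fraction)
    prepay_trigger = prepay_penalty_years > 3 or prepay_penalty_pct > 0.02
    return apr_trigger or fees_trigger or prepay_trigger

# Example: fees of 6 percent of the loan amount trip the fees trigger
print(is_high_cost(apr=9.5, treasury_yield=4.0,
                   points_and_fees=12_000, loan_amount=200_000))  # True
```

Because the triggers are alternatives, a loan with a modest APR spread can still be classified as high cost on fees or prepayment-penalty grounds alone.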
Our statistical analysis of loan data shows that certain variables associated with the safe harbor requirements—documentation of borrower income and assets, in particular—were associated with the probability of a loan default. We found that other variables, such as house price appreciation and borrower credit score, were also associated with default rates. To illustrate the potential significance of the safe harbor requirements under different lending environments and market conditions, we applied those requirements to nonprime mortgages originated from 2000 through 2007 and calculated the proportions of loans that likely would and would not have met the requirements. Because of data limitations and uncertainty about how federal regulators would have interpreted some of the safe harbor requirements, our analysis includes a number of assumptions discussed in this section. (See appendix I for details about our methodology.) We estimate that almost 75 percent of nonprime mortgages originated from 2000 through 2007 would not have met the bill's safe harbor requirements. More specifically, the estimated proportion of non-safe harbor loans ranged from a low of 58 percent for 2001 to a high of 84 percent for 2006 (see fig. 2). The non-safe harbor loans were primarily ARMs, while the safe harbor loans were largely fixed-rate mortgages. For all 8 years combined, Alt-A mortgages represented about 37 percent of non-safe harbor loans, or slightly more than the Alt-A share of the nonprime market over this period (35 percent). Over this same period, subprime mortgages comprised about 63 percent of non-safe harbor loans, or slightly less than their 65 percent share of the nonprime market. The significance of particular safe harbor requirements varied by origination year. As previously noted, the safe harbor requirements include the following: Documentation and amortization. 
The mortgage would have to be underwritten based on full documentation of the borrower’s income and assets and could not have a negative amortization feature. Interest rate and debt burden. The mortgage would be required to have either (1) a fixed interest rate for at least 5 years, (2) a DTI ratio within a level to be specified in regulation (we used the 41 percent ratio that serves as a guideline in underwriting FHA-insured mortgages), or (3) an ARM with an APR of less than 3 percentage points over a generally accepted interest rate index. Because the loan data we used did not include APRs, we instead compared the initial interest rate on each loan to the relevant interest rate index. Fully indexed rate. The mortgage would have to be underwritten to the fully indexed interest rate (which the bill defines as the initial interest rate index, plus the lender’s margin). We could not determine from the data we used whether a mortgage was underwritten to the fully indexed rate. We created a proxy by assuming that the mortgage satisfied this requirement if the fully indexed rate was 1 percentage point or less over the initial interest rate, indicating a reasonable likelihood that the borrower could have qualified for a loan underwritten to the fully indexed rate. As shown in figure 3, there was an increasing trend in the proportion of nonprime loans originated from 2000 through 2007 that would not have met the safe harbor documentation and amortization requirements. More specifically, the estimated percentages of nonprime loans without full documentation ranged from a low of 27 percent in 2000 to a high of almost 60 percent in 2007. Also, from 2004 through 2007, the proportion of nonprime loans with a negative amortization feature increased steadily. 
The growth in these percentages reflects the increased use of low-documentation mortgages in both the subprime and Alt-A markets and mortgages with negative amortization features (e.g., payment-option ARMs) in the Alt-A market. In both cases, these products were originally intended for a narrow population of borrowers but, ultimately, became more widespread. For example, as we reported in 2006, payment-option ARMs were once specialized products for financially sophisticated borrowers who wanted to minimize mortgage payments to invest funds elsewhere or borrowers with irregular earnings who could take advantage of minimum monthly payments during periods of lower income and could pay down principal when they received an increase in income. However, according to federal banking regulators and a range of industry participants, as home prices increased rapidly in some areas of the country, lenders began marketing payment-option ARMs as affordability products and made them available to less creditworthy and lower-income borrowers. Substantial proportions of the nonprime loans made over the 8-year period we examined also did not meet the safe harbor interest rate and debt burden requirements, although the proportions varied by year: The proportion of nonprime originations that did not have a fixed interest rate for at least 5 years rose from 52 percent in 2000 to 64 percent in 2004 (see fig. 4). This increase can be attributed primarily to a shift in the Alt-A market away from fixed-rate mortgage products to adjustable-rate products. For example, in 2000 about 88 percent of Alt-A loans were fixed rate, but by 2004 this figure had dropped to about 38 percent. Beginning in 2005, the percentage of nonprime originations with adjustable rates began falling, reaching 37 percent in 2007. The decline was due in large part to a trend in the Alt-A market toward fixed-rate mortgages. 
As figure 4 also shows, the proportion of nonprime originations that did not have a DTI ratio under 41 percent grew over the 8-year period, rising from 43 percent in 2000 to 51 percent in 2006, although it fell slightly in 2007. The generally increasing trend is partly a result of house prices growing faster than borrowers' incomes over the period and of lenders allowing borrowers to take out larger mortgages relative to their incomes. For example, from 2000 through 2006, average home prices grew by 38 percent nationally, while over the same period, average incomes grew by just 23 percent. Finally, the proportion of nonprime ARM originations with initial interest rates not less than 3 percentage points over a generally accepted interest rate index (3 percent test) ranged from a high of 96 percent in 2002 to a low of 48 percent in 2007 (see fig. 4). The changing proportions over time were largely due to movements in the interest rate indexes used to set ARM interest rates that affected the size of the gap between the initial rates and the index values. For example, when the 2-year Treasury constant maturity rate (a common interest rate index) dropped from 2000 through 2002, the proportion of nonprime ARMs that did not meet the 3 percent test rose. But when the 2-year Treasury rate rose from 2004 through 2006, the proportion declined sharply. The bill's interest rate and debt burden requirements for safe harbor mortgages were structured so that a loan would only have to meet one of the three requirements. As a result, a loan could have failed one or even two of these requirements and still qualified for the safe harbor by meeting another. To illustrate, of the loans that qualified for the safe harbor by having a fixed interest rate for 5 or more years, almost one-half would not have met the DTI ratio requirement, assuming the 41 percent ratio we used for our analysis. 
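Taken together, the tests discussed in this section (full documentation, no negative amortization, the fully indexed rate proxy, and the meet-any-one-of-three interest rate and debt burden requirements) amount to a screen that can be sketched in a few lines. This is a minimal sketch only: the 41 percent DTI guideline and the 1-percentage-point fully-indexed-rate proxy are the assumptions stated in this analysis, and the field names are invented for illustration.

```python
def meets_safe_harbor(loan: dict) -> bool:
    """Hypothetical sketch of the safe harbor screen used in this analysis."""
    # Documentation and amortization: full documentation, no negative amortization
    if not loan["full_documentation"] or loan["negative_amortization"]:
        return False

    # Fully indexed rate proxy: index rate at origination plus the lender's
    # margin may exceed the initial rate by at most 1 percentage point
    fully_indexed = loan["index_rate"] + loan["margin"]
    if fully_indexed - loan["initial_rate"] > 1.0:
        return False

    # Interest rate and debt burden: the loan need meet only ONE of three tests
    fixed_5yr = loan["years_rate_fixed"] >= 5
    dti_ok = loan["dti_ratio"] <= 41.0                           # FHA guideline
    spread_ok = loan["initial_rate"] - loan["index_rate"] < 3.0  # 3 percent test
    return fixed_5yr or dti_ok or spread_ok

# A 2/28 hybrid ARM with a 45 percent DTI can still qualify via the 3 percent test
arm = {"full_documentation": True, "negative_amortization": False,
       "index_rate": 4.0, "margin": 0.8, "initial_rate": 4.5,
       "years_rate_fixed": 2, "dti_ratio": 45.0}
print(meets_safe_harbor(arm))  # True
```

The final `or` expression captures the either/or structure of the interest rate and debt burden requirements, which is why, as noted above, almost one-half of the fixed-rate qualifiers could fail the DTI test without losing safe harbor status.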
Some of the banking regulators we interviewed said that the DTI ratio was an important factor in assessing a borrower's ability to repay a mortgage loan. They said that all borrowers should be required to meet some DTI ratio in order for their loans to be eligible for the bill's safe harbor. Consistent with this view, H.R. 1728, which was passed by the House earlier this year, requires borrowers of safe harbor loans to meet a DTI ratio to be established by regulation. Over the 8-year period we examined, about 38 percent of the nonprime loans originated would not have met the safe harbor fully indexed rate requirement, although the proportions varied by year (see fig. 5). As previously noted, we assumed that if the fully indexed rate—that is, the index rate at origination plus the lender's margin—was more than 1 percentage point above the initial interest rate, the mortgage did not meet the requirement. The variation by year largely reflected changes in the index used to determine the fully indexed rate. More specifically, during years in which a commonly used index such as the 6-month LIBOR was relatively high (e.g., 2000 and 2005 through 2006), a larger proportion of the nonprime loans would not have met the requirement because the fully indexed rate would have been well above the initial interest rate of the loan. In contrast, during years in which the index was low (e.g., 2001 through 2004), a greater proportion of loans would have met the requirement because the fully indexed rate would have been close to the initial rate. For example, in 2000, when the average 6-month LIBOR was 6.7 percent, the proportion of nonprime loans that did not meet the fully indexed rate requirement was 47 percent. In 2003, when the average 6-month LIBOR was 1.2 percent, the proportion was 9 percent. 
A potential shortcoming of this requirement is that many ARMs could satisfy it when interest rates were low, but the mortgages could become unaffordable if interest rates were to rise and the borrower's payments adjusted upward to reflect the higher rates. However, it may be difficult to design a more stringent fully indexed rate requirement to provide protection during low interest rate environments without possibly reducing the availability of ARMs during high interest rate environments. Prior research has indicated that nonprime lending occurred disproportionately in areas with higher proportions of minority, low-income, and credit-impaired residents. Therefore, in contemplating the potential impact of the bill, one consideration is the extent to which nonprime mortgages made to these groups of borrowers would have fallen inside or outside of the safe harbor. For groups with higher proportions of non-safe harbor mortgages, the bill's impact on the availability of these loans and consumer protections for them may be particularly important. Accordingly, we examined the estimated proportions of safe harbor and non-safe harbor loans within various zip code and borrower groupings. Specifically, we looked at zip codes grouped by race, ethnicity, and income characteristics, as well as borrowers grouped by credit score. Our analysis of safe harbor and non-safe harbor loans by race and ethnicity groupings found that zip codes with higher percentages of households that Census identified as black or African-American had lower percentages of non-safe harbor loans than the nonprime borrower population as a whole. For example, in zip codes where black or African-American households made up 75 percent or more of the household population, the proportion of non-safe harbor loans was 68 percent, compared with 75 percent for all nonprime borrowers (see table 2). 
In contrast, in zip codes with higher percentages of households that Census identified as Hispanic or Latino, the percentages of non-safe harbor loans were higher than for nonprime borrowers as a whole. For example, in zip codes where Hispanic or Latino households comprised 75 percent or more of the household population, the percentage of non-safe harbor loans was 80 percent, or 5 percentage points higher than for all nonprime borrowers. For the remaining groupings we examined, the percentage of non-safe harbor loans for each grouping was essentially the same as that for the entire nonprime borrower population. Prior research has shown that a number of different loan, borrower, and economic variables influence the performance of a loan. To see if the bill's provisions appear to fulfill their consumer protection purpose, we developed a statistical model, based on the data available to us, to examine the relationship between safe harbor requirements, as well as a subset of other variables known to affect performance, and the probability of a loan defaulting within the first 24 months of origination. We defined a loan as being in default if it was delinquent by at least 90 days, in the foreclosure process (including loans identified as in real-estate-owned status), paid off after being 90 days delinquent or in foreclosure, or had already terminated with evidence of a loss. We focused on 24-month performance because a large proportion of nonprime borrowers—particularly those with hybrid ARMs—prepaid their loans (e.g., by refinancing) within 2 years. Using a 24-month time frame allowed us to include these loans in our model. The variables we used in the model included variables based on the individual safe harbor requirements, house price appreciation, borrower credit scores, and LTV ratios. We developed the model using data on nonprime mortgages originated from 2000 through 2006 (the latest year for which we could examine 24-month performance). 
We produced separate estimates for four types of loan products: (1) short-term hybrid ARMs (i.e., 2/28 or 3/27 mortgages), which accounted for 54 percent of the loans originated during this period; (2) longer-term ARMs (i.e., ARMs with interest rates that were fixed for 5, 7, or 10 years before adjusting), which accounted for 10 percent of originations; (3) payment-option ARMs, which represented 6 percent of originations; and (4) fixed-rate mortgages, which represented 30 percent of originations. Appendix II provides additional information about our model and estimation results. Consistent with the consumer protection purpose of the bill's provisions, we found that two safe harbor variables were associated with the probability of default. Across all product types, the safe harbor variable with the largest estimated influence on default probability was documentation of borrower income and assets. For example, less than full documentation was associated with a 5.5 percentage point increase in the estimated probability of default for short-term hybrid ARMs used for home purchases, all other things being equal (see table 4). The corresponding increases in estimated default probabilities for longer-term ARMs, payment-option ARMs, and fixed-rate mortgages were 4.8, 2.0, and 4.6 percentage points, respectively. The higher default probabilities associated with no- and low-documentation loans may reflect use of this feature to overstate the financial resources of some borrowers and qualify them for larger, potentially unaffordable loans. Our results are generally consistent with prior research showing an association between a lack of documentation and higher default probabilities. A second safe harbor variable that had a significant influence on default probability was the variable representing the difference between the loan's initial interest rate and the relevant interest rate index (the spread). 
As previously noted, ARMs with a difference of 3 percentage points or more over a generally accepted interest rate index would not meet one of the bill's safe harbor interest rate and debt burden requirements. To examine the effect of this variable for each product type, we estimated the default probability assuming the spread was near the 25th percentile (base assumption) for that product and compared this with the estimated default probability assuming the spread was near the 75th percentile (alternative assumption) for that product. We estimated that for short-term hybrid ARMs used for home purchases, moving from the lower spread to the higher one was associated with a 4.0 percentage point increase in default probability, all other things remaining equal (see table 5). The corresponding increases in estimated default probabilities for longer-term ARMs and fixed-rate mortgages were 1.8 and 2.6 percentage points, respectively. These results were generally consistent with other economic research showing a positive relationship between higher interest rates and default probabilities for nonprime mortgages. This relationship may reflect the higher monthly payments associated with higher interest rates and difficulties borrowers may face in making these payments, particularly during times of economic hardship. We also estimated the effect of the DTI ratio at origination and found that for all product types, this variable did not have a strong influence on the probability of default within 24 months. This relatively weak association may be due, in part, to changes in borrower income or indebtedness after loan origination. For example, a mortgage that is affordable to the borrower at origination may become less so if the borrower experiences a decline in income or takes on additional nonmortgage debt. Finally, we estimated the effect of the proxy variable we developed for the safe harbor requirement that loans be underwritten to the fully indexed rate. 
As previously noted, if the fully indexed rate was 1 percentage point or less over the initial interest rate, we assumed the loan met this requirement. For all product types, we found that this variable did not have a strong influence on the probability of default within 24 months (see app. II). It is possible that other model specifications—such as examining default probabilities beyond 24 months—would have yielded different results. For example, the difference between the initial interest rate and the fully indexed rate might have been more significant using such an alternative specification because the initial interest rates for many short-term hybrid ARMs begin adjusting upward after 24 months. In examining the influence of safe harbor variables on the probability of default within 24 months, we controlled for other variables not associated with the safe harbor requirements, such as house price appreciation, borrower credit score, and the LTV ratio. Because these variables have been shown to influence default probabilities, it was important to control for their effects in order to properly analyze the implications of the safe harbor provisions. Consistent with other economic research, we found that house price appreciation, borrower credit score, and the LTV ratio were strongly associated with default probabilities. The estimated influence of these variables on default probabilities for each product type was as follows: House price appreciation. We found that lower rates of house price appreciation were associated with a higher likelihood of default. For each product type, we estimated the default probability assuming house price appreciation near the 75th percentile for that product (base assumption) and compared this with the estimated default probability assuming house price appreciation near the 25th percentile for that product (alternative assumption). 
For short-term hybrid ARMs used for home purchases, moving from the higher rate of appreciation to the lower rate was associated with a 13.5 percentage point increase in estimated default probability (see fig. 6). The corresponding figures for longer-term ARMs, payment-option ARMs, and fixed-rate mortgages were 3.7, 1.3, and 3.5 percentage points, respectively. Borrower credit score. We found that lower credit scores were associated with a higher likelihood of default. For each product type, we estimated the default probability assuming a borrower credit score close to the 75th percentile for that product (base assumption) and compared this with the estimated default probability assuming a borrower credit score close to the 25th percentile for that product (alternative assumption). For short-term hybrid ARMs used for home purchases, moving from the higher credit score to the lower one was associated with a 7.3 percentage point increase in the estimated default probability (see fig. 6). For longer-term ARMs, payment-option ARMs, and fixed-rate mortgages, the corresponding figures were 3.3, 2.1, and 5.5 percentage points, respectively. LTV ratio. We found that higher LTV ratios were associated with higher probabilities of default. For each product type, we estimated the default probability assuming an LTV ratio close to the 25th percentile for that product (base assumption) and compared this with the estimated default probability assuming an LTV ratio close to the 75th percentile for that product (alternative assumption). For short-term hybrid ARMs used for home purchases, moving from the lower ratio to the higher ratio was associated with a 4.4 percentage point increase in the estimated default probability (see fig. 6). The corresponding figures for longer-term ARMs, payment-option ARMs, and fixed-rate mortgages were 4.7, 6.3, and 3.7 percentage points, respectively. 
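The percentile-swap comparisons used throughout this section follow a single pattern: estimate the default probability with one variable at its base-percentile value, re-estimate it with the variable at the alternative-percentile value, hold everything else fixed, and report the difference in percentage points. A minimal sketch of that computation, assuming a logit model (the coefficient and percentile values below are illustrative placeholders, not GAO's estimates):

```python
import math

def logit_prob(z: float) -> float:
    """Logistic function: maps a linear index to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def percentile_swap_effect(base_z: float, beta: float,
                           base_val: float, alt_val: float) -> float:
    """Change in estimated default probability, in percentage points, when one
    variable moves from its base-percentile value to its alternative value
    while everything else (captured in base_z) is held fixed."""
    return 100.0 * (logit_prob(base_z + beta * alt_val)
                    - logit_prob(base_z + beta * base_val))

# Example: an LTV ratio moving from an assumed 25th percentile (75) to an
# assumed 75th percentile (95), with an illustrative coefficient of 0.03
print(round(percentile_swap_effect(base_z=-2.0, beta=0.03,
                                   base_val=75.0, alt_val=95.0), 1))
```

Because the logistic function is nonlinear, the size of the swap effect depends on where `base_z` sits, which is one reason the same variable shows different marginal effects across product types.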
While some research indicates that anti-predatory lending laws can reduce originations of problematic loans without overly restricting credit, research on state and local anti-predatory lending laws and the views of mortgage industry stakeholders do not provide a consensus view on the potential effects of the bill. The state and local anti-predatory lending laws we reviewed are, in some ways, similar to the bill, but the results of the research on these laws may have limited applicability to the bill for a number of reasons. Mortgage industry and consumer group representatives we interviewed disagreed on the bill’s potential effect on credit availability and consumer protections. For example, mortgage industry representatives said that the safe harbor and assignee liability provisions were too stringent and would restrict and raise the cost of mortgage credit. In contrast, consumer group representatives indicated that the provisions were not strong enough to prevent predatory lending and, thereby, protect borrowers. Several studies have examined the impact of state and local anti-predatory lending laws on subprime mortgage markets. Our review of eight such studies found evidence that anti-predatory lending laws can have the intended effect of reducing loans with problematic features without substantially affecting credit availability, but also that it is difficult to generalize these findings to all anti-predatory lending laws or to the potential effect of the bill. The studies we reviewed fell into two broad categories: those that focused solely on the North Carolina law and those that examined laws in multiple states and localities. In general, the researchers measured the effect of the laws in terms of the volume of subprime originations, the probability of originating a subprime loan, or the probability of originating a loan with predatory characteristics. 
The three studies on the North Carolina law (which was implemented in phases beginning in October 1999 and ending in July 2000) concluded that the law had a dampening effect on subprime originations in that state, but one found that the drop occurred primarily in the types of loans targeted by the law. For example, using data from nine subprime lenders and controlling for a number of demographic and housing market variables, Elliehausen and Staten estimated that subprime originations fell by 14 percent after the law was first implemented. A second study by Quercia, Stegman, and Davis that used an LP data set with broader coverage and used neighboring states as a control group, found that subprime originations declined 3 percent after the law was fully implemented and that subprime originations in four neighboring states without similar laws rose over the same period. Importantly, the authors also determined that 90 percent of the decline in subprime originations resulted from a decrease in refinance loans with one or more “predatory” characteristics, such as prepayment penalties lasting 3 years or more, balloon payments, or LTV ratios over 110 percent. Finally, a study by Burnett, Finkel and Kaul, which used Home Mortgage Disclosure Act (HMDA) data and also used neighboring states as a control group, found a 0.2 percent increase in subprime originations in North Carolina after implementation of the law. Like the Quercia study, the study by Burnett and others concluded that subprime refinance loans fell sharply in North Carolina over the period examined and that states neighboring North Carolina experienced higher percentage increases in total subprime originations. Additionally, the study noted that the volume of subprime originations in North Carolina fell in census tracts that were more than 50 percent minority but rose in other areas. 
The five studies that examined multiple state and local anti-predatory lending laws found mixed results but provide insights into the importance of the specific attributes of the laws. For example, using HMDA data, Ho and Pennington-Cross calculated the percentage change in subprime originations in 10 states with anti-predatory lending laws over periods that captured each state's experience before and after the laws were passed. They compared the changes they found with the corresponding changes during the same periods in a control group of neighboring states without such laws. They found that in 5 of the 10 states (including North Carolina) with anti-predatory lending laws, subprime originations increased less than in the control group, but that in the other 5 states, subprime originations increased more. In another study, Ho and Pennington-Cross developed a legal index to measure the coverage and restrictions of anti-predatory lending laws, and examined how laws in 25 states and 3 localities affected the probability of originating a subprime loan. They found that, controlling for other factors, anti-predatory lending laws can increase, decrease, or have no effect on the flow of mortgage credit. Specifically, they found that laws with broader coverage (i.e., those affecting a larger portion of the market) increased the estimated likelihood of subprime originations; that laws with greater restrictions (i.e., those with stricter limits on high-risk loan features) decreased the estimated likelihood of subprime originations; and that in some instances, these two effects appeared to cancel each other out. As a result, they noted that the design of the law can have an important impact on the availability of credit in the subprime market. For example, the authors hypothesized that the effect of broader coverage may result from borrowers being more comfortable applying for a mortgage where there is a law to protect them from predatory loans. 
A study by Bostic and others built on this research by refining the legal index previously discussed, adding an enforcement dimension to the index, and examining a larger set of laws. The study confirmed the earlier findings regarding the impact of the coverage and restriction provisions of anti-predatory lending laws on the subprime market. Additionally, this study found that the strength of a law's enforcement provisions (e.g., the extent of potential liability for assignees) was not associated with changes in the estimated likelihood of subprime originations. Li and Ernst examined anti-predatory lending laws in 33 states and used LP data on subprime mortgages made from January 1998 through March 2005 to examine the impact of these laws on the origination of loans with predatory features and the cost of subprime credit. They concluded that state anti-predatory lending laws that provided greater consumer protections than HOEPA had the intended effect of reducing subprime mortgages with predatory features. They also concluded that such laws did not lead to any systematic increase in costs to consumers. Pennington-Cross and Ho also examined the impact of predatory lending laws on the cost of subprime credit by reviewing anti-predatory lending laws in 24 states and analyzing HMDA and LP data from 1998 through 2005. They concluded that these laws resulted in, at most, a modest increase to consumers' cost of borrowing. Although the bill is, in some ways, similar to the state and local laws analyzed in these studies, the results of these studies may have limited applicability to it, for a number of reasons. First, the legal indexes used by some researchers to assess the impact of state and local laws are based on an older set of laws that are similar to HOEPA. 
According to one of these researchers, the indexes do not take into account a newer generation of laws that, like the bill, have different thresholds and restrictions and cover products that were previously not common in the marketplace (e.g., low- and no-documentation loans). As a result, evaluating the bill, using these analytical tools, could be problematic. Additionally, the impact of a federal law could be different than the effects of state and local laws. For example, lenders or assignees may choose to exit a state or local market rather than comply with that jurisdiction’s anti-predatory lending law but still conduct business in other markets. However, under a federal law, these entities would not have that option. Finally, prior studies examined the impact of laws during a relatively active period in the subprime lending market. If a law similar to the bill were to be passed in the near future, it would be implemented in the wake of a major contraction in the mortgage market that would likely affect the response of both the mortgage industry and consumers to new lending standards. Mortgage industry representatives and consumer groups we interviewed generally agreed that the bill would have little short-term impact on the mortgage market because of existing market conditions. However, they held different views on the long-term impact that key provisions in the bill would have on consumer access to affordable credit and protection from predatory lending practices. Representatives from both groups generally agreed that the bill would have very little impact on mortgage originations in the current financial environment because the overall primary market was highly constrained, with lenders tightening qualifications for all borrowers and the market for private label MBS virtually nonexistent. In addition, representatives from mortgage industry groups expected that the Federal Reserve’s revisions to Regulation Z could lessen the impact of the bill. 
Specifically, the groups stated that the revisions to Regulation Z would place lender requirements on nonprime loans that were similar to the bill’s safe harbor requirements. For example, both the Regulation Z revisions and the bill’s safe harbor require that borrowers obtaining loans with APRs over certain thresholds provide full documentation of income and assets and qualify for ARMs based on a monthly payment that takes into account scheduled interest rate increases. Mortgage industry representatives we interviewed generally viewed the bill’s safe harbor requirements as overly restrictive and said that these requirements would reduce mortgage options and increase the cost of credit for certain borrowers. Some of these representatives said that lenders would be unwilling to make loans that did not meet the safe harbor requirements. They cited the experience with HOEPA as an example of what might take place if the safe harbor requirements were put in place. Specifically, they noted that since the implementation of HOEPA, very few lenders have been willing to make mortgages considered “high cost” loans under HOEPA’s provisions because they cannot sell them to the secondary market. For example, in 2006, less than 1 percent of mortgages were high cost loans, as defined by HOEPA regulations. The industry representatives also said that specific safe harbor requirements would reduce access to credit for certain types of borrowers. For example, they said that the safe harbor requirement that would prohibit loans with less than full documentation of income and assets could restrict access to credit for borrowers with irregular income streams, such as some small business owners. Some industry representatives acknowledged that many low- and no-documentation mortgages should not have been made, but said that some flexibility should be allowed under this requirement to account for borrowers with nontraditional sources of income. 
In addition, industry representatives said that borrowers who had responsibly used negative amortization loans in the past could face limited mortgage options under the bill, as the safe harbor requirement would prohibit these loans. Some industry representatives acknowledged that negative amortization products had been used inappropriately in recent years to allow some borrowers to buy homes that they might not have been able to afford, but added that prohibiting this feature would adversely impact borrowers who had used this product responsibly. For example, some borrowers with irregular income have taken out negative amortization loans in order to pay minimum amounts when their income was low and higher amounts when their income increased. One mortgage industry participant suggested that one way to address concerns that these loans subject borrowers to payment shock would be to limit the amount by which the mortgage payments could reset. In contrast, representatives from consumer groups that we interviewed generally indicated that the safe harbor requirements would need to be strengthened and applied to a broader range of loans in order to prevent predatory lending practices and protect borrowers. For example, some representatives supported adding more consumer protection features to the bill, such as prohibiting prepayment penalties, balloon payments, and yield spread premiums. They also said that the bill’s safe harbor requirements should be applied to all mortgages, including FHA-insured mortgages and loans with relatively low APRs, because these loans could also contain predatory features. Most of the consumer group representatives said that strengthening safe harbor requirements and applying them more broadly would not significantly affect the cost or availability of credit.
For example, in response to industry concerns that requiring full documentation would restrict some borrowers’ access to credit, consumer group representatives noted that full documentation had already become a marketplace standard. They generally believed that the majority of borrowers, including self-employed consumers, could provide sufficient documentation using their income tax records, but some groups supported limited flexibility in the types of documents that would be accepted. In addition, while industry groups were concerned that prohibiting loans with a negative amortization feature under the bill’s safe harbor provisions could restrict credit to some borrowers, consumer groups supported prohibiting this feature in order to protect consumers from potential payment shock. Some of these representatives acknowledged that negative amortization loans could be suitable for certain borrowers, but they viewed these cases as exceptional and did not think the potential benefits to a small segment outweighed the potential costs to the larger portion of the market. Mortgage industry representatives we interviewed generally said that the bill’s assignee liability provisions would increase the cost of credit for borrowers and deter secondary market participants from reentering the nonprime market. Specifically, these representatives said that the cost of complying with the bill’s assignee liability provisions, including secondary market participants’ cost of due diligence procedures, would increase the cost of credit and cause some secondary market participants to stop securitizing loans. Some industry representatives stated that mortgage originators were better positioned to conduct due diligence to ensure that loans were responsibly underwritten and argued that mortgage reform legislation should focus on enhancing the primary market’s underwriting standards. 
Mortgage industry representatives also said that lack of certainty in what assignees could be held liable for under the bill would deter participants from reentering the secondary market. For example, some representatives noted that the bill did not clearly define the standards that assignees would be held to, such as “ability to repay” and “net tangible benefit.” They cited Georgia’s 2002 anti-predatory lending law as an example of how the lack of clarity concerning assignee liability could adversely impact the market. As we have reported, because of the uncertainty surrounding potential liability under the Georgia law, secondary market participants withdrew from the mortgage market in Georgia until the provisions were repealed. In contrast, consumer group representatives generally believed that enhanced regulation and accountability in the secondary market would provide consumers with greater protections against predatory lending practices. These representatives generally supported strengthening the bill’s assignee liability provisions. For example, some consumer group representatives said that the bill’s assignee liability provisions should not allow for any exemptions from liability, such as allowing assignees to cure a loan (i.e., modify or refinance the loan so that it meets the bill’s minimum lending standards) to avoid liability. They noted that some assignees might choose to cure the relatively few loans that violate the bill’s minimum lending standards, rather than invest the resources in due diligence policies and procedures that would help prevent predatory lending practices. Further, consumer groups said that the bill should not preempt state assignee liability laws because these laws could potentially provide consumers with an ability to seek redress if they obtain a predatory loan. 
Finally, representatives of consumer groups also said that applying the assignee liability provisions more broadly, beyond the bill’s nonqualified mortgages, could also help prevent predatory lending on a wider variety of mortgages. They contended that stronger and broader assignee liability provisions would not significantly impact the cost of or access to credit and would set a standard to which secondary market participants would eventually adapt. Mortgage industry representatives preferred that any federal legislation on mortgage lending preempt all state anti-predatory lending laws, not just assignee liability laws, in order to reduce the cost of and increase the availability of credit. They stated that a uniform set of mortgage standards for lenders would significantly reduce the cost of doing business and that these savings could be passed on to consumers. According to one mortgage industry participant, under the current legal and regulatory environment, lenders’ costs are higher because lenders are required to develop systems to track laws and regulations in up to 50 states, monitor these laws and regulations, and ensure they are in compliance with them. Some industry representatives stated that federal preemption could also lower consumer costs by applying uniform standards and supporting competition between state- and federally licensed mortgage originators. Mortgage industry representatives also said that full federal preemption would provide a uniform set of standards that would renew activity in the secondary market, thereby allowing lenders to make more credit available to consumers. In contrast, consumer group representatives generally believed that federal legislation should not preempt state laws, because consumers benefited from states’ abilities to enact stronger consumer protection laws.
For example, some consumer groups said that in the past, states had responded faster to predatory lending abuses than federal regulators in enacting anti-predatory lending laws, and expected this to continue if a federal bill did not preempt state laws. Further, some of these representatives said that state and federal regulations existed in a complementary framework in other areas, such as civil rights and the environment, and generally did not think that compliance costs would be significant in light of the benefits to consumers and the long-term sustainability of the mortgage market. They viewed states’ experimentation with mortgage reform as an important source of useful information on changes in market conditions and industry responses to different approaches. We provided a draft of this report to the Board of Governors of the Federal Reserve System, Federal Deposit Insurance Corporation, Office of the Comptroller of the Currency, Office of Thrift Supervision, National Credit Union Administration, Department of Housing and Urban Development, Federal Trade Commission, and Securities and Exchange Commission. We received written comments from NCUA, which are summarized below. Appendix III contains a reprint of NCUA’s letter. The Federal Reserve, FDIC, OCC, HUD, and FTC provided technical comments, which we incorporated into this report, where appropriate. In its written comments, NCUA reiterated several of our findings and noted that the findings supported its view that ensuring borrowers have a reasonable ability to repay is in the best interest of credit unions and their members. We are sending copies of this report to the Ranking Member, House Financial Services Committee and other interested parties. We will also send copies to the Federal Reserve, FDIC, OCC, OTS, NCUA, HUD, FTC, and SEC. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. 
If you or your staff have questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our objectives were to (1) assess the proportion of recent nonprime loans that would likely have met and not met the Mortgage Reform and Anti-Predatory Lending Act of 2007’s (bill) safe harbor requirements, and how variables associated with those requirements affect loan performance; and (2) discuss relevant research and the views of mortgage industry stakeholders concerning the potential impact of key provisions of the bill on the mortgage market. The scope of our analysis was limited to nonprime mortgages. To assess the proportions of nonprime loans originated from 2000 through 2007 that would likely have met and not met the bill’s safe harbor requirements, we analyzed data on subprime and Alt-A (nonprime) mortgages from that period. Specifically, we analyzed information from LoanPerformance’s (LP) Asset-backed Securities database, which contains loan-level data on nonagency securitized mortgages in subprime and Alt-A pools. About three-quarters of subprime mortgages were securitized in recent years. For purposes of this report, we defined subprime loans as mortgages in subprime pools and Alt-A loans as mortgages in Alt-A pools. The LP database covers the vast majority of mortgages in nonagency subprime and Alt-A securitizations. For example, for the period 2001 through July 2007, the LP database contains information covering (in dollar terms) an estimated 87 percent of securitized subprime loans and 98 percent of securitized Alt-A loans (see table 6). Nonprime mortgages that were not securitized (i.e., mortgages that lenders held in portfolio) may have different characteristics and performance histories than those that were securitized.
For our analysis, we used a random 2 percent sample of the database that amounted to almost 300,000 loans for the 2000 through 2007 period. Our sample included purchase and refinance mortgages and loans to owner-occupants and investors, and excluded second-lien mortgages. We assessed the reliability of the data by interviewing LP representatives about the methods they use to collect and ensure the integrity of the information. We also reviewed supporting documentation about the database, including LP’s estimates of the database’s market coverage. In addition, we conducted reasonableness checks on the data to identify any missing, erroneous, or outlying figures. We found the data elements we used to be sufficiently reliable. To estimate the proportion of loans that likely would have met and not met the safe harbor requirements, we used variables in the LP database that directly corresponded with the requirements and developed proxies when the database did not contain such variables (see table 7). To compare the demographic characteristics (e.g., race, ethnicity, and income level) of safe harbor and nonsafe harbor loans, we incorporated data from the Census Bureau. More specifically, whenever possible, we linked the zip code for each loan reported in the LP data to an associated census tract in a metropolitan statistical area (MSA). We grouped the zip codes according to the percentage of households that Census identified as black or African-American and Hispanic or Latino. The groupings in our analysis were: (1) less than 5 percent, (2) 5 to 24 percent, (3) 25 to 74 percent, and (4) 75 percent or greater of household populations. We also grouped zip codes according to the median income of the MSA of a given zip code.
The specific groupings in our analysis were low-, moderate-, and upper-income zip codes, defined as those with median incomes that were less than 80 percent, at least 80 percent but less than 120 percent, and 120 percent and above, respectively, of the median income for the associated MSA. To analyze nonsafe harbor loans by borrower credit score, we used the FICO scores in the LP database. FICO scores, generally based on software developed by Fair, Isaac and Company, are a numerical indicator of a borrower’s creditworthiness. The scores range from 300 to 850, with higher scores indicating a better credit history. For our analysis, we used four ranges of scores: 599 and below, 600 to 659, 660 to 719, and 720 and above. To examine factors affecting the performance of nonprime loans, we developed an econometric model to estimate the relationship between variables associated with the safe harbor requirements, as well as other variables, and the probability of a loan defaulting within 24 months of origination. We developed the model using data on mortgages originated from 2000 through 2006 (the latest year for which we could examine 24-month performance). Detailed information about our model and our estimation results are presented in appendix II. To describe relevant research on the bill’s potential effect on the mortgage market, we identified and reviewed empirical studies on the impact of state and local anti-predatory lending laws on key nonprime mortgage indicators, such as subprime mortgage originations and the cost of credit. While we identified a number of such studies, we narrowed our scope to eight studies that used control groups (e.g., comparison states without anti-predatory lending laws) or statistical techniques that controlled for factors other than the laws that could affect lending patterns.
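The demographic, income, and credit-score groupings described in this section can be expressed as simple binning functions. The thresholds below restate the report's categories; the code itself is an illustrative sketch, not part of the original analysis.

```python
def minority_share_group(pct_black_hispanic: float) -> str:
    """Group a zip code by its black/African-American and Hispanic/Latino
    household share, using the four ranges described above."""
    if pct_black_hispanic < 5:
        return "less than 5 percent"
    elif pct_black_hispanic < 25:
        return "5 to 24 percent"
    elif pct_black_hispanic < 75:
        return "25 to 74 percent"
    return "75 percent or greater"

def income_group(zip_median_income: float, msa_median_income: float) -> str:
    """Classify a zip code's median income relative to its MSA's median:
    below 80 percent is low-income, 80 to under 120 percent is
    moderate-income, and 120 percent or above is upper-income."""
    ratio = zip_median_income / msa_median_income
    if ratio < 0.80:
        return "low-income"
    elif ratio < 1.20:
        return "moderate-income"
    return "upper-income"

def fico_group(score: int) -> str:
    """Bucket a FICO score into the four ranges used in the analysis."""
    if score <= 599:
        return "599 and below"
    elif score <= 659:
        return "600 to 659"
    elif score <= 719:
        return "660 to 719"
    return "720 and above"
```

In practice these functions would be applied row by row to the loan-level sample after linking each loan's zip code to the Census data.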
The studies we reviewed fell into two broad categories: three studies that focused solely on North Carolina’s 1999 anti-predatory lending law and five that examined laws in multiple states and localities. In general, the researchers measured the effects of the laws in terms of the volume of subprime originations, the probability of originating a subprime loan, or the probability of originating a loan with predatory characteristics. Our review of these eight studies included an examination of the methodologies used, the data and time periods used, the limitations of the studies, and the conclusions. We also interviewed selected authors to ensure that we interpreted their results correctly and to obtain their views on whether the results from their studies might apply to the potential impact of the bill on the mortgage market. To obtain the views of mortgage industry stakeholders, we reviewed written statements and congressional testimony about the bill by officials from the federal banking regulatory agencies and organizations representing mortgage lenders, mortgage brokers, securitizers, and consumer interests. We also interviewed officials from a number of these organizations, including the Mortgage Bankers Association, American Securitization Forum, American Financial Services Association, American Bankers Association, Independent Community Bankers of America, National Association of Mortgage Brokers, Center for Responsible Lending, National Community Reinvestment Coalition, National Consumer Law Center, National Association of Consumer Advocates, and Consumer Federation of America. In addition, we interviewed officials from a large mortgage lender and a major investment bank involved in the securitization of mortgages. Finally, we interviewed officials from the federal banking regulatory agencies, the Department of Housing and Urban Development (HUD), the Federal Trade Commission (FTC), and the Securities and Exchange Commission (SEC).
We conducted this performance audit from March 2008 to July 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix describes the econometric model we developed to examine the relationship between variables associated with the bill’s safe harbor requirements, as well as other variables, and the probability of a loan entering default. Safe harbor requirements include features related to documentation of borrower income and assets, limits on debt-service-to-income (DTI) ratios, the duration before which any interest rate adjustments may occur, limits on the relationship between a loan’s annual percentage rate and other prevailing interest rates at origination, and prohibitions on mortgages that allow negative amortization. The safe harbor requirements limit features that may increase the risk of default, but they may also restrict the number and types of mortgages lenders are willing to originate. Since the requirements were not in effect during the recent past, we do not know in what ways lenders and securitizers may have responded to their introduction. Therefore, we characterize our evaluation as an assessment of whether mortgages with safe harbor characteristics performed better than those without them, as opposed to an assessment of the effects of the introduction of a safe harbor. Our investigation focused on a recent set of nonprime mortgages and controlled for a variety of loan, borrower, and housing market conditions that are likely to affect mortgage performance.
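The safe-harbor-related features listed above (documentation, DTI limits, rate-adjustment timing, APR spread, and negative amortization) can be sketched as a loan screen. The field names and numeric thresholds below are placeholders for illustration only; they are not the bill's actual limits.

```python
from dataclasses import dataclass

@dataclass
class Loan:
    full_documentation: bool
    dti_ratio: float             # debt-service-to-income, in percent
    years_to_first_reset: float  # 0 for fixed-rate mortgages
    apr_spread: float            # APR minus comparable benchmark rate, in points
    negative_amortization: bool

def meets_safe_harbor(loan: Loan,
                      max_dti: float = 45.0,
                      min_reset_years: float = 5.0,
                      max_spread: float = 3.0) -> bool:
    """Screen a loan against safe-harbor-style features. The threshold
    values are hypothetical placeholders, not the bill's actual limits."""
    return (loan.full_documentation
            and loan.dti_ratio <= max_dti
            and (loan.years_to_first_reset == 0          # fixed-rate loan
                 or loan.years_to_first_reset >= min_reset_years)
            and loan.apr_spread <= max_spread
            and not loan.negative_amortization)
```

Failing any single feature places the loan outside the screen, which mirrors how the report classifies a mortgage as a non-safe harbor loan if it misses any one requirement.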
To do this work, we analyzed a 2 percent random sample of securitized nonprime loans originated from 2000 through 2006 from LoanPerformance’s (LP) Asset-backed Securities database. Our sample comprised the approximately 92 percent of loans for which the associated property was located in an area covered by the Federal Housing Finance Agency’s house price indexes for metropolitan areas. The LP database has been used extensively by regulators and others to examine the characteristics and performance of nonprime loans. The database provides information on loan characteristics, from which we developed variables that indicated or measured relevant safe harbor requirements. We determined the status of each loan 24 months after the month of first payment. We used loan performance history through the end of December 2008. We defined a loan as being in default if it was delinquent by at least 90 days, in the foreclosure process (including loans identified as in real-estate-owned status), paid off after being 90-days delinquent or in foreclosure, or had already terminated with evidence of a loss. We categorized loans as follows: short-term hybrid adjustable rate mortgages (ARM) (essentially 2/28 and 3/27 mortgages), fixed-rate mortgages, payment-option ARMs, and other longer-term ARMs (i.e., ARMs with 5-, 7-, and 10-year fixed-rate periods). We included only first-lien loans for which the borrower is identified as an owner-occupant, and we estimated default probabilities for purchase money loans separately from loans for refinancing except for payment-option ARMs, for which we examined purchase and refinancing loans together. Our primary reason for examining performance by mortgage type is that borrower incentives and motivations may vary for loans with different characteristics. For example, short-term hybrid ARMs provide a strong incentive for a borrower to exit from a mortgage by the time the interest rate begins to reset.
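The 24-month default definition above can be expressed as a small classifier. The field names and status codes here are illustrative assumptions, not the LP database's actual schema.

```python
def in_default_at_24_months(status: str,
                            days_delinquent: int,
                            paid_off_after_serious_delinquency: bool,
                            terminated_with_loss: bool) -> bool:
    """Apply the report's default definition at month 24: at least 90 days
    delinquent, in the foreclosure process (including real-estate-owned
    status), paid off after being 90-days delinquent or in foreclosure,
    or already terminated with evidence of a loss. Status codes are
    hypothetical placeholders."""
    return (days_delinquent >= 90
            or status in ("foreclosure", "real_estate_owned")
            or paid_off_after_serious_delinquency
            or terminated_with_loss)
```

A loan meeting any one of these conditions counts as a default in the model; all other loans (current, mildly delinquent, or prepaid while current) do not.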
We estimated separate default models for each mortgage type, although the general underlying structure of the models was similar. We used a logistic regression model to explain the probability of loan default, based on the observed pattern of actual defaults and the values of safe harbor variables and a subset of other variables known to be associated with loan performance (see table 8). Many loan and borrower characteristics are likely to influence the status of a mortgage over time. Some factors describe conditions at the time of mortgage origination, such as the loan-to-value (LTV) ratio and the borrower’s credit score. Other important factors may change over time, sometimes dramatically, without being observed by a lender, loan servicer, or researcher. For instance, an individual household’s income may change due to job loss, increasing the probability of default. Other conditions vary over time in ways that can be observed, or at least approximated. For example, greater house price appreciation (HPA) contributes to greater housing equity, thus reducing the probability that a borrower, if facing financial distress, views defaulting on a loan as a better option than prepaying. We focused on whether a loan defaulted within 24 months as our measure of performance because a large proportion of nonprime borrowers had hybrid ARMs and prepaid their loans (e.g., by refinancing) within 2 years. Using a 24-month time frame allowed us to include these loans in our model, as well as loans originated in 2006, a year in which many nonprime loans were originated. For reasons described below, some of the variables associated with the safe harbor requirements are included in all four models, while others are only included in certain models: Full documentation of borrower income and assets: This variable is in all four models. Negative amortization feature: This variable is only in the model for longer-term ARMs.
We did not include it in the models for the other mortgage types because the negative amortization feature was essentially never present (in the case of fixed-rate mortgages and short-term hybrid ARMs) or was essentially always present (in the case of payment-option ARMs). The lack of variation within these mortgage types made estimating the marginal effects of the negative amortization variable problematic. Fully indexed proxy: This variable is in three of the models, but we do not include it in the model for fixed-rate mortgages because it is only relevant to loans with adjustable interest rates. DTI ratio: In the context of the bill’s safe harbor requirements, this variable would only apply to short-term hybrid ARMs and payment-option ARMs. However, we include it in all four models because the DTI ratio is an important measure of the borrower’s ability to repay. Spread over relevant interest rate index: In the context of the bill’s safe harbor requirements, this variable would only apply to short-term hybrid ARMs. However, we include it in all four models because loans with higher interest rates may be at greater risk of default due to their higher monthly payments. Tables 9 through 12 provide information on the number of loans and mean values for each of the mortgage types for which we estimated default probabilities. Short-term hybrid ARMs were the most prevalent type of mortgage, and refinance loans were more prevalent than purchase loans. In addition, more loans were originated in the later portion of the time period we examined than the earlier portion. Default rates were highest for short-term hybrid ARMs, lower for loans originated in the middle years of the time period, and higher for purchase loans than for refinance loans. The results of our analysis are presented in tables 13 through 16.
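The general form of the logistic specification, along with the marginal-effect transformation (the change in estimated default probability when a variable moves one standard deviation above its sample mean, with all other variables held at their means), can be sketched as follows. The coefficient values, covariate ordering, and data below are made-up illustrations, not the report's estimates.

```python
import numpy as np

def logistic(z: float) -> float:
    """Logistic transform: P = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def default_probability(x: np.ndarray, beta: np.ndarray, intercept: float) -> float:
    """Estimated P(default within 24 months) for one loan's covariates x."""
    return logistic(intercept + float(x @ beta))

def marginal_effect(X: np.ndarray, beta: np.ndarray, intercept: float, j: int) -> float:
    """Change in estimated default probability when variable j moves from
    its sample mean to its mean plus one standard deviation, with all
    other variables held at their means."""
    means = X.mean(axis=0)
    shifted = means.copy()
    shifted[j] += X[:, j].std()
    return (default_probability(shifted, beta, intercept)
            - default_probability(means, beta, intercept))

# Hypothetical covariates: [full documentation indicator, combined LTV,
# FICO score / 100, spread over Treasury rate] -- values are made up.
beta = np.array([-0.6, 0.03, -0.8, 0.25])
x = np.array([1.0, 80.0, 6.2, 2.0])
p = default_probability(x, beta, intercept=-1.5)
```

Expressing each variable's influence as a one-standard-deviation shift puts variables measured on different scales on a comparable footing, which is what allows the report to compare marginal effects within and across mortgage types.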
We ran seven regressions: separate purchase loan and refinance loan regressions for three of the product types (short-term hybrid ARMs, fixed-rate mortgages, and longer-term ARMs) and a single regression combining purchase and refinance loans for payment-option ARMs. For this set of regressions, we only included the 63 percent of loans for which DTI information was available. We also ran a second set of regressions that used all of the loans for each mortgage type and binary variables indicating DTI ranges, including categories for missing information. We found that the results were very similar to those for the first set of regressions. We presented coefficient estimates, as well as a transformation of the coefficients into a form that can be interpreted as the marginal effect of each variable on the estimated probability of default. This marginal effect is the calculation of the change in the estimated probability of default that would result if a variable’s standard deviation were added to that variable’s mean value, while all other variables are held at their mean values. This permits a comparison of the impact of different variables within and across mortgage types. In general, combined LTV ratio, HPA, and FICO score had substantial marginal effects across different mortgage types and loan purposes. Specifically, higher LTV ratios, lower HPA, and lower FICO scores were associated with higher likelihoods of default. The observed effects for DTI ratio were relatively small. Among safe harbor characteristics, documentation of borrower income and assets and a loan’s spread over the applicable Treasury rate had substantial marginal effects. Less than full documentation and higher spreads were associated with higher default probabilities. Our results for full documentation of borrower income and assets were not sensitive to alternative specifications. 
Including the loan amount as an additional variable, adding or substituting different interest rates, and changing the form in which house price appreciation or FICO scores entered the model all had no effect on our general conclusion that the presence of full documentation was strongly associated with lowering the probability of default. Our conclusion concerning high cost loans—that larger spreads over specified Treasury rates at the time of origination are associated with increased default probability—is somewhat more nuanced. In some respects, the spread variable is capturing something about the effect of higher interest rates generally. For example, alternative specifications which substituted the initial interest rate or the Treasury rate for the spread variable yielded similar results. However, when the Treasury rate and the spread variables are included in the model, both variables are significant and have large marginal effects. As an alternative specification for short-term hybrid ARMs, we included a variable indicating whether each mortgage was a safe harbor or a non-safe harbor loan, in contrast to including variables for separate safe harbor requirements. We found that this variable had a small marginal effect, most likely because many non-safe harbor loans met some of the safe harbor requirements. In particular, a substantial percentage of non-safe harbor loans had full documentation of borrower income and assets but failed to meet other safe harbor requirements. In addition to the individual named above, Steve Westley, Assistant Director; Bill Bates; Stephen Brown; Emily Chalmers; Rudy Chatlos; Randy Fasnacht; Tom McCool; John McGrail; Mark Metcalfe; Rachel Munn; Susan Offutt; Jasminee Persaud; José R. Peña; Scott Purdy; and Jim Vitarello made key contributions to this report.
H.R. 3915 (2007), a bill introduced, but not enacted by the 110th Congress, was intended to reform mortgage lending practices to prevent a recurrence of problems in the mortgage market, particularly in the nonprime market segment. The bill would have set minimum standards for all mortgages (e.g., reasonable ability to repay) and created a "safe harbor" for loans that met certain requirements. Securitizers of safe harbor loans would be exempt from liability provisions, while securitizers of non-safe harbor loans would be subject to limited liability for loans that violated the bill's minimum standards. In response to a congressional request, this report discusses (1) the proportions of recent nonprime loans that likely would have met and not met the bill's safe harbor requirements and factors influencing the performance of these loans, and (2) relevant research and the views of mortgage industry stakeholders concerning the potential impact of key provisions of the bill on the availability of mortgage credit. To do this work, GAO analyzed a proprietary database of securitized nonprime loans, reviewed studies of state and local anti-predatory lending laws, and met with financial regulatory agencies and key mortgage industry stakeholders. GAO estimates that almost 75 percent of securitized nonprime mortgages originated from 2000 through 2007 would not have met H.R. 3915's safe harbor requirements, which include, among other things, full documentation of borrower income and assets, and a prohibition on mortgages for which the loan principal can increase over time. The extent to which mortgages met specific safe harbor requirements varied by origination year. For example, the percentage of nonprime mortgages with less than full documentation rose from 27 percent in 2000 to almost 60 percent in 2007. 
Consistent with the consumer protection purpose of the bill, GAO found that certain variables associated with the safe harbor requirements influenced the probability of a loan entering default (i.e., 90 or more days delinquent or in foreclosure) within 24 months of origination. For example, on the basis of statistical analysis, GAO estimates that, all other things being equal, less than full documentation was associated with a 5 percentage point increase in the likelihood of default for the most common type of nonprime mortgage product. GAO also found that other variables--such as house price appreciation, borrowers' credit scores, and the ratio of the loan amount to the house value--were associated with default rates. Research on state and local anti-predatory lending laws and the perspectives of mortgage industry stakeholders do not provide a consensus view on the bill's potential effects on the availability of mortgage credit. Some research indicates that anti-predatory lending laws can have the intended result of reducing loans with problematic features without substantially affecting credit availability. However, it is difficult to generalize these findings to all anti-predatory lending laws or the potential effect of the bill, in part, because of differences in the design and coverage of these laws. Mortgage industry and consumer group representatives with whom GAO spoke disagreed on the bill's potential effect on credit availability and consumer protection. For example, mortgage industry officials generally said that the bill's safe harbor, securitizer liability, and other provisions would limit mortgage options and increase the cost of credit for nonprime borrowers. In contrast, consumer groups generally stated that these provisions needed to be strengthened to protect consumers from predatory loan products.
Before 1978, the U.S. airline industry was tightly regulated. The federal government controlled what fares airlines could charge and what cities they could serve. Legislatively mandated to promote the air transport system, the Civil Aeronautics Board believed that passengers traveling shorter distances—more typical of travel from small and medium-sized communities—would not choose air travel if they had to pay the full cost of service. Thus, the Board set fares relatively lower in short-haul markets and higher in long-haul markets than would be warranted by costs. In effect, long-distance travel subsidized short-distance markets. In addition, the Board did not allow new airlines to form and compete against the established carriers. Concerned that government regulation had caused fares to be too high in many heavily traveled markets, made the airline industry inefficient, and inhibited its growth, the Congress deregulated the industry. The Airline Deregulation Act of 1978 phased out the government’s control over fares and service but did not change the government’s role in regulating and overseeing air safety. Deregulation was expected to result in (1) lower fares at large-community airports, from which many trips are long-distance, and somewhat higher fares at small- and medium-sized-community airports; (2) increased competition from new airlines entering the market; and (3) greater use of turboprop (propeller) aircraft by airlines in place of jets in smaller markets that could not economically support jet service. In 1990, at the request of this Committee, we reported on the trends in airfares since deregulation for airports serving small, medium-sized, and large communities. For the 112 airports we reviewed, we found that overall fares had fallen not only at airports serving large communities, as was expected, but at airports serving small and medium-sized communities as well. 
We noted, however, that despite the overall trend toward lower airfares, some small- and medium-sized-community airports had experienced substantial increases in fares following deregulation, especially in the Southeast. Our current report on changes in airfares, service, and safety since airline deregulation updated this analysis for the same 112 airports. We have also reported on several other issues concerning airfares since deregulation, including the effects of market concentration and the industry’s operating and marketing practices on fares. These reports are listed at the end of this statement. As of the first 6 months of 1995, airfares overall continued to be below what they were in 1979 for airports serving small, medium-sized, and large communities. Comparing full-year data for 1979 and 1994, the fares per passenger mile, adjusted for inflation, were about 9 percent lower for small-community airports, 11 percent lower for medium-sized-community airports, and 8 percent lower for large-community airports. Despite the general trend toward lower fares, however, fares at small- and medium-sized-community airports have remained consistently higher than fares at airports serving large communities, largely because of the economics associated with traffic volume and trip distance. As the volume of traffic and average length of haul increase, the average cost per passenger mile decreases, allowing for lower fares. Airports serving small and medium-sized communities tend to have fewer heavily traveled routes and shorter average distances, resulting in higher fares per passenger mile compared with those of large-community airports. Nevertheless, fares have fallen since deregulation for most of the airports in our sample. Of the 112 airports that we reviewed, 73 have lower fares, while 33 have higher fares. 
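The "fares per passenger mile, adjusted for inflation" comparison involves only simple arithmetic. The sketch below shows that arithmetic; the fare, mileage, and consumer price index values are hypothetical stand-ins, not figures from our analysis.

```python
# Arithmetic behind the "fares per passenger mile, adjusted for
# inflation" comparison. All numbers below are hypothetical.

def real_fare_per_mile(fare, miles, cpi, cpi_base):
    """Fare per passenger mile expressed in base-year dollars."""
    return (fare / miles) * (cpi_base / cpi)

# Hypothetical averages for one airport group, 1979 versus 1994.
fpm_1979 = real_fare_per_mile(fare=120.0, miles=800, cpi=72.6, cpi_base=72.6)
fpm_1994 = real_fare_per_mile(fare=210.0, miles=800, cpi=148.2, cpi_base=72.6)

pct_change = (fpm_1994 - fpm_1979) / fpm_1979 * 100
print(f"Real fare per passenger mile changed by {pct_change:.1f} percent")
```

In this hypothetical example the nominal fare rises, but the inflation-adjusted fare per mile falls, which is the pattern the comparisons above describe.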
Specifically, fares have declined at 36 of the 49 airports serving small communities, 19 of the 38 airports serving medium-sized communities, and 18 of the 25 airports serving large communities. The overall trend toward lower fares since deregulation has resulted in large part from increased competition, spurred in many cases by the entry of new airlines. The average number of large airlines serving the medium-sized-community airports in our sample, for example, increased by over 50 percent between 1978 and 1994, while the average number of commuter carriers serving these airports increased by about 40 percent. Low-cost airlines, such as America West and Southwest Airlines, have accounted for much of this new entry, resulting in substantially lower fares at airports in the West and Southwest, regardless of the size of the community served. In addition, the established airlines’ transition to hub-and-spoke systems following deregulation has increased competition at many airports serving small and medium-sized communities. By bringing passengers from multiple origins (the spokes) to a common point (the hub) and placing them on new flights to their ultimate destinations, these systems provide for more frequent flights and more travel options than did the direct “point-to-point” systems that predominated before deregulation. Thus, instead of having a choice of a few direct flights between their community and a final destination, travelers departing from a small community might now choose from among many flights by several airlines through different hubs to that destination. Nevertheless, while fares have fallen at the majority of airports in our sample, they have risen substantially for travel out of several airports. As appendix I shows, those airports that have experienced the largest fare increases—over 20 percent—mostly serve small and medium-sized communities in the Southeast and Appalachia. 
In contrast to those airports in the West and Southwest that have experienced substantial declines in fares, these airports tend to be dominated by one or two higher-cost airlines. For example, Delta accounted for nearly 90 percent of the passenger enplanements in 1994 at the airport serving Jackson, Mississippi, where fares have risen by over 25 percent since deregulation. By contrast, three low-cost, new entrant airlines—America West, Reno Air, and Southwest—accounted for about 65 percent of the enplanements in 1994 at the airport serving Reno, Nevada, where fares have fallen by about 21 percent since deregulation. The more widespread entry of low-cost airlines at airports in the West and Southwest in the nearly two decades since deregulation—and the resulting geographic differences in fare trends—stems primarily from stronger economic growth, less airport congestion, and more favorable weather conditions in those regions, compared to the East and Southeast. For example, the average annual increase in employment between 1979 and 1993 for Reno, Nevada, was 2.6 percent, which compares with an average annual increase of 0.9 percent for the communities in the Southeast and Appalachia whose airports have experienced an increase in fares of over 20 percent since deregulation. Nevertheless, over the past 2 years, a few new entrant airlines have attempted to initiate low-cost, low-fare service in the East. The results have been mixed. In early 1994, for example, Continental Airlines created a separate, low-cost service in the East, commonly referred to as “Calite.” Largely because it grew too rapidly and was unable to compete successfully against USAir and Delta, Calite failed and was terminated in early 1995. As a result of the loss of competition brought by Calite, the largest fare increases during the first 6 months of 1995 occurred at airports in the East, primarily at small- and medium-sized communities in North Carolina and South Carolina.
More recently, other low-cost carriers have emerged in the East. The most successful of these to date has been Valujet. However, Valujet has begun to experience some of the problems of operating in the East, such as difficulties in obtaining scarce take-off and landing slots at congested airports. Even so, Valujet’s success has sparked competitive responses from the dominant airlines in the East. Delta, for example, plans to initiate a separate, low-cost operation of its own in the East later this year. However, because most of Valujet’s growth occurred in the second half of 1995 and the competitive responses of other airlines are only beginning to unfold, data are not yet available to determine the extent to which Valujet has affected fares in the East, particularly at airports serving small and medium-sized communities that have yet to benefit from the overall trend toward lower airfares since deregulation. Most communities served by the airports in our sample have more air service today than they did under regulation. Seventy-eight percent of the small and medium-sized-community airports have had an increase in the number of departures, and every large-community airport has more departures. Overall, the number of departures has increased by 50 percent for small-community airports; 57 percent for medium-sized-community airports; and 68 percent for large-community airports. In addition, the overall number of available seats has increased for all three airport groups. However, because of the substitution of turboprops for jets in many markets serving small and medium-sized communities following deregulation, the increase in the number of available seats has been less dramatic for those communities than the increase in departures. 
For example, although the number of departures has increased by 50 percent for small-community airports, the number of seats has increased by only 15 percent—an increase that barely exceeds the overall increase in population over the past two decades at the communities served by these airports. Because of the greater use of turboprops, some airports serving small and medium-sized communities have actually had a decrease in the number of available seats even though the number of departures has increased. The airport serving Bismarck, North Dakota, for example, has had a 23-percent decrease in the number of seats even though the number of departures has increased by 54 percent. By comparison, every large-community airport has had an increase in the number of seats, and in some cases—like Phoenix’s Sky Harbor Airport and Houston’s Hobby Airport—that increase exceeds 300 percent. In addition, several other airports serving small and medium-sized communities have experienced a decline in the number of both departures and seats. The communities that these airports serve—including Duluth, Minnesota; Green Bay, Wisconsin; Moline, Illinois; and Rapid City, South Dakota—are located primarily in the Upper Midwest, where economic growth has been relatively slow. In some cases, the communities served by these airports have contracted. For example, the average annual change in population for Moline, Illinois, between 1979 and 1993 was –0.5 percent. For the three communities in our sample whose airports have experienced the sharpest decline in departures and seats—Lincoln, Nebraska; Rochester, Minnesota; and Sioux Falls, South Dakota—the average annual growth rate during this period was only 0.4 percent in population, 1.3 percent in personal income, and 1.4 percent in employment. By comparison, for Phoenix, Arizona, the average annual growth rate was 3.0 percent in population, 3.7 percent in personal income, and 3.7 percent in employment.
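Since available seats equal departures times average seats per departure, the growth figures above imply how much average aircraft size fell. A quick arithmetic check, using the percentages cited in the text:

```python
# If departures grew 50 percent while seats grew only 15 percent,
# average seats per departure must have fallen. The same check applies
# to the Bismarck figures above (+54% departures, -23% seats).

def seats_per_departure_change(departure_growth, seat_growth):
    """Percent change in average seats per departure, given growth
    rates expressed as fractions (e.g., 0.50 for +50 percent)."""
    return ((1 + seat_growth) / (1 + departure_growth) - 1) * 100

small_airports = seats_per_departure_change(0.50, 0.15)
bismarck = seats_per_departure_change(0.54, -0.23)
print(f"Small-community airports: {small_airports:.0f}% seats per departure")
print(f"Bismarck: {bismarck:.0f}% seats per departure")
```

The first figure implies average capacity per departure fell by roughly a quarter at small-community airports, consistent with the shift from jets to turboprops.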
Measuring the overall changes in air service quality since deregulation is more difficult than measuring the changes in quantity. Such an assessment requires, among other things, a subjective weighting of the relative importance of several variables that are generally considered dimensions of quality. These variables are the number of (1) departures and available seats, (2) destinations served by nonstop flights, (3) destinations served by one-stop flights and the efficiency of the connecting service, and (4) jet departures compared with the number of turboprop departures. We found that large-community airports, largely because of their central role in hub-and-spoke networks, have not only had an increase in the number of departures but have also experienced a nearly 25-percent increase in the number of cities served by nonstop flights. In addition, while the share of departures involving jets at large-community airports has decreased slightly with the greater use of turboprops, the actual number of jet departures has increased by 47 percent for airports serving large communities. For airports serving small and medium-sized communities, the picture is much less clear. For these airports, hub-and-spoke networks have resulted in more departures and more and better one-stop service. However, because much of this service is to hubs via turboprops, small and medium-sized communities have few destinations served by nonstop flights and relatively less jet service. For the small-community airports in our sample, for example, the number of cities accessible via nonstop service has declined by 7 percent since deregulation while the percentage of departures involving jets fell from 66 percent in 1978 to 39 percent in 1995. On the other hand, the number of cities accessible via one-stop service has increased by about 10 percent and the efficiency of that service has improved substantially as a result of the greater number of departures. 
Weighting the value placed on these changes depends on a subjective determination that will vary by individual. As a result, it is difficult to judge whether smaller communities such as Fayetteville, North Carolina, have better air service today. Even though the number of destinations served from Fayetteville’s airport has declined from nine in 1978—including daily service to Washington, D.C.—to two in 1995, those two cities (Atlanta, Georgia, and Charlotte, North Carolina) are major hubs. When service to these hubs is combined with more frequent turboprop service to and from Fayetteville, the result is a substantial increase in one-stop connections and a corresponding decrease in layover times between flights for residents of Fayetteville. An assessment of service quality for small and medium-sized communities is further complicated because it is not possible to convert each dimension of quality into a common measure, such as total travel time. Although most of the dimensions can be measured in terms of travel time, one cannot: the perceived levels of amenities and comfort that travelers associate with the different types of turboprops and jets. As a result, developing a formula that combines the various factors to produce a single, objective “quality score” is problematic. Nevertheless, as appendix II shows, when we considered the airports in our sample that had either a positive or negative change in every quality dimension, we found not only that large-community airports have better air service today than they did under regulation but that geographical differences exist as well. Fast-growing communities of all sizes in the West, Southwest, Upper New England, and Florida have better service, while some small and medium-sized communities in the Upper Midwest and Southeast—areas of the country that have experienced relatively slow economic growth over the last two decades—are worse off today. 
In a recent study of the nation’s smallest airports, which account for approximately 3 percent of the total passenger enplanements in the United States, the Department of Transportation has found trends in fares and service similar to those that we observed, and the study’s conclusions are consistent with our findings. Because we were interested in fare trends at individual airports, we limited the airports we examined to those that had sufficient numbers of tickets to ensure that the results were statistically meaningful. As a result, we excluded the airports serving the nation’s smallest communities. We believe that the Department of Transportation’s study could therefore serve as a valuable complement to our analysis. Since the 1940s, the rate of airline accidents in the United States has been declining. Following the introduction of jet aircraft in the late 1950s (e.g., the Boeing 707) and second-generation jets in the 1960s (e.g., the Boeing 737), this long-term decline in the accident rate accelerated. By the late 1980s, only a small number of airline accidents occurred each year, and as a result, the rate of decline has slowed in recent years. In addition, the overall accident rate for commuter carriers has declined by 90 percent over the last two decades, largely due to more advanced aircraft technology and better pilot training. As appendix III shows, this general trend toward improved safety is evident for all three airport groups that we reviewed, especially for airports serving medium-sized communities. Specifically, the rate of accidents at airports serving small communities fell from 0.47 accidents per 100,000 departures in 1978 to 0.14 accidents per 100,000 departures in 1994. At medium-sized-community airports, the rate fell from 1.29 accidents per 100,000 departures to 0.00 in 1994 because no accidents were recorded at those airports in 1994.
Finally, at airports serving large communities, the rate fell from 0.41 accidents per 100,000 departures to 0.14 in 1994. However, as appendix III also shows, an increase of just one or two accidents in a given year can cause a significant fluctuation in accident rates. Thus, while it is true that turboprops do not have as good a safety record as the larger jets they replaced in many markets serving small and medium-sized communities, this fluctuation in accident rates makes it difficult to discern any impact of the increasing use of turboprops on relative safety between the airport groups. Our attempts to smooth the data—employing, for example, such common practices as calculating a 3-year moving average—did not reveal any trends between airport groups. Our analysis of accidents on routes to and from the airports in our sample was similarly inconclusive in terms of identifying any differences in the trends between airport groups. Mr. Chairman, this concludes our prepared statement. We would be glad to respond to any questions that you or any member of the Committee may have.

(Appendixes I and II, which list fare and service changes at individual airports serving small (S), medium-sized (M), and large (L) communities, are omitted here.)

Related GAO Products

Airport Competition: Essential Air Service Slots at O’Hare International Airport (GAO/RCED-94-118FS, Mar. 4, 1994).
Airline Competition: Higher Fares and Less Competition Continue at Concentrated Airports (GAO/RCED-93-171, July 15, 1993).
Computer Reservation Systems: Action Needed to Better Monitor the CRS Industry and Eliminate CRS Biases (GAO/RCED-92-130, Mar. 20, 1992).
Airline Competition: Effects of Airline Market Concentration and Barriers to Entry on Airfares (GAO/RCED-91-101, Apr. 26, 1991).
Airline Competition: Weak Financial Structure Threatens Competition (GAO/RCED-91-110, Apr. 15, 1991).
Airline Competition: Fares and Concentration at Small-City Airports (GAO/RCED-91-51, Jan. 18, 1991).
Airline Deregulation: Trends in Airfares at Airports in Small and Medium-Sized Communities (GAO/RCED-91-13, Nov. 8, 1990).
Airline Competition: Industry Operating and Marketing Practices Limit Market Entry (GAO/RCED-90-147, Aug. 29, 1990).
Airline Competition: Higher Fares and Reduced Competition at Concentrated Airports (GAO/RCED-90-102, July 11, 1990).
Airline Competition: DOT’s Implementation of Airline Regulatory Authority (GAO/RCED-89-93, June 28, 1989).
Airline Service: Changes at Major Montana Airports Since Deregulation (GAO/RCED-89-141FS, May 24, 1989).
Airline Competition: Fare and Service Changes at St. Louis Since the TWA-Ozark Merger (GAO/RCED-88-217BR, Sept. 21, 1988).
Competition in the Airline Computerized Reservation Systems (GAO/T-RCED-88-62, Sept. 14, 1988).
Airline Competition: Impact of Computerized Reservation Systems (GAO/RCED-86-74, May 9, 1986).
GAO discussed changes that have occurred in domestic aviation since the deregulation of the airline industry, focusing on changes in airline fares, service quantity and quality, and safety. GAO noted that: (1) increased competition and especially the entry of new airlines has resulted in lower air fares than before deregulation at most airports; (2) fares have risen at some airports, many of which are dominated by one or two airlines; (3) most airports have more and better quality air service, in terms of number of departures and available seats, available now than they did before deregulation, although some airports serving small- and medium-sized communities have experienced decreases in service; and (4) air travel safety has improved since deregulation, and there were no statistically significant differences in air safety rates among airports serving small, medium, or large communities.
MARAD is responsible for overall policy direction of the Academy. The Maritime Administrator issues Maritime Administrative Orders (MAO) and bulletins establishing policies and providing guidelines for Academy operations. Additionally, the MARAD Chief Financial Officer (CFO) issues directives to provide guidance for conducting Academy financial operations. Since 2010, there have been two MARAD CFOs with an acting CFO between their appointments. From 2009 through 2011, MARAD relied on a Fiscal Oversight Board to assist MARAD in the oversight of the Academy’s operations. The Academy Superintendent is responsible for day-to-day management of the Academy and reports directly to the head of MARAD, the Maritime Administrator. Since September 2008, there have been three Academy Superintendents, with the Academic Dean serving as Interim Superintendent between Superintendent appointments. Various assistant superintendents report to the Superintendent, and various directors of departments and offices report to the assistant superintendents. The Office of Academy Operations is led by the Academy’s CFO, who reports directly to the MARAD CFO. Additionally, since our 2009 report, four Academy support functions report directly to MARAD headquarters offices. As of September 30, 2011, the Academy was also affiliated with two nonappropriated fund instrumentalities (NAFI) intended to assist the Academy in providing programs and services primarily for Academy midshipmen and employees. One of these NAFIs, the Employee Association, is intended to promote and support the interests of Academy personnel. The other is the Regimental Morale Fund Association, authorized to support morale, welfare, and recreation activities for the Academy’s midshipmen. In addition, as of September 30, 2011, four other NAFIs were in transition for closure (see table 1). Additionally, the Academy also received financial support from two private foundations, the U.S.
Merchant Marine Academy Alumni Association and Foundation and the Sailing Foundation. These foundations use funds provided by Academy alumni to provide financial support for the Academy’s charitable, scientific, and educational activities. See figure 1 for an overview of DOT, MARAD, and Academy organizational relationships. Our 2009 report on the Academy identified numerous internal control deficiencies as well as weaknesses in the Academy’s control environment. Specifically, we found that flawed design and implementation of internal controls were the root causes of the Academy’s inability to prevent or detect numerous instances of improper and questionable sources and uses of funds that we identified. Additionally, we found that the Academy lacked an accountability structure that clearly defined organizational roles and responsibilities; policies and procedures for carrying out its financial stewardship responsibilities; an oversight and monitoring process; and periodic, comprehensive financial reporting. As a result, our 2009 report included a total of 47 recommendations for corrective action. Of the 47 recommendations, we made 1 overarching recommendation concerning actions needed for the Academy to establish effective overall internal control. Specifically, we recommended that the Secretary of Transportation direct the Administrator of MARAD, in coordination with the Superintendent of the Academy, to establish a comprehensive, risk-based internal control system to address the core causes of the control deficiencies identified in our 2009 report, including delineating the roles and responsibilities of management and employees to establish and maintain a positive and supporting attitude toward internal control and conscientious management, and the responsibility of managers to monitor control activities.
The remaining 46 recommendations related to control deficiencies associated with the following activities: Academy training vessel use, personal service acquisitions, NAFI camps and clinics using Academy facilities, midshipmen fee accountability, accountability for Academy reserves, Academy and NAFI governance structure, financial reporting, fund accountability, and capital asset repairs and improvements. Our review of the Academy’s and MARAD’s efforts to address the issues reported in our 2009 report found that the Academy and MARAD had not yet established a comprehensive, risk-based system of internal control at the Academy. The Academy and MARAD both focused their initial efforts on the more readily correctible deficiencies in the Academy’s controls over specific activities. As a result, the Academy and MARAD had made substantial progress in addressing weaknesses related to specific control activities by successfully implementing 32 of the 46 control deficiency-related recommendations identified in our 2009 report. For example, the corrective actions taken to improve controls were sufficient for us to conclude that all recommendations related to training vessel use, personal service acquisitions, accountability for Academy reserves, and NAFI camps and clinics using Academy facilities were successfully implemented. Our review found that actions taken by the Academy and MARAD through September 30, 2011, were not sufficient to fully address our overarching recommendation related to the establishment of a comprehensive, risk-based internal control system at the Academy, a positive and supportive attitude toward internal control, and an associated system of monitoring internal control effectiveness.
Standards for Internal Control in the Federal Government provides that internal control should (1) serve as the first line of defense in preventing and detecting errors and fraud, (2) provide for an assessment of external and internal risks to the entity, and (3) provide for internal control monitoring. Monitoring is critical to ensure that findings of audits or other reviews are promptly resolved and to assess the ongoing effectiveness of controls. However, our review found that the Academy had not yet taken actions sufficient to address the core cause of many of the control deficiencies identified in our 2009 report. Understanding the causes of internal control deficiencies is critical to designing effective, risk-based controls that can help prevent questionable transactions in the future. For example, as discussed later in this report, the Academy took action to correct errors we identified in its accounting for certain capital asset repairs and improvements, but had not yet taken action to conduct an analysis, as we recommended, to identify and address the underlying causes of these errors. Nonetheless, the Academy and MARAD have taken a number of actions that represent an important first step toward focusing the top-level attention and accountability needed to establish an effective overall system of internal control for the Academy. For example, in January 2011, MARAD established and filled an Internal Control Officer (ICO) position for the Academy. The Academy ICO, who is also the Academy’s Chief Information Officer, reports to MARAD’s Internal Controls Program Manager and is responsible for coordinating and leading the Academy’s reviews of internal controls in accordance with Office of Management and Budget (OMB) Circular A-123 and the Federal Managers’ Financial Integrity Act of 1982 (FMFIA). 
Also, in September 2011, MARAD issued guidance in MAO 400-11 intended to help clarify the Academy’s oversight role and responsibilities with respect to the Academy’s affiliated NAFIs. Despite these completed actions, until Academy and MARAD officials take additional actions to fully address our recommendation in this area, their ability to ensure that Academy and MARAD management’s objectives are carried out, including proactively identifying and correcting control deficiencies in a timely fashion, will continue to be impaired. Both the Academy and MARAD focused their initial efforts on the more readily correctible deficiencies in the Academy’s specific controls, as discussed later in this report. However, while these efforts are necessary and have a significant impact on internal control, it is important that the Academy and MARAD address the long-standing, deeply rooted causes of the specific control deficiencies we identified and establish processes to ensure that effective internal control is fully established, maintained, and monitored over time. In this regard, it is also important that the Academy and MARAD provide priority attention and focus not only on addressing questionable transactions we identified in our prior report, but also on the underlying causes of the specific errors we identified. As of September 30, 2011, Academy and MARAD officials had taken sufficient actions to address 32 of our 46 prior recommendations intended to address specific deficiencies in the Academy’s control activities. For example, officials took steps to fully close all of our previous recommendations related to controls over training vessel use, personal service acquisitions, accountability for Academy reserves, and NAFI camps and clinics using Academy facilities. For the 14 remaining recommendations, we identified varying levels of action in process as of September 30, 2011, but none sufficient for us to consider them effectively implemented. 
Our conclusions on the status of Academy and MARAD actions to address our 46 previous recommendations concerning specific deficiencies in control activities are summarized in table 2, discussed in summary in the following sections, and discussed in greater detail in appendix II. Standards for Internal Control in the Federal Government provides that agencies should take appropriate follow-up actions to address findings and recommendations of audits and other reviews. Also, GAO’s Internal Control Management and Evaluation Tool provides that in taking such actions, agencies are to ensure that the underlying causes giving rise to the findings or recommendations are investigated, that actions are decided upon to correct the identified weaknesses, and that weaknesses are corrected promptly. Training vessel use. In 2009, we reported that the Academy lacked policies and procedures and adequate internal controls over the use of Academy training vessels by outside parties. For example, we found that the usage rates for Academy training vessels were not supported and were not based on consideration of current costs of operation. The Academy and MARAD took steps to address all five of our recommendations related to internal control deficiencies regarding outside party use of Academy training vessels. In September 2010, the Academy issued Superintendent’s Instruction 2010-08, which set out procedures for outside parties to use and pay for all costs associated with the use of Academy training vessels. The Academy’s Department of Waterfront Activities also issued a vessel usage rate schedule in September 2010 that provided current costs of operation for use of Academy training vessels. The guidance provides base hourly, daily, and weekly billing rates that outside parties are to be charged for using the Academy’s training vessels. Personal service acquisitions. 
In our 2009 report, we identified instances in which the Academy entered into illegal personal service agreements with its NAFIs, whereby NAFI employees performed exclusively Academy functions and reported to Academy supervisors. The Academy and MARAD took actions to address both of our recommendations related to improving internal controls over personal service acquisitions. To address our recommendations, MARAD performed an analysis to identify the nature and scope of personal service arrangements in order to determine whether amounts paid by the government were consistent with the services received by the Academy. Using the results of the analysis, MARAD took action to address all personal service arrangements by either converting affected NAFI employees to the civil service or transferring them to a contract. In addition, the MARAD Administrator finalized MAO 400-11 on September 30, 2011, which provided governing principles under which NAFIs must operate, including that a NAFI must have staff to conduct its daily operations, and it is not to receive any operating subsidy from the Academy other than minor incidental support. Notably, MAO 400-11 also provides that, as a general rule, a NAFI may not provide goods or services to the Academy. NAFI camps and clinics using Academy facilities. In 2009, we reported that the Academy lacked policies and procedures and effective internal controls to help ensure proper accounting for the use of fees from conducting camps and clinics using Academy athletic facilities. The Academy and MARAD took corrective actions to address all three of our recommendations related to NAFI camps and clinics' use of Academy facilities. Specifically, MARAD engaged the services of an independent public accounting firm to perform an analysis to identify NAFI camps, clinics, or fund-raising activities that used Academy property from fiscal years 2006 through 2008 and determine the related sources and uses of funds. 
Based on this analysis, it was determined that most of the revenue resulting from the camp and clinic activity was paid to the individuals responsible for conducting the activities and that the residual revenue, deemed nominal, was used to support the general activities of the NAFI conducting the camp or clinic. Also, Superintendent's Instruction 11100.1, dated May 17, 2011, established a written policy that Academy facilities will not be used for revenue-generating athletic camps and clinics. Our review of supporting financial documents for selected 2010 and 2011 transactions did not identify any camp or clinic activity using the Academy's facilities. Midshipmen fee accountability. In 2009, we identified instances of improper and questionable sources and uses of midshipmen fees, as well as a lack of adequate procedures and controls to maintain effective accountability over the amounts charged to midshipmen and to ensure that midshipmen fees were used only for their intended purpose. The Academy and MARAD undertook a series of reviews and completed actions that addressed seven of our nine recommendations related to midshipmen fee accountability. To address our recommendations, MARAD established a baseline for items of a personal nature and related costs to be charged to midshipmen, beginning with fiscal year 2009. Following an analysis based on this baseline, MARAD determined that prior years' midshipmen fees included the costs of items that should have been paid by the Academy. Consequently, the fees collected from midshipmen over their 4-year academic program dropped from $15,500 in fiscal year 2008 to approximately $5,500 in fiscal year 2009. As a result, MARAD proposed a refund of fees to midshipmen who were determined to have overpaid from academic year 2003-2004 through academic year 2008-2009. 
In February 2012, DOT informed us that MARAD began making refunds in November 2011 and, as of January 2012, had reimbursed 83 percent of eligible refund recipients and continued with its efforts to reach the remaining refund-eligible midshipmen. One of the two recommendations that remain open relates to establishing written policies and procedures over (1) processing of midshipmen fee collections and payments and (2) monthly reporting for midshipmen fee activity and balances. On May 26, 2010, MARAD issued MAO 400-15, providing policy guidance governing the setting, managing, and reporting of midshipmen fees. However, this guidance did not provide detailed procedures to ensure that staff consistently and effectively process and account for midshipmen fee activities and balances. The second of these open recommendations calls for a determination of the extent to which appropriated funds and midshipmen fees collected should be used to pay for contracted medical services. On February 5, 2010, MARAD reported the results of a joint MARAD and Academy review that identified the types of medical services provided to the midshipmen, through both appropriated and nonappropriated funding, and stated that for the fiscal year 2012 budget development process, the Administrator may wish to consider guiding the Superintendent to seek alternatives to the current approach. However, as of September 30, 2011, the Academy had not yet determined the extent to which appropriated funds and midshipmen fees collected should be used to pay for contracted medical services, as called for in our recommendation. Accountability for academy reserves. 
Our 2009 report identified inappropriate conversion of “off-book” reserves accumulated from excess midshipmen fee collections, funds received from the Global Maritime and Transportation School (GMATS) NAFI for use of the Academy’s facilities, and expiring appropriated funds that were deposited to a commercial bank account to fund Academy operations the following fiscal year. For example, a “Superintendent’s Reserve” account was created “off book” and used to make discretionary Academy operations payments authorized by the Academy Superintendent. The Academy and MARAD have taken corrective actions sufficient to address all four recommendations related to accountability for Academy reserves. To address our recommendations, MARAD engaged the services of a public accounting firm to perform an analysis of the sources and uses of funds held in commercial bank accounts during fiscal years 2006 through 2008 to determine consistency with applicable law, regulation, or policy. A May 19, 2010, MARAD CFO memorandum on the results of the analysis stated that the review did not identify any evidence that funds were used inappropriately or for the personal benefit of any individual and that no inconsistency with law, regulation, or policy was identified. Accordingly, the MARAD CFO concluded that no further action was warranted. In addition, the Academy changed its business practices to discontinue the conversion of appropriated and nonappropriated funds to “off-book” reserves. Further, as a result of the analysis performed in response to our recommendation, a balance of approximately $3 million in the commercial bank account maintained for excess prior-year midshipmen fee collections was transferred to the U.S. Treasury to assist with the midshipmen fee refunds. Academy and NAFI governance structure. 
Our 2009 report found that 11 of the 14 NAFIs that provided programs and services for Academy midshipmen and employees did not have approved governing documents, such as charters and bylaws, and that the remaining 3 NAFIs performed some duties and functions that fell outside the scope of authority set forth in their governance documents. As a result, we issued eight recommendations related to improving controls over activities between the Academy and its affiliated organizations. The Academy and MARAD completed actions sufficient to address four of our eight recommendations related to Academy and NAFI governance structure. For example, on September 30, 2011, MARAD issued MAO 400-11, which provides policies and general guidelines for the establishment and governance of NAFIs and their affiliation with the Academy, including charters, bylaws, and a NAFI governing board. In addition, an independent public accounting firm’s analysis of activity between the Academy and its GMATS NAFI did not identify any activity that was inconsistent with applicable law, regulation, or policy. A May 19, 2010, MARAD CFO memorandum concluded that none of the NAFI funds were used inappropriately or for the benefit of any individual and that no further action was warranted. Two of the four recommendations that we concluded were not yet addressed relate to performing an analysis to identify each activity involving the Academy and its NAFIs and establishing formal written policies and procedures documenting the (1) planned timing of performance of each internal control procedure for each NAFI activity, (2) responsibilities for oversight and monitoring of those internal control procedures, and (3) direct, compensating, and mitigating controls for each NAFI activity. 
Although MAO 400-11 provides general guidance on NAFI and Academy relationships, it does not provide the necessary detailed procedures to be followed to ensure that each NAFI has a robust, risk-based system of checks and balances with the Academy for each NAFI activity as called for in our 2009 recommendation. As of September 30, 2011, we found that neither the Academy nor MARAD had performed an analysis aimed at identifying all activities between the Academy and its ongoing affiliated organizations. Without identifying such activities and their associated risks to the Academy, the Academy faces the continued risk of improper transactions. The other two open recommendations relate to the relationship between the Academy and its GMATS NAFI. The recommendations were intended to address the Academy improperly entering into sole-source agreements with GMATS to provide training to other federal agencies and improperly accepting and using nonappropriated GMATS funds. On September 30, 2011, the Maritime Administrator issued a memorandum rescinding the authority of GMATS to operate as a NAFI effective August 1, 2012. The memorandum also provided that GMATS may continue future operations if it is reestablished under a different operating model. However, because GMATS NAFI operations have not yet been terminated or reestablished, our two recommendations in this area remain open. Financial reporting. In 2009, we found that the Academy did not routinely prepare financial reports presenting information on all of its financial activities, including sources and uses of its appropriated and nonappropriated funds. Instead, we found that the Academy’s financial reporting was sporadic, unreliable, and, consequently, of limited value for decision making. The Academy and MARAD completed actions sufficient to address two of our five recommendations aimed at improving controls over the Academy’s financial reporting. 
MARAD issued CFO Directive 2 in December 2009 and amended it twice in 2010. Directive 2 established procedures for monitoring Academy financial performance by requiring the Office of Academy Operations to prepare, review, and provide monthly financial reports to the MARAD CFO for review. Directive 2 also provided for following up on, and documenting, unusual items and balances. The three recommendations that remain open relate to (1) the production of financial reports to facilitate oversight and monitoring of actual and budgeted amounts of revenues and expenses, reporting of amounts for activities and balances with affiliated organizations, and the identification of items of revenue and expenses; (2) identification and evaluation of potential misstatements of amounts in Academy financial records; and (3) compliance with required annual reporting to Congress on all expenditures and receipts for the Academy and its affiliated organizations. CFO Directive 2 provides instructions for the Office of Academy Operations’ preparation, review, and monitoring of monthly financial reports on funds allotted to the Academy. However, it does not require the monthly reports to include information on all sources and uses of other funds benefiting the Academy, such as obligation and expenditure activity of expired funds; CIP and gift and bequest expenditure activity for subsequent years; activity for miscellaneous receipts, such as transfer of funds from closed commercial bank accounts, fees from foreign midshipmen and for outside use of training vessels, and payments from the Navy Exchange; sources and uses of funds maintained in commercial bank accounts; and sources and uses of midshipmen fees processed through two appropriation accounts and one nonappropriation account. The MARAD CFO told us that the primary intent of Directive 2 is to report the status of funds allotted to the Academy, but acknowledged that any additional funds should be identified and reported as well. 
Accordingly, the MARAD CFO agreed that CFO Directive 2 reporting requirements should be updated to require all sources and uses of Academy funds to be identified and reported to provide visibility over all Academy accounts and activities necessary to facilitate comprehensive oversight and monitoring. With respect to our recommendation to identify and evaluate potential misstatements of amounts in Academy financial records, MARAD engaged an independent public accounting firm to determine the extent of any such misstatements. The review identified approximately $1.3 million in Academy expenditures funded by midshipmen fees that should have been funded by appropriated funds. However, as of September 30, 2011, MARAD had not yet determined whether it should adjust financial records and reports to reflect the use of midshipmen fees to augment Academy appropriations. MARAD officials acknowledged the need to perform additional work to determine whether adjustments to the Academy’s financial records and reports are warranted. In response to our recommendation to provide Congress a statement on the purpose and amount of all expenditures and receipts of nonappropriated funds benefiting the Academy and its affiliated organizations, MARAD issued CFO Directive 3 on December 29, 2009. Directive 3 established procedures for reporting on resources other than the Academy’s appropriated funds to be included in MARAD’s annual report to Congress. Specifically, the directive provides for reporting on the amount, source, intended use, and nature of such funds. However, based on our review of MARAD’s fiscal year 2009 annual report to Congress—the most recent report available as of September 30, 2011—the report did not contain expenditure information for any of the Academy’s affiliated organizations and did not provide a purpose for which the expenditures were made to comply with the reporting requirement as we recommended. Funds accountability. 
In 2009, we found that the Academy did not have assurance that it complied with applicable funds control requirements, including those in the Antideficiency Act (ADA). The Academy and MARAD have taken sufficient actions to implement four of our seven prior recommendations related to funds accountability. Specifically, officials investigated and mitigated a potential ADA violation; completed a series of reviews aimed at resolving past deficiencies related to “parking” of Academy funds to keep them from expiring; and established written policy on the accrual of items of expense at year-end. With respect to the potential ADA violation, MARAD’s Chief Counsel determined in 2011 that Academy officials had retained approximately $200,000 in excess funds from GMATS which they had no legal authority to retain and had thereby augmented Academy funds in excess of appropriations made by Congress during fiscal years 2006, 2007, and 2008. According to an April 11, 2011, memo signed by the Maritime Administrator, MARAD determined that sufficient prior-year funds existed to offset the GMATS-funded Academy expenditures. MARAD adjusted its accounts accordingly; because sufficient prior-year funds existed, MARAD did not exceed its available appropriations in violation of the ADA. Also, the Academy and MARAD successfully implemented two recommendations related to “parking” of appropriated Academy funds by performing a series of reviews to identify excess Academy funds improperly held in commercial bank accounts. As a result of these reviews, the Fiscal Control Office NAFI returned over $214,000 of improperly held excess fiscal years 2006 and 2007 funds to the Academy. The Academy subsequently deobligated these funds and returned them to their respective appropriations. Further, the Academy and MARAD have discontinued use of the Academy’s in-house fund control system, transitioned to DOT’s financial accounting system, and revised the Budget Program Accounting Codes. 
These actions should provide more transparent financial reporting and improved oversight. Additionally, in January 2010, MARAD issued CFO Directive 6 establishing policy and procedures for accounting and recording of Academy accrual of expenses at year-end. If fully and effectively implemented, Directive 6 should improve internal control over accounting and recording of Academy accrual of expenses. The three recommendations that remain open in this area relate to making final notification to Congress of an ADA violation resulting from midshipmen fees that were used to cover Academy expenses without legal authority to do so, implementing corrective actions as a result of a MARAD review in which it identified weaknesses related to the Academy’s funds control process, and establishing targeted internal controls, such as management’s review and approval procedures, over accruals. According to a March 23, 2011, memorandum to the Maritime Administrator from MARAD’s Chief Counsel, the Academy violated the ADA by charging midshipmen fees in excess of authorized levels, improperly augmenting agency appropriations, and making expenditures in excess of and in advance of appropriations. As of June 1, 2012, the required ADA report to Congress had not been filed. Additionally, the Academy and MARAD had yet to implement corrective actions as a result of a review of the Academy’s funds control process. Specifically, on February 1, 2010, the MARAD CFO completed a review of the Academy’s funds control process and made five recommendations for process improvements, including that the MARAD CFO issue a directive establishing policy and procedures requiring the periodic review and closeout of undelivered orders. The review also recommended that the MARAD CFO issue policy guidance to address an internal control weakness regarding contracting officer approval of Academy invoices. 
However, as of September 30, 2011, the MARAD CFO had not issued any guidance in this area. While MARAD officials told us they had practices in place for contracting officer approval of invoices, we found that as of September 30, 2011, MARAD had not documented and disseminated these practices to Academy acquisition staff in policy guidance, such as a CFO directive. Capital asset repairs and improvements. Our 2009 report identified repair and maintenance (R&M) costs that were improperly charged against the Academy’s no-year capital asset improvement appropriation and found that the MARAD CFO did not conduct timely reviews of the Academy’s capital improvement-related expenses for fiscal years 2006 and 2007. The Academy and MARAD have taken corrective action to address one of the three recommendations we made to improve specific controls related to capital asset repairs and improvements. On May 25, 2010, the MARAD CFO issued Directive 7, requiring monthly review of recorded amounts of Academy repairs and maintenance expenses and capital improvement transactions to identify any anomalies, discrepancies, or questionable entries. Our work found that the Office of Academy Operations reviewed the accounting records on a monthly basis in accordance with Directive 7. The two recommendations that remain open relate to (1) establishing policies and procedures for reporting financial information on R&M expenses and capital asset additions to help monitor these items and (2) performing an analysis to identify the causes of approximately $8 million of errors in recording R&M expenses identified in our 2009 report. While CFO Directive 7 required monthly reporting, we found that the reports were not distributed to users, such as Academy department managers, to facilitate monitoring of these items as called for in our recommendation. 
With respect to our recommendation to identify the causes of the errors in recording R&M expenses, we found that the Academy and MARAD had taken action to reclassify the transaction errors we identified but had not yet conducted the recommended root cause analysis needed to identify and address any systemic issues causing the errors and to prevent such errors in the future. Consequently, the Academy and MARAD are at risk of reoccurrences of errors in R&M expense accounting. Although the Academy and MARAD have taken steps to improve oversight of the Academy’s CIP, the Academy does not have a current comprehensive plan for capital improvements to provide the basis for oversight of CIP planning and implementation. In response to a May 2009 directive from the Secretary of Transportation, MARAD convened an independent “Blue Ribbon” advisory panel consisting of senior government executives to analyze the Academy’s CIP and its investment priorities. The panel concluded that the Academy’s facilities were seriously deteriorated and its support buildings were inadequately maintained. The panel’s March 2010 report, USMMA: Red Sky in the Morning, provided 11 recommendations, which included linking a comprehensive Academy strategic plan to its Facilities Master Plan with capital investments prioritized for a 10- to 15-year period to address the Academy’s extensive needs. According to DOT officials, MARAD prepared its USMMA Capital Improvements Implementation Plan in November 2010 to serve as a road map for implementing the panel’s recommendations and to identify the Academy’s immediate capital improvement needs through 2016. The Academy also established and filled a new position for an Assistant Superintendent for Capital Improvements and Facilities Maintenance in 2010, designating the new position’s responsibilities to include oversight of the Academy’s Departments of Capital Improvements and Facilities Maintenance and the Office of Safety and Environmental Protection. 
This position is to provide oversight for the Academy’s CIP and leadership for strategic planning and accountability related to capital improvement activities. Further, according to Academy officials, the Academy realigned its facilities management organizational structure in December 2010 to reflect that of other academic organizations and service academies and to improve oversight. Despite these recent improvements, the Academy has not yet developed reliable project cost estimates and current phased capital investment plans aligned with the organization’s strategic objectives, as recommended by the Blue Ribbon panel in 2010. Guidance issued by OMB as well as our prior work on leading practices in this area identifies the need for effective capital planning to provide CIP accountability and oversight. Our Executive Guide: Leading Practices in Capital Decision-Making summarizes the results of our research on leading capital planning practices used by state and local government and private-sector organizations. It includes practical steps for implementing capital planning effectively and presents examples that illustrate and complement many of the concepts and specific steps contained in OMB’s Capital Programming Guide. Our GAO Cost Estimating and Assessment Guide, which provides best practices for developing, managing, and evaluating capital program cost estimates, complements the Executive Guide. Fundamental success factors identified in our Executive Guide include developing long-term capital investment plans that integrate with long-range organizational strategic objectives, present reliable project cost estimates to inform decision making, and use a phased priority approach to guide and schedule capital spending. The Academy’s Facilities Master Plan, which identified anticipated capital projects for a 10-year period in three priority phases, has not been comprehensively updated since 2002. 
Up-to-date CIP information that identifies and prioritizes long-range renovation and new construction projects and provides reliable project cost estimates and completion and funding timelines would facilitate oversight by helping to assess whether individual improvements were carried out in a timely and cost-effective manner and in priority order. We identified instances where improvements were not carried out in priority order and also identified lengthy delays in completing some planned capital investments. For example, the 2002 Facilities Master Plan showed that the midshipmen’s primary commissary, Delano Hall (which serves over 2,000 meals a day to the Academy’s residential midshipman population), was to have been refurbished during fiscal year 2002. As of September 30, 2011, the Delano Hall refurbishment project had not yet broken ground. According to information that DOT provided in February 2012, several tasks related to the Delano Hall refurbishment project have been completed, such as remodeling rest rooms in compliance with the Americans with Disabilities Act. DOT told us at that time that it expected construction on the commissary to begin in the summer of 2012 and to be completed by December 2013. Preparing the 2010 Capital Improvements Implementation Plan was a positive step, although the implementation plan acknowledged the need for a more comprehensive, phased investment plan. For example, the implementation plan did not present a long-range phased investment plan and detailed cost estimates. 
Rather, projects presented were based on average replacement cost per square foot for federal government buildings, described in the plan as a “rough order of magnitude estimate.” Accordingly, the cost estimates presented in the implementation plan did not consider specific cost factors for capital improvements at the Academy, such as geographic location, local cost of construction and labor, and the technology, simulation, and infrastructure requirements of an institution of higher education. As stated previously and recommended by the 2010 Blue Ribbon panel, reliable cost estimates and phased investment priorities for Academy capital projects that are aligned with the organization’s long-range strategic objectives would help facilitate CIP oversight. According to DOT officials, the Academy had recently begun preparing its first strategic plan to define the organization’s goals and objectives and to help the Academy better prioritize investments. Specifically, they told us in February 2012 that the strategic planning process for the Academy is to be led by the Academy’s Acting Superintendent and is expected to be a multiphased effort beginning with identification of key internal stakeholders to assist with the development of critical issues that should be addressed in the strategic plan. Clearly articulated top management vision and support will be critical for fully implementing the open recommendation related to establishing a comprehensive risk-based system of internal control and related monitoring. The Academy and MARAD have taken a number of positive steps, but until Academy and MARAD officials take additional actions to fully address our recommendation in this area, their ability to ensure that management’s objectives are carried out, including proactively identifying and correcting control deficiencies in a timely fashion, will continue to be impaired. We reiterate the need to continue efforts to complete full implementation of this recommendation. 
The Academy and MARAD have made significant progress in addressing our prior recommendations related to specific control activities, and their actions effectively addressed 32 of the 46 recommendations related to deficiencies in specific controls. We reiterate the need to address the remaining 14 recommendations related to midshipmen fee accountability, Academy and NAFI governance structure, financial reporting controls, funds accountability, and controls over capital asset repairs and improvements. The Academy and MARAD have taken steps to improve CIP oversight and planning of capital improvement projects at the Academy. However, the Academy does not have comprehensive, updated information on capital improvement projects, including reliable cost data, long-range capital investment plans, and phased priorities. Further, the Academy has not yet aligned its capital improvement priorities with the organization’s strategic objectives, a critical factor in providing effective oversight. To improve oversight of the Academy’s capital improvement program, we recommend that the Secretary of Transportation direct MARAD to work with Academy officials to develop and maintain a current and comprehensive plan in accordance with leading practices and guidance. At a minimum, such a plan should include an inventory of long-range capital improvements that align with the Academy’s strategic objectives, reliable estimates of cost specific to each capital improvement, and a phased investment approach for prioritizing capital improvement needs. On June 19, 2012, DOT provided written comments on a draft of this report, signed by the Deputy Assistant Secretary for Administration. The comments are reprinted in appendix III. In its comments, DOT described steps that the department, MARAD, and Academy are taking to help prioritize and manage Academy capital improvement projects. DOT indicated that these actions are aligned with and fulfill the recommendation in our draft report. 
Specifically, DOT stated that since our draft report was issued, the department, MARAD, and the Academy established a comprehensive CIP that includes a Senior Advisory Council and working group; a management process that provides a clear and consistent understanding of project status; and controls for project selection, management, and completion, including tracking reports, process documentation, and Senior Advisory Council meetings to monitor progress. We are encouraged by the increased oversight these measures would provide for the Academy’s capital investments. However, it is too soon to determine the extent to which the new CIP addresses the three specific elements that our recommendation provided as minimum attributes for a comprehensive capital improvement plan: an inventory of long-range capital improvements that align with the Academy’s strategic objectives, reliable cost estimates specific to each capital improvement, and a phased investment approach for prioritizing capital improvement needs. We believe these attributes are critical to facilitating oversight of the Academy’s capital improvements to ensure that they address the needs of the midshipmen in accordance with the strategic vision of the Academy, are undertaken in priority order, and are completed timely and cost effectively. DOT also stated that it remains dedicated to the challenging task of building and operating a comprehensive system of internal controls at the Academy, including completing action on all recommendations in our prior report, and will keep us apprised of progress to close out our recommendations. DOT further stated that it has established a goal of completing action on all 46 of our specific control-related recommendations by December 31, 2012. We are sending copies of this report to the Secretary of Transportation; Maritime Administrator; and Superintendent, U.S. Merchant Marine Academy. 
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-9500 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To address our objective to assess the extent to which actions have been taken to address our prior recommendations as contained in our report, United States Merchant Marine Academy: Internal Control Weaknesses Resulted in Improper Sources and Uses of Funds; Some Corrective Actions Are Under Way (GAO-09-635), we interviewed agency officials and reviewed and analyzed MARAD summaries of action steps taken in response to our recommendations; policies, procedures, and memorandums issued by the U.S. Merchant Marine Academy (Academy) and Maritime Administration (MARAD); and laws and regulations governing Academy operations. In addition, as part of our assessment of recommendations related to accountability for Academy reserves, financial reporting controls, and fund accountability, we obtained a database of Academy obligations and expenditures at the transaction level for fiscal years 2010 and 2011 to make a nonstatistical selection of transactions to obtain an understanding of the Academy’s accounting process. We compared these data to amounts reported for the Academy in the Department of Transportation’s (DOT) annual performance and accountability reports. We performed system walk-throughs to gain an understanding of procedures in place. As part of this assessment, we reviewed and analyzed bank statements, receipts, and disbursements records; and procurement documentation for selected transactions, including contracts, purchase orders, receiving documentation, invoices, disbursement documents, and other pertinent supporting documents. 
We also reviewed and discussed with appropriate officials the objectives and scope of a report prepared by an independent public accountant engaged by MARAD on its analysis of certain prior-year sources and uses of funds that related to our recommendations. Further, for some specific recommendations, we reviewed rate schedules, billings and underlying calculations, payroll records, and congressional notifications, as appropriate. We also reviewed analyses prepared by the Academy, MARAD, and certain nonappropriated fund instrumentalities (NAFI) regarding specific transaction types, such as a midshipmen fee refund schedule and a summary of payments made by the Global Maritime and Transportation School (GMATS) NAFI to the Academy. To assess actions taken to address our recommendations regarding controls over financial reporting and capital asset repairs and improvements, we also reviewed monthly reports summarizing transaction activity and balances. To address our second objective regarding our assessment of oversight of the Academy’s Capital Improvement Program (CIP), we interviewed Academy and MARAD officials and reviewed Academy and MARAD procedures, monthly reports to MARAD, and monthly analyses prepared to help ensure that transactions are accounted for properly. We also reviewed the Academy’s 2002 Facilities Master Plan, a report on the Academy’s CIP prepared by a Blue Ribbon panel, a follow-up implementation plan that responded to the Blue Ribbon panel report, and GAO and Office of Management and Budget (OMB) guides related to capital improvement project planning. We also observed the Academy’s physical plant. We evaluated corrective actions taken through September 30, 2011, that the Academy, MARAD, and DOT made us aware of by February 15, 2012. In addition, we reviewed DOT’s Office of Inspector General audit and investigation reports for the period from January 2010 through March 2012 to determine if any work related to the Academy had been performed. 
No audit reports that could have an impact on this engagement were noted. We conducted this performance audit from February 2011 to June 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix presents a list of the 47 recommendations that we previously issued in our August 2009 report along with our analysis of the implementation status of each recommendation (see table 3). We evaluated corrective actions taken through September 30, 2011. In addition to the contact named above, Jack Warner, Assistant Director; Crystal Alfred; Francine DelVecchio; Geoff Frank; Pat Frey; Matthew Gardner; Jehan Abdel-Gawad; Jamie Haynes; Kate Lenane; and Scott McNulty made key contributions to this report.
The Academy, a component of the Department of Transportation’s (DOT) MARAD, was established in 1938 and built during World War II to provide undergraduate education programs for midshipmen to become shipboard officers and leaders in the maritime transportation field. DOT allocated $80 million to the Academy for fiscal year 2011 for its operations, CIP, and facilities maintenance. In August 2009, GAO issued a report that identified numerous internal control deficiencies and made 47 recommendations for corrective action. This report provides the results of GAO’s assessment of (1) the extent to which the Academy has taken actions to address the prior recommendations and (2) the Academy’s CIP oversight. To address these objectives, GAO evaluated corrective actions and supporting documentation; interviewed Academy, MARAD, and DOT officials; and performed walk-throughs of several processes revised in response to GAO’s prior recommendations. The U.S. Merchant Marine Academy (Academy) has made progress in improving its internal control since GAO’s August 2009 report, but has not yet fully addressed one key recommendation related to fundamental weaknesses in its overall internal control system. GAO found that while the Academy had appointed an Internal Control Officer responsible for coordinating reviews of internal controls, it had not yet established a comprehensive risk-based internal control system to ensure effective and efficient operations, reliable financial reporting, and compliance with laws and regulations, including a monitoring system to help ensure that control deficiencies are proactively identified and promptly corrected. Maritime Administration (MARAD) officials stated that their strategy had been to focus on the deficiencies that could be readily resolved. As of September 30, 2011, Academy and MARAD officials had addressed 32 of the other 46 prior recommendations regarding control activity deficiencies. 
Importantly, for many of the specific control-related recommendations that remained open, the Academy and MARAD had not yet identified the cause of the related internal control deficiencies, a critical step for designing effective controls. GAO also found that the Academy and MARAD have taken steps to improve Capital Improvement Program (CIP) oversight. For example, the Academy filled a new Assistant Superintendent position responsible for oversight of the Academy’s capital improvements and facilities maintenance. However, the Academy did not yet have an up-to-date, comprehensive plan for capital improvements to provide a basis for oversight. Specifically, the Academy did not have a capital improvement plan that identified long-term capital improvement needs aligned with the Academy’s strategic objectives, reliable cost estimates for planned improvements, and a phased implementation approach for prioritizing capital improvement needs. Such plan elements are consistent with Office of Management and Budget guidance and GAO-identified leading practices. In addition to reiterating the need to fully implement the remaining open recommendations, GAO is making one new recommendation directed at updating the Academy’s capital improvement plan to include reliable cost estimates and phased investment priorities aligned with the Academy’s strategic objectives in accordance with leading practices. In commenting on a draft of this report, DOT stated that it had recently established a comprehensive plan to manage CIP, and plans to keep GAO apprised as it completes actions addressing other GAO open recommendations.
In 2003, we first designated federal disability programs as a high-risk area because the programs require urgent attention and organizational transformation to ensure that they function in the most economical, efficient, and effective manner possible. We have also reported that improving work participation among people with disabilities has been challenging in part because the United States has a patchwork of disability programs—developed individually over many years—and lacks a unified set of national goals that guide coordination among programs or contribute to measuring desired outcomes. In February 2012, we identified programs administered by nine federal agencies that supported employment for people with disabilities, and many of these programs overlapped in that they provided similar services to similar populations. We recommended that the Office of Management and Budget (OMB), in consultation with those agencies that administer programs that support employment for people with disabilities, take two actions to improve coordination and program effectiveness and efficiency: (1) consider establishing measurable, governmentwide goals for employment of people with disabilities, and (2) continue to work with executive agencies that administer overlapping programs to determine whether program consolidation might result in administrative savings and more effective and efficient delivery of services. In response, OMB noted that, in fiscal year 2012, the administration’s Domestic Policy Council will conduct an internal review of ways to improve the effectiveness of disability programs through better coordination and alignment of policies and strategies. OMB also noted that the administration has set governmentwide goals for employment and inclusion of people with disabilities in the federal government, among other ongoing and planned efforts to improve employment for people with disabilities. 
Responsibility for many federal efforts, including employment support for people with disabilities, lies with more than one agency, yet agencies face a range of challenges and barriers when they attempt to work collaboratively. Both Congress and the Executive Branch have recognized this, and in January 2011, the GPRA Modernization Act of 2010 (GPRAMA) was enacted, updating the Government Performance and Results Act of 1993. GPRAMA establishes a new framework aimed at taking a more crosscutting and integrated approach for focusing on results and improving government performance. Effective implementation of the law could play an important role in clarifying desired outcomes, addressing program performance spanning multiple organizations, and facilitating future actions to reduce unnecessary duplication, overlap, and fragmentation. GPRAMA requires OMB to coordinate with agencies to establish outcome-oriented goals covering a limited number of crosscutting policy areas as well as goals to improve management across the federal government, and to develop a governmentwide performance plan for making progress toward achieving those goals. The performance plan is required to, among other things, identify the agencies and federal activities—including spending programs, tax expenditures, and regulations—that contribute to each goal, and establish performance indicators to measure overall progress toward these goals as well as the individual contribution of the underlying agencies and federal activities. GPRAMA also requires similar information at the agency level. Each agency is required to identify the various federal organizations and activities—both within and external to the agency—that contribute to its goals, and describe how the agency is working with other agencies to achieve its goals as well as any relevant crosscutting goals. OMB officials stated that their approach to responding to this requirement will address fragmentation among federal programs. 
OMB and the agencies within our scope identified several employment-related goals for fiscal year 2013, including a goal to increase the percentage of eligible servicemembers served by career readiness and preparedness programs, and a goal to provide 2 million workers with skills training by 2015 and improve the coordination and delivery of job training services. However, none of the governmentwide goals established for fiscal year 2013 relate specifically to employment for people with disabilities. Members of Congress have expressed concern that there is no consolidated list of all federal government programs, and that individual federal agencies are not able to provide a list of all of their programs and initiatives. GPRAMA requires OMB to create a single website, no later than October 1, 2012, that lists each federal agency’s programs. Agencies are required to identify how they define the term “program,” consistent with guidance from OMB; provide a description of the purpose of each program and how it contributes to the mission and goals of the agency; and report information on funding for the current fiscal year and two previous fiscal years. In addition, in January 2012, OMB announced that it will work with agencies to identify a comprehensive list of programs, pursuant to the law. As a first step, OMB stated it will conduct a pilot for a selected group of agencies and bureaus with programs related to trade, exports, and competitiveness. Based on the pilot, OMB plans to issue guidance to all federal agencies detailing the approach to be taken to develop a governmentwide inventory of programs. Oversight and administration of programs that support employment for people with disabilities are fragmented among various congressional committees, multiple federal agencies, and state entities (see fig. 1). Agency officials reported that 27 of the 45 programs were created by statute, rather than at the agencies’ initiative. 
At least 13 congressional committees are responsible for oversight of the 45 programs, which are administered by nine federal agencies. In some cases, a range of departments or offices within an agency are responsible for the programs. For example, several offices in the Department of Labor (Labor) administer 14 programs that provide employment-related services to people with disabilities. Further, the Department of Defense has 10 programs within its purview in part because each service branch administers its own program or programs to assist wounded, ill, and injured servicemembers with employment. Adding to the fragmented landscape, some of these federal programs are administered by governmental and nongovernmental state or local entities, either in collaboration or independently. In addition, agency officials noted that states have various governmental structures to administer the programs. For example, the Department of Education (Education) allocates formula funds to states to carry out the State Vocational Rehabilitation Services (VR) program. A state may create one vocational rehabilitation agency (VR agency) or designate a separate VR agency to serve individuals who are blind and a “general” agency for all other disability categories. In addition, states may organize their VR agencies within different government departments, such as state departments of labor or education, or establish them as free-standing agencies or commissions. The specific definitions of disability and eligibility requirements that programs use—often established by law—vary, which may contribute to fragmentation. For example, officials from 34 programs collectively reported using at least 10 different definitions of disability, and 10 programs reported having no specific definition for disability. In addition, the 45 programs reported at least 26 specific limitations to eligibility, such as limiting services to Native Americans or people who are blind. 
The variation across programs allows policy makers and program officials to target certain populations and, as discussed later, may reduce the potential for duplicative services. However, variation in definitions of disability and eligibility requirements may lead to confusion among people with disabilities about their eligibility for a specific program, and may create additional administrative burdens for state and local agencies and private partners that deliver services. To address fragmentation among programs that support employment for people with disabilities, at least two programs have been created to assist clients in determining what services and benefits they are eligible for, and which would best meet their needs. Specifically, according to agency officials, the Social Security Administration’s (SSA) Work Incentives Planning and Assistance program helps Social Security Disability Insurance beneficiaries and Supplemental Security Income disability recipients (for the purposes of this report, we will refer to these populations collectively as “SSA disability beneficiaries”) understand SSA’s complex work incentives and how working would affect their disability benefits or payments. In addition, agency officials noted that Labor’s Disability Program Navigators, jointly funded with SSA, provided staff members in one-stop career centers to help people with disabilities navigate multiple employment programs and services to meet their employment needs. The number and range of programs make it difficult to estimate the total federal funding dedicated to providing people with disabilities with employment supports or the number of individuals served. Programs we surveyed reported obligating about $4.1 billion to provide employment support to at least 1.5 million individuals with disabilities in fiscal year 2010, but these numbers likely understate the actual totals for several reasons. 
(See table 1 for summary information on reported numbers of people served and obligations in fiscal year 2010; see app. III for detailed fiscal year 2010 participant and obligation data reported by each program.) Of the 23 programs serving only people with disabilities, 18 reported on the number of people with disabilities receiving employment supports and 18 reported data on obligations in fiscal year 2010. One program—Education’s VR program—accounted for most of these funds and participants (the program obligated $3 billion to serve more than 1 million people with disabilities). Even less is known about expenditures on and the number of people receiving employment supports from the 22 programs serving people with and without disabilities. Specifically, only 13 of these programs reported how many people with disabilities received employment supports and 10 reported obligations spent on people with disabilities. Agency officials from some of these programs reported that they do not systematically collect information on whether participants have disabilities, while others indicated that program participants may not always disclose that they have a disability. SSA officials noted that Section 504 of the Rehabilitation Act of 1973, as amended—which prohibits federal agencies and programs that receive federal funding from discriminating against individuals with disabilities—limits programs’ ability to require individuals to disclose that they have a disability. Other programs, such as Labor’s Employer Assistance and Resource Network, serve employers of people with disabilities and do not track the number of people with disabilities who indirectly benefit from program services. All 45 programs overlapped with at least one other program in that they provided one or more similar employment service to people with disabilities. 
To identify services provided, we asked survey respondents to indicate from a list of employment-related services and supports which ones their programs provide. Respondents indicated a range of services provided, with some services being provided more than others. For example, survey responses revealed that 36 of the 45 programs provided employment counseling, assessment, and case management, with 23 providing these services to more than half of their participants. On the other hand, agency officials reported that 17 programs provided remedial academic English language skills and adult literacy assistance, with 4 of those providing it to more than half of their participants. Two programs reported providing tax expenditures related to workers with disabilities. For example, the Work Opportunity Tax Credit provides a tax credit to employers who hire individuals from target groups, including disabled veterans. In addition, two programs—Education’s Randolph-Sheppard program and the U.S. AbilityOne Commission’s AbilityOne Program— help create jobs for individuals with disabilities through the federal property management and procurement systems. Randolph-Sheppard licenses people who are blind to operate vending facilities on federal or other designated state properties. Under the AbilityOne program, federal government agencies are generally required to purchase certain goods and services from nonprofit agencies that employ people who are blind or have some other severe disability. Several programs also noted that they provided additional services not included on our list, such as financial supports, resume preparation, job coaching, transportation, and medical and psychiatric services. Overlap was greatest in programs serving two distinct groups; specifically, we identified 19 programs that provided employment services to veterans and servicemembers (see fig. 3) and 5 programs that provided employment services to students and young adults (see fig. 4). 
In addition, 7 programs did not limit eligibility to any particular group and therefore potentially overlapped with these and other programs in our scope (see fig. 5). For example, 17 of the 19 programs that limit eligibility to veterans and servicemembers reported providing job-readiness skills. At the same time, any veteran or servicemember could receive these services from 5 of the 7 programs that did not limit eligibility to any particular population. The remaining 14 programs limited eligibility to other specific groups or types of disabilities, such as SSA disability beneficiaries, or people who are blind or visually impaired. For a complete list of programs, their objectives, and eligibility requirements, see appendix IV. For a list of programs, populations they serve, and the services they reported providing, see appendix V. While many programs reported providing similar services to similar populations, some programs have less potential for duplication— providing the same services to the same beneficiaries—than others. Some overlapping programs have specific eligibility requirements that make duplication less likely. For example, the Department of Veterans Affairs’ (VA) Compensated Work Therapy and Vocational Rehabilitation and Employment (VR&E) programs both reported providing vocational rehabilitation and a number of other similar employment services to veterans with disabilities. However, the work therapy program targets veterans with mental illness or other severe disabilities who are patients in VA medical centers, whereas the VR&E program serves veterans with all types of disabilities. In addition, unlike the work therapy program, the VR&E program requires that a veteran’s disability be connected to his or her military service. 
In another example, the Workforce Recruitment Program, jointly administered by Labor and the Department of Defense, is the only one of the five youth programs that reported limiting eligibility to college students or recent graduates with disabilities. As shown in table 2, the programs have different age ranges for eligibility, but all allow eligible youth between the ages of 16 and 21 to participate. Labor also administers a program in two states, in partnership with state-organized apprenticeship agencies, that places young adults with disabilities in registered apprenticeships in the construction and health care fields. Labor officials said that case managers may refer a participant from the WIA Youth program to the YouthBuild program, for example, if they are interested in learning construction skills. However, it is difficult to determine the extent to which these different strategies reduce or prevent potential duplication in services among these programs. Another factor affecting the potential for duplication is resource levels, in that some overlapping programs lack the capacity to serve all who apply, thereby reducing the potential for duplication in services. Six of the 45 programs reported having a waiting list for services (see table 3). Three of these programs reported serving only people with disabilities. Individuals who are on a waiting list for one program may be eligible to receive services from another program. For example, Labor officials told us that individuals waiting for VR services could be referred to one-stop career centers for services. Even when the potential for duplication of services is low, there may be inefficiencies associated with operating two or more separate programs that provide similar services to similar populations. 
For example, in its budget requests for fiscal years 2012 and 2013, Education proposed consolidating two smaller programs in our scope—the Migrant and Seasonal Farmworker and Supported Employment State Grants programs—into its larger VR program. Education proposed this consolidation in order to reduce duplication of effort and administrative costs, streamline program administration at the federal and local levels, and improve accountability. Among the 19 programs that serve servicemembers and veterans, we identified two programs—Labor’s Disabled Veterans’ Outreach and Local Veterans’ Employment Representatives programs—that provide similar services at similar locations, potentially by the same staff members. Both programs reported that they provided job search and placement services to veterans with disabilities, among other similar services. Labor officials said that the veterans’ employment representatives were intended to reach out to employers and the disabled veterans’ outreach specialists were intended to work with job seekers. However, as we reported in May 2007, staff often performed the same roles in one-stop career centers and, in some cases, the roles were carried out by the same staff member. A recent law gave states the flexibility—subject to the approval of the Secretary of Labor—to consolidate these two programs in order to promote more efficient provision of services. Labor officials noted that the agency is in the process of developing criteria and procedures for making determinations on consolidations. The law also requires the Secretary of Labor to conduct audits to ensure that the veterans’ employment representatives and the outreach specialists are performing their required duties, and officials told us that they are in the process of defining the requirements and protocols for these audits. 42 U.S.C. §§ 422(d) and 1382d(d); 20 C.F.R. §§ 404.2101 and 416.2201. 
Under thresholds set annually by SSA, individuals are considered engaged in substantial gainful activity if they had earnings in 2012 above $1,010 per month for nonblind beneficiaries and $1,690 per month for blind beneficiaries. SSA’s two programs—the State Vocational Rehabilitation Cost Reimbursement program and the Ticket to Work program—provide a continuum of services: VR agencies provide more intensive, up-front services to help beneficiaries enter or return to work, while employment networks under the Ticket to Work program can provide longer-term supports to help beneficiaries stay at work. Coordination could help mitigate the potential for duplication among fragmented programs, but officials we surveyed reported limited coordination among the 45 programs in our scope. In our survey, we asked respondents to indicate whether their program coordinated with any of the other programs receiving our survey. In 13 percent of cases, two programs mutually reported coordinating with each other. However, in most cases, respondents either reported not coordinating or inconsistently reported coordinating with other programs (see table 4). For example, although VA’s VR&E program reported coordinating with Labor’s Veterans Workforce Investment Program and Disabled Veterans Outreach Program, only one of the two Labor programs—the Disabled Veterans Outreach Program—reported coordinating with the VA program. Officials explained that, in some cases, federal-level program staff responding to our survey may not be aware of coordination taking place at the state and local levels. Further, although the rate of mutual coordination reported is low among all the programs in our scope, programs that have different missions or serve different populations may not be expected to coordinate with one another. 
For instance, Labor’s Senior Community Service Employment Program supports part-time work opportunities for low-income senior citizens, and therefore may not need to coordinate with Department of Defense transition programs—such as Operation Warfighter—that help servicemembers returning to civilian life gain employment experience. In order to better understand our survey results, we held more detailed discussions about coordination efforts with six selected programs that serve only people with disabilities: the Assistive Technology State Grant program (Education); the Disability Employment Initiative (Labor); the State Vocational Rehabilitation Cost Reimbursement Program (SSA); the Ticket to Work program (SSA); the VR program (Education); and the Work Incentives Planning and Assistance program (SSA). Officials cited more consistent coordination among these programs. In response to our survey, all six programs had mutually reported coordinating with the Ticket to Work and the VR programs. This is perhaps not surprising, given that the VR program reported serving the largest number of people with disabilities and the Ticket to Work program is closely related to the VR program. Although not all of the six programs mutually reported coordination, federal program officials noted that a significant amount of coordination occurs at the state and local levels where services are delivered. Labor officials reported that its Disability Employment Initiative grantees at the state and local level have established Integrated Resource Teams, which include representatives from a number of programs—including the VR program, the Assistive Technology State Grant program, and other state and local programs—to leverage all available resources for individual clients. To further encourage local coordination, Labor recently issued guidance to state and local workforce agencies outlining ways in which programs housed in one-stop career centers can coordinate with providers under SSA’s Ticket to Work program. 
Labor issued similar guidance listing available resources to help beneficiaries obtain assistive technology, including Education’s Assistive Technology State Grant program and SSA’s Ticket to Work program. Agency officials also described efforts to increase coordination more broadly among programs that support employment for people with disabilities. For example, in 2008, Education and SSA established the Partnership Plus initiative, which is intended to provide a seamless approach to vocational services for people with disabilities. Individuals who need intensive employment services, such as education or training, can receive them first through the VR program, and then transition to an employment network under Ticket to Work for job retention services, or other ongoing services and supports to maintain employment and increase earnings. In addition, officials described a new initiative that coordinates programs and leverages resources from Education, the Department of Health and Human Services, Labor, and SSA, and aims to help youth receiving Supplemental Security Income to transition successfully to higher education or employment by working with the entire family to provide supports necessary to reduce barriers and improve outcomes. Officials from selected programs reported facing a number of challenges in coordinating with each other. First, officials noted that coordination can be challenging because programs are governed by separate statutes and regulations containing different definitions and program requirements. One official noted that aligning definitions of disability in statute would be helpful to ensure programs established by WIA and the Rehabilitation Act are complementary. In interviews with our six selected programs, officials from each reported that individual programs lack the resources, both in terms of funding and staff time, to pursue coordination with one another. 
Finally, one official indicated that interagency working groups may have limited effectiveness. He said that, in general, coordination could be more effective if programs had a set of outcomes they were expected to collectively achieve and were given the authority to work together to do so—including the authority to waive requirements that present barriers—and given funding to support such collaboration. This is consistent with concerns raised in our 2010 forum on employment for people with disabilities, where participants noted that past interagency coordination efforts have not been very successful at achieving significant change because they have lacked sufficient authority, accountability, or resources. Coordination efforts can be enhanced when agencies work toward a common goal, yet outcome measures varied across programs and not all programs reported outcomes specifically for people with disabilities. Given a list of typical employment measures, 32 of the 45 programs reported tracking at least one employment measure specifically for people with disabilities. The measures varied across programs, but the measures most commonly tracked were “participants who enter employment” (28 programs) and “participants’ employment retention” (18 programs). Some programs reported tracking other indicators, such as quality of life. For instance, both VA’s VR&E program and Education’s Helen Keller National Center track the number of participants who are able to live independently or in less-restrictive residential programs. The remaining 13 programs did not report tracking any employment-related outcomes for people with disabilities, in part because they have a broader mission.
For instance, Department of Health and Human Services officials reported that states are required to identify various performance measures related to participants’ health and welfare in their applications for the Medicaid 1915(c) Home and Community-Based Services Waivers and the 1915(i) State Plan Home and Community-Based Services, but are not required to measure employment-related outcomes because both programs provide a broad range of health care and other services beyond employment-related services. See figure 6 for the number of programs tracking specific outcome measures. Despite some similarities in programs’ outcome measures, it can be difficult to compare relative performance due to variation across programs in the type and severity of participants’ disabilities. Just over half (24) of the 45 programs reported targeting or giving priority to people with significant or severe disabilities, for whom it may be challenging to achieve positive employment outcomes. In addition, programs may use different thresholds for their employment outcome measures. For example, although 28 programs reported tracking “participants who enter employment,” officials told us that some programs may consider just a few hours of paid work per week as an employment outcome, while others set a higher bar and require a participant to be working at a level that would allow them to become self-sufficient and eliminate their dependence on federal disability benefits. Little is known about the effectiveness of the programs we identified as supporting employment for people with disabilities because only about one-quarter reported having a performance review. Ten of the 45 programs in our scope reported that a review or study had been conducted to evaluate their program’s performance. The studies varied in methodology, and many examined program outcomes and proposed ways to improve services, but fell short of determining whether outcomes were a direct result of program activities.
(See app. VII for programs that reported performance reviews.) For example, the Department of Agriculture’s AgrAbility program conducted a review of its activities between 1991 and 2011 that found that 11,000 clients had been served, and that 88 percent of those clients continued to be engaged in farm or ranch activities. However, this study did not determine whether other factors may have contributed to participants’ positive outcomes. Likewise, a 2009 study evaluated aspects of Labor’s YouthBuild program—such as recruitment and enrollment procedures, educational and vocational services, and case management—to understand similarities and differences across grantees, but the study did not attempt to discern the effect of the program on participants’ employment. Only one program in our scope—Labor’s Job Corps program—reported having a study that meets the criteria of an impact study. Impact studies examine what would have happened in the absence of a program to isolate its impact from other factors. Many researchers consider impact studies to be the best method for determining the extent to which a program is responsible for participant outcomes, but these studies can be challenging to conduct. However, there are sometimes opportunities for agencies to assess impacts without conducting full-scale impact studies. As an example, Education officials noted that it may be more feasible to conduct a rigorous study to evaluate the impact of providing enhanced services over regular services, rather than the impact of providing services over providing no services at all. The Job Corps impact study compared the outcomes of participants in the program to the outcomes of a comparable group of individuals who did not participate in the program.
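The core comparison an impact study makes can be sketched in a few lines; the groups and outcomes below are invented for illustration and are not drawn from any of the studies discussed here.

```python
# Sketch of an impact estimate: compare participants' outcomes with those of a
# comparable group of non-participants. The lists below are hypothetical
# follow-up employment indicators (1 = employed, 0 = not employed).

participants = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
comparison = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]

def employment_rate(group):
    """Share of a group employed at follow-up."""
    return sum(group) / len(group)

# The estimated impact is the outcome gap between the two groups, not the
# participants' employment rate alone.
estimated_impact = employment_rate(participants) - employment_rate(comparison)
print(f"{estimated_impact:.0%}")  # → 30%
```

A full impact study would also need the comparison group to be genuinely comparable (for example, through random assignment), which is part of what makes such studies challenging to conduct.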
The study concluded that, although there were no long-term program impacts on earnings for many Job Corps participants, the program generated earnings gains for participants between the ages of 20 and 24, who may be more highly motivated and disciplined. Reports for two additional programs that were excluded from our scope—SSA’s Youth Transition Demonstration and Mental Health Treatment Study programs—met the criteria for impact studies. Each program was a demonstration project that included an impact study to determine whether the program produced positive outcomes. The Youth Transition Demonstration had interim impact studies for three project sites. The studies concluded that, while the program did result in greater use of services to promote employment, it did not affect the employment of participants at two of the three project sites during the 1-year follow-up period. The program also did not affect the education or income of participants at the three project sites during that period. The Mental Health Treatment Study also published an impact study, which concluded that there was significant improvement in the 24-month employment rate for the group receiving services (61 percent employment versus 40 percent for the control group). Finally, agencies may have published or initiated impact studies after responding to our survey. For instance, Labor officials notified us of two impact studies currently planned or underway, for the YouthBuild program and the Disability Employment Initiative. The number of programs providing similar employment services to people with disabilities—and the range of requirements and approaches they entail—raises questions about the current structure of federal disability programs. In fact, several of the programs we identified were created to help people with disabilities navigate this fragmented system.
Efforts such as Education’s proposed consolidation of several smaller programs into its VR program also indicate an awareness of the need to simplify the system and increase effectiveness and ease of use. In our February 2012 report on duplication and overlap in government programs, we suggested that OMB continue to work with executive agencies that administer overlapping programs to identify any opportunities for cost savings or streamlining, such as program consolidation. We continue to believe that such a review could result in more effective and efficient delivery of services to help people with disabilities obtain and retain employment. We identified limited coordination among programs that provide employment support for people with disabilities, which may exacerbate the potential for duplication among fragmented programs. In addition, many programs have not been evaluated for effectiveness, and little is known about effectiveness overall, leaving policymakers with limited information to make informed decisions on allocating scarce resources. In our February 2012 report, we noted that OMB should consider establishing governmentwide goals related to employment of people with disabilities, with agencies establishing related outcome measures. We continue to believe that setting common goals across programs that support employment for people with disabilities could help spur greater coordination and more efficient and economical service delivery in overlapping program areas, and we plan to follow up with OMB on its progress with respect to both of the actions we suggested in our February 2012 report. We provided a draft of this report for comment to the nine federal agencies that administer programs within its scope. The AbilityOne Commission; the Departments of Defense and Health and Human Services; and the Internal Revenue Service had no comments.
The Departments of Education, Labor, and Veterans Affairs, and SSA provided technical comments, which we incorporated into the report as appropriate. The Departments of Agriculture, Education, and Labor also provided written comments, which are reproduced in appendixes VIII, IX, and X, respectively. In its comments, the Department of Agriculture noted that it generally concurred with our findings. Education provided additional information and examples of how its VR program coordinates services at the state and local level that were not fully described in our report. For example, Education noted that the program coordinates at the state level with each state’s workforce investment system, and state VR agencies have staff who provide a variety of education, training, and rehabilitation services to one-stop career center customers with disabilities. Education also stated that the capacity of the VR program to provide and coordinate a wide range of individualized services to achieve employment outcomes for individuals with disabilities—particularly individuals with significant disabilities—is not duplicated by any other program. Labor expressed general concern that we found fragmentation among the programs we examined, noting that our definition of fragmentation is broad. We continue to believe that fragmentation—defined as more than one federal agency (or organization within an agency) being involved in the same broad area of national need—exists among the 45 programs we identified across nine federal agencies. As we have noted, unless programs coordinate effectively, fragmentation could lead to inefficient use of scarce resources, confuse program beneficiaries, and, ultimately, limit the overall effectiveness of the federal effort.
Labor also expressed concern about our statement that certain youth programs with similar eligibility requirements present a greater risk of potential duplication than other programs, noting that we did not consider the unique characteristics of the programs, such as the individual services provided, the program design used, or the populations served. We did consider such factors and provided some information in our report about the different ways these programs serve the youth population. However, we were unable to determine the extent to which these factors reduce the potential for duplication. Further, while Labor acknowledged that it is important to minimize duplication and maximize efficiency, it noted that some overlap is necessary and appropriate to ensure that all participants receive comprehensive employment and training services. We agree that, in some instances, overlap among programs involved in providing services to similar populations may be appropriate. However, we continue to believe there is value in examining overlapping programs to identify opportunities for streamlining and coordination to more efficiently provide services. Labor also pointed out that several of its programs included in the scope of our study were created to serve all job-seekers rather than specifically to provide employment support for people with disabilities; we included such programs to provide a more comprehensive picture of the services and supports available to help people with disabilities stay at work or return to work. Labor further noted that, rather than being seen as duplicative or undesirable, service integration and diversity of design are important for achieving inclusion of people with disabilities, consistent with what Congress envisioned. We did not label any given program as duplicative or undesirable, but noted that, in some cases, having many programs serving similar populations may result in administrative inefficiencies.
We are sending copies of this report to the AbilityOne Commission; the Departments of Agriculture, Defense, Education, Health and Human Services, Labor, and Veterans Affairs; the Internal Revenue Service; and SSA. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix XI. Our objectives were to examine (1) to what extent federal programs that support employment for people with disabilities provide similar services to similar populations, and (2) to what extent the effectiveness of programs that support employment for people with disabilities has been measured. The interim results for this report were included in our February 2012 report on duplication and overlap in government programs. We determined that programs included in the scope of our work should meet two sets of criteria. Specifically, they should (1) be targeted to people with disabilities or their employers and (2) have provided specific employment and training services in fiscal year 2010. See figure 7 for a detailed description of both sets of criteria. We identified and selected programs for inclusion in our review using a variety of sources and a multistep process. Specifically, we identified programs for potential inclusion by using key terms to search the Catalog of Federal Domestic Assistance (CFDA), reviewing our previous work on related topics, and consulting with internal and external stakeholders. We then reviewed the programs’ objectives and eligibility criteria from CFDA or program websites to determine if each program met our inclusion criteria.
If key information, such as how a program focuses on people with disabilities or provides employment-related services, was incomplete or ambiguous, we kept the program in our preliminary list. We sent our preliminary list of programs for validation to the 10 agencies that administer the programs. The agencies requested that we add several programs to our list, and determined that others did not meet the criteria for inclusion. We held follow-up meetings with agency officials to clarify criteria, as appropriate. We determined that the two programs in our preliminary list administered by the Small Business Administration did not meet our criteria, and thus we excluded the agency and the programs from our review. Our validation process yielded a total of 56 programs administered by nine federal agencies. We surveyed these programs from August to October 2011. Based on agency responses and follow-up conversations, we omitted six surveyed programs because we found they did not meet the inclusion criteria. We reported on 50 of those programs in our February 2012 report. For this final report, we have omitted an additional seven programs and added two new programs, for a total of 45 programs. We omitted six programs that had ended as of April 2012, and one demonstration program—the Benefit Offset National Demonstration—that did not begin enrolling participants until January 2011. (See app. II for a list of programs we omitted.) In commenting on a draft of our February 2012 report and, later, in verifying data previously provided, Department of Defense officials requested that we add three programs that they believed to be within the scope of this review. After identifying the programs’ employment services to people with disabilities, we determined that two—the Army Warrior Care and Transition and the Marine Corps Wounded Warrior Regiment programs—met our criteria and thus were included in our analyses for this report.
We did not include or review programs that may have been created or revised to meet our inclusion criteria after fiscal year 2010. We designed a web-based survey to collect information on program background, eligibility requirements and populations served, services, outcome measures, and budget information. In designing this survey, we reviewed our prior surveys used to collect similar information. We pretested the survey with three programs to minimize errors that may arise from differences in how questions might be interpreted and ensure that response categories were appropriate. From August through October 2011, we fielded the web-based survey of 56 federal programs that support employment for people with disabilities. Program representatives were identified by the agencies. Where programs were jointly administered by two or more federal agencies, we consulted with the agencies and asked them to designate one official to fill out the survey and respond to questions for that program. In March 2012, we fielded the web-based survey to two additional programs that were later determined to meet our inclusion criteria (Army Warrior Care and Transition Program and the Marine Corps Wounded Warrior Regiment, discussed earlier). We received completed questionnaires from 58 programs, for a 100 percent response rate. We used standard descriptive statistics to analyze responses to the questionnaire. Because this was not a sample survey, there were no sampling errors. To minimize other types of errors, commonly referred to as nonsampling errors, and to enhance data quality, we employed recognized survey design practices in the development of the questionnaire and in the collection, processing, and analysis of the survey data. For instance, as previously mentioned, we pretested the questionnaire with program officials. 
We further reviewed the survey to ensure the ordering of survey sections was appropriate and that the questions within each section were clearly stated and easy to comprehend. To reduce nonresponse, another source of nonsampling error, we sent out e-mail reminder messages and made telephone calls to encourage officials to complete the survey. In reviewing the survey data, we performed automated checks to identify inconsistent answers. We further reviewed the data for missing, ambiguous, or illogical responses and followed up with agency officials when necessary to clarify their responses. In addition, we compared fiscal year 2010 obligations data provided by survey respondents with data provided to us in a previous survey and with appropriations data from the Consolidated Federal Funds Report. Where obligations differed from the comparison sources by 10 percent or more, we contacted program officials to confirm reported data. Finally, in March and April 2012, we collected some additional data from agencies and verified select data collected during the initial survey. Because we updated selected data and the list of programs included in our analyses, some data in our analyses have changed since our February 2012 report. On the basis of our application of recognized survey design practices and follow-up procedures, we determined that the data were sufficiently reliable for our purposes. We did not conduct an independent legal analysis to verify the program information provided by survey respondents. We have defined fragmentation as circumstances in which more than one federal agency is involved in the same broad area of national need. To further expand on this definition, we used the survey responses to identify the number of programs created in statute, variability in definitions of disability, and the ways in which programs deliver supports (e.g., directly to individuals or through federal, state, or local entities).
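The 10 percent obligations cross-check described above can be sketched as follows; the program names and dollar figures are hypothetical.

```python
# Flag programs whose reported obligations differ from a comparison source
# (a prior survey or the Consolidated Federal Funds Report) by 10 percent or
# more, so program officials can be contacted to confirm the reported data.

def needs_followup(reported, comparison, threshold=0.10):
    """True if the relative difference (against the comparison source) meets the threshold."""
    return abs(reported - comparison) / comparison >= threshold

# Hypothetical (reported, comparison) obligations figures in dollars.
obligations = {
    "Program A": (1_000_000, 1_050_000),  # about 5 percent apart: accept as reported
    "Program B": (1_000_000, 1_250_000),  # 20 percent apart: contact officials
}

flagged = [name for name, (rep, comp) in obligations.items()
           if needs_followup(rep, comp)]
print(flagged)  # → ['Program B']
```

Whether the denominator should be the reported or the comparison figure is a judgment call not specified in the text; the comparison source is used here.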
We have defined overlap to be instances where programs provide similar services to similar populations. To identify areas of potential overlap among programs that support employment for people with disabilities, we reviewed survey responses from agency officials. We analyzed responses to survey questions regarding any limitations in eligibility based on populations or disability. We sorted the 45 programs into 4 groups:

1. Programs that limit eligibility to servicemembers and veterans (19 programs).
2. Programs that limit eligibility to students, transition-age youth, and/or young adults (5 programs).
3. Programs that limit eligibility to other populations or specific types of disabilities (14 programs).
4. Programs that serve all people with disabilities (7 programs).

We have defined duplication as instances where the same beneficiaries receive the same or similar services. Although fragmentation and overlap may indicate the potential for duplication, we did not identify actual duplication in programs that provide employment support to people with disabilities because (1) due to data limitations, we did not attempt intensive data matching across our universe of programs to identify instances where programs were providing the same or similar services to the same beneficiaries, and (2) programs do not consistently collect information on beneficiaries. Instead, we examined the potential for duplication by more closely examining the reported eligibility requirements among programs in our four groups. We asked survey respondents whether their program coordinated with each of the other programs in our scope to reduce duplication and gaps in services. While we surveyed federal program officials, our survey question did not specify whether we were requesting information on coordination at the federal level or at the state and local level where services and supports may be delivered. We analyzed the data to identify mismatches in reported coordination.
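The mismatch analysis described above can be sketched as follows; the program names and reported pairs are hypothetical stand-ins for the survey responses.

```python
# Each program reports the set of programs it coordinates with; a mismatch is
# a pair where program A reported coordinating with program B, but program B
# did not report coordinating with program A.

reported = {
    "VR": {"Ticket to Work", "Assistive Technology"},
    "Ticket to Work": {"VR"},
    "Assistive Technology": set(),  # did not report coordinating back
}

def coordination_mismatches(reported):
    """Return (a, b) pairs where a reported b but b did not report a."""
    mismatches = []
    for a, partners in reported.items():
        for b in partners:
            if a not in reported.get(b, set()):
                mismatches.append((a, b))
    return sorted(mismatches)

print(coordination_mismatches(reported))  # → [('VR', 'Assistive Technology')]
```

A mismatch does not necessarily mean no coordination occurs, since, as noted above, coordination may take place at the state and local levels rather than between the federal officials surveyed.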
For example, we identified cases in which program A reported that it coordinated with program B, but program B did not report coordinating with program A. To further understand the nature and challenges of coordination, we selected a subgroup of programs for more detailed discussions. Specifically, we selected six programs—administered by three different agencies—that serve only people with disabilities:

Assistive Technology State Grant program (Department of Education)
Disability Employment Initiative (Department of Labor; a new program that replaced the Work Incentive Grants)
State Vocational Rehabilitation Cost Reimbursement Program (Social Security Administration (SSA))
Ticket to Work program (SSA)
State Vocational Rehabilitation Services program (VR) (Education)
Work Incentives Planning and Assistance program (SSA)

We interviewed agency representatives for each program regarding their coordination efforts, challenges to coordination, and factors that facilitate or create barriers to coordination. We asked all survey respondents to provide information on any performance evaluations completed since 2006 related to employment for people with disabilities for their program and to provide citations to those studies. We selected 2006 because studies conducted in the past 5 years were most likely to include services still offered by each program and to be relevant to the employment market participants currently face. We reviewed the findings and conclusions of a total of 14 studies identified by programs in our scope to determine the elements of program performance that had been evaluated. Although programs identified a total of 18 studies, 1 was not publicly available and 2 programs had identified a GAO report published prior to 2006.
In addition, we evaluated the methodology sections of 11 of those studies, which were identified as impact studies by 7 programs, to determine if they met our criteria for an impact study—that is, the study provides an assessment of the net effect of a program by comparing program outcomes with an estimate of what would have happened in the absence of the program. We conducted this performance audit from April 2011 through June 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Details on status: The program, authorized for 11 years, expired at the close of fiscal year 2011.

Details on status: The study to test how better access to treatment and employment services would affect outcomes such as medical recovery, employment, and benefit receipt for Social Security Disability Insurance ended field operations on July 31, 2010. According to the agency’s budget justification for fiscal year 2013, the project’s funding continued and, going forward, the agency will focus on best practices in services to individuals with schizophrenia and affective disorder and track employment and benefit payments.

Details on status: This program, administered by the National Organization on Disability, provided services to wounded, ill, and injured U.S. Army personnel through a memorandum of agreement, which ended in October 2010. The program was eliminated in the fiscal year 2011 appropriation.

Details on status: The Work Incentives Grant funded the Disability Program Navigator Initiative. Both programs ended effective June 30, 2010.
A new program, the Disability Employment Initiative, was informed by the best practices learned through the Work Incentive Grant/Disability Program Navigator initiative.

Employment counseling, assessment, and case management
Assistance in earning a high school diploma or its equivalent
Job development
Job readiness skills
Job recruitment and referrals
Job retention training
Job search or placement activities
On-the-job training
Remedial academic, English language skills, or adult literacy
Work experience
Employment-related information dissemination
Entrepreneurship training and support
Support services to employers of people with disabilities
Assistive technology and workplace accommodation

The program suspended services at the end of calendar year 2011. A 36-month evaluation of the impact of the program is scheduled to follow. Funding was approximately $1.35 million in fiscal years 2012 and 2013. This program enrolled participants on a rolling basis between 2006 and 2009 and served 1,121 individuals over the course of the project.

Department of Agriculture Assistive Technology Program for Farmers with Disabilities: AgrAbility Project
Army Warrior Care and Transition Program
Marine Corps Wounded Warrior Regiment
Marine Corps Wounded Warrior Intern Program
Recovery Coordination Program – Operation Warfighter (Internships)
Local Veterans’ Employment Representatives program
Registered Apprenticeship for Youth and Young Adults with Disabilities Initiative
Community Service Employment for Older Americans/Senior Community Service Employment Program
Veterans’ Workforce Investment Program
Work Opportunity Tax Credit (joint with the Internal Revenue Service)
Workforce Investment Act Youth Activities
Workforce Recruitment Program (joint with the Department of Defense)

Some programs were not able to identify obligations related to providing employment supports to people with disabilities.
A significant portion of these obligations ($578,500,000) was for constructing healing campuses for wounded, ill, and injured soldiers.
This program was proposed to be consolidated or eliminated in Education’s fiscal year 2012 budget request, but the department reported that funds were appropriated in fiscal year 2012. The program was also proposed to be consolidated or eliminated in Education’s fiscal year 2013 budget request.
In addition, Education officials noted that the Supported Employment State Grants program provides supplemental funding to help state VR agencies cover the costs of supported employment services for individuals participating in the State Vocational Rehabilitation Services program.
“No response” indicates that the program did not respond to this survey question or otherwise provide this information.
Total program obligations for Job Corps include $102 million in American Recovery and Reinvestment Act obligations.
Data on participants are from program year 2009 and are based on self-disclosed and readily observable disabilities.
Data are from May 2008 to October 2010. These data include the number of participants contacted and services provided, including to spouses and caregivers.
This program was proposed to be transferred to the Department of Health and Human Services in Labor’s fiscal year 2013 budget request.
Data are from July 2010 to June 2011.
This program was proposed to be consolidated or eliminated in Labor’s fiscal year 2013 budget request.
Data are from July 2009 to June 2010.
Data are from April 2009 to March 2010.
Program reported appropriated funds instead of obligations.
According to program officials, this program is expected to end by the end of fiscal year 2012.
This number represents the number of people who achieved the outcome of 9 months of working and earning at the substantial gainful activity level.
As of October 25, 2010, the State Vocational Rehabilitation Cost Reimbursement Program had served 235,346 people.
In addition, an SSA official noted that these participants may also be counted as participants in Education’s State Vocational Rehabilitation Services program. The purpose of the Javits-Wagner-O’Day Act is to generate employment and training opportunities for people who are blind or have other severe disabilities in the manufacture and delivery of products and services to the federal government. The law requires federal agencies to procure certain products and services that are produced and provided by community-based nonprofit agencies dedicated to training and employing persons who are blind or have other severe disabilities. The primary requirement for a nonprofit agency to participate in the AbilityOne Program is that, on an annual basis, 75 percent of all direct labor done at the nonprofit agency be performed by people who are blind or severely disabled. The term “blind” refers to an individual or class of individuals whose central visual acuity does not exceed 20/200 in the better eye with correcting lenses, or whose visual acuity, if better than 20/200, is accompanied by a limit to the field of vision in the better eye to such a degree that its widest diameter subtends an angle of no greater than 20 degrees. The term “severely disabled” refers to an individual or class of individuals with a severe physical or mental impairment other than blindness that so limits the person’s functional capabilities (mobility, communication, self-care, self-direction, work tolerance, or work skills) that the individual is unable to engage in normal competitive employment over an extended period of time. AgrAbility increases the likelihood that individuals with disabilities and their families engaged in production agriculture (AgrAbility’s customers) become more successful.
The program supports cooperative projects in which State Cooperative Extension Services based at either 1862 or 1890 Land-Grant Universities or the University of the District of Columbia subcontract to private, nonprofit disability organizations. Measures of success may include improvements in customers’ financial stability or access to life activities, and in the capacity of states and regions to deliver the services this population requires in a timely and satisfying manner. To address the specialized needs of AgrAbility’s customers, the program builds service capacity at the national, regional, state, and local levels through education and networking. In the absence of such capacity, projects provide assistance to customers directly. The primary function of the National AgrAbility Project is to support the state and regional projects in developing their capacity to meet these objectives. The program focuses on accommodating disability in production agriculture. It provides education and awareness to the public and to agricultural and rehabilitation professionals on what can be done to accommodate disability in the agricultural workplace. The program does not fund rehabilitation equipment or payments to individuals. Develop programs to help identify all airmen who need assistance; continue to build a mentorship program to aid and benefit recovering airmen; employ dedicated, qualified staff as Recovery Care Coordinators and provide them with the tools and support needed to be successful; and provide comprehensive policy for Mortuary, Casualty Affairs, Wounded Warrior, and Recovery Care Coordinator programs. The Air Force Warrior and Survivor Care programs are made available to all Active Duty, Air National Guard, and Air Force Reserve members and their families to provide support in the event an airman is seriously wounded, ill, or injured while serving. The level of support and benefit assistance provided depends solely on need and is provided throughout the continuum of care.
If injuries or illness warrant, airmen may enter the program at any time by self-referral or be referred by commanders, spouses, supervisors, or medical personnel. The Army’s Warrior Care and Transition Program is an Army-wide structure that provides support and services for wounded, ill, and injured soldiers. The program enables the Army to evaluate and treat soldiers through a comprehensive, soldier-centric process of medical care, rehabilitation, professional development, and achievement of personal goals. The Warrior Care and Transition Program is available to soldiers of all components (active and reserve components/active guard and reserve) on active duty who require complex medical care management of 6 months or longer duration. Additionally, eligibility is extended to activated reserve component soldiers requiring definitive medical care who have been approved by a Medical Review Board. The Computer/Electronic Accommodations Program (CAP) is a program in the TRICARE Management Activity, under the direction of the Assistant Secretary of Defense for Health Affairs. CAP’s mission is to provide assistive technology and accommodation services to federal employees with disabilities and wounded servicemembers to increase access to the information environment and to employment opportunities in the federal government. To be eligible for CAP services, an individual must be a federal employee with a disability in the Department of Defense, an employee with a disability at one of CAP’s federal partner agencies, or an active duty wounded or ill servicemember. The Marine Corps program’s objective is to maintain a high level of coordination through a single command structure that delivers or facilitates delivery of nonmedical care to wounded, ill, or injured Marines and their families. Participants must be active duty, reserve, retired, or veteran wounded, ill, or injured Marines. The Marine Corps Wounded Warrior Intern Program provides job skills and training to wounded, ill, and injured Marines pending medical separation and is open to all wounded, ill, and injured Marines. 
Spouses and caregivers of active duty wounded, ill, and injured Marines are eligible for some of the program benefits. Beneficiary eligibility requirements All seriously wounded, ill, or injured sailors and Coast Guardsmen not likely to return to duty in 180 days and likely to be medically retired or separated; and high-risk, nonseriously wounded, ill, or injured sailors, Coast Guardsmen, and their families (case-by-case). 3. Provide focused outreach opportunities for wounded, ill, or injured sailors’ and Coast Guardsmen’s family members, caregivers, and their support network. 4. Increase awareness of Navy Safe Harbor. 5. Align resources to effectively and efficiently execute the Safe Harbor mission. 6. Attract and retain sustained superior performers with demonstrated leadership expertise to the Safe Harbor staff. The Recovery Care Coordinators (RCC) assist in the creation and management of the Comprehensive Recovery Plans for Wounded Warriors—Army, Marines, Navy, Air Force, and U.S. Special Operations Command—until they are either returned to duty or separated from the service due to the extent of their injuries. The RCCs act as the single point of contact for the Wounded Warriors and their families as they receive care from multidisciplinary support teams, both medical and nonmedical, in helping them obtain required treatment, care, and family assistance. Populations served by the RCCs are wounded, ill, and injured warriors and their families: troop program unit, Active Guard Reserve, Individual Mobilization Augmentee, Individual Ready Reserve, retirees, and veterans. Beneficiary eligibility requirements Eligibility criteria require participants to be servicemembers in an active duty status assigned to a military treatment facility or a service wounded warrior program. If not assigned to one of these programs, servicemembers can still be eligible if they are going through the medical evaluation board and their chain of command approves their participation. 
Program objectives Operation Warfighter (OWF) is a federal internship program for wounded, ill, and injured servicemembers. The main objective of OWF is to place servicemembers in supportive work settings that positively impact their recuperation. The program represents an opportunity for transitioning servicemembers to augment their employment readiness by building their resumes, exploring employment interests, developing job skills, benefiting from both formal and on-the-job training opportunities, and gaining valuable federal government work experience that will help prepare them for the future. Operation Warfighter strives to demonstrate to participants that the skills they have obtained in the military are transferable to civilian employment. For servicemembers returning to duty, the program enables these participants to maintain their skill sets and provides the opportunity for additional training and experience that can subsequently benefit the military. Operation Warfighter simultaneously enables federal employers to better familiarize themselves with the skill sets and challenges of wounded, ill, and injured servicemembers as well as benefit from the considerable talent and dedication of these servicemembers. To provide Special Operations Forces warriors and their families a model advocacy program in order to enhance their quality of life and strengthen the overall readiness of Special Operations. To provide vocational rehabilitation services to American Indians with disabilities who reside on or near federal or state reservations in order to achieve gainful employment. American Indians with disabilities (including Native Alaskans) residing on or near a federal or state reservation who meet the definition of an individual with a disability in Section 7(8)(A) of the Rehabilitation Act. 
To provide states with financial assistance that supports programs designed to maximize the ability of individuals of all ages with disabilities and their family members, guardians, advocates, and authorized representatives to obtain assistive technology devices and assistive technology services. Individuals with disabilities. Program objectives Authorized by an Act of Congress in 1967, the Helen Keller National Center for Deaf-Blind Youths and Adults is a national rehabilitation program serving youth and adults who are deaf-blind. The purposes of the Center are to (1) provide specialized intensive services, or any other services, at the Center or anywhere else in the United States, which are necessary to encourage the maximum personal development of any individual who is deaf-blind; (2) train family members of individuals who are deaf-blind at the Center or anywhere else in the United States, in order to assist family members in providing and obtaining appropriate services for the individual who is deaf-blind; (3) train professionals and allied personnel at the Center or anywhere else in the United States to provide services to individuals who are deaf-blind; and (4) conduct applied research, development programs, and demonstrations with respect to communication techniques, teaching methods, aids and devices, and delivery of services. Beneficiary eligibility requirements The Helen Keller National Center for Deaf-Blind Youths and Adults provides services on a national basis to adults who are deaf-blind, their families, and service providers. 
The term “individual who is deaf-blind” means any individual: (A)(i) who has a central visual acuity of 20/200 or less in the better eye with corrective lenses, or a field defect such that the peripheral diameter of visual field subtends an angular distance no greater than 20 degrees, or a progressive visual loss having a prognosis leading to one or both of these conditions; (ii) who has a chronic hearing impairment so severe that most speech cannot be understood with optimum amplification, or a progressive hearing loss having a prognosis leading to this condition; and (iii) for whom the combination of impairments described in clauses (i) and (ii) causes extreme difficulty in attaining independence in daily life activities, achieving psychosocial adjustment, or obtaining a vocation; (B) who, despite the inability to be measured accurately for hearing and vision loss due to cognitive or behavioral constraints, or both, can be determined through functional and performance assessment to have severe hearing and visual disabilities that cause extreme difficulty in attaining independence in daily life activities, achieving psychosocial adjustment, or obtaining vocational objectives; or (C) who meets such other requirements as the Secretary may prescribe by regulation. The Migrant and Seasonal Farmworkers program provides comprehensive vocational rehabilitation (VR) services to migrant and seasonal farmworkers with disabilities with the goal of increasing employment opportunities for these individuals. Projects also develop innovative methods for reaching and serving this population. Projects are required to coordinate with the VR State Grants program. Individuals with disabilities and individuals with significant disabilities as defined in Sections 7(9)(A) and (B) and 7(20)(A), respectively, of the Rehabilitation Act of 1973, as amended. Create or expand model comprehensive transition and postsecondary programs for students with intellectual disabilities. 
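The statutory deaf-blind definition above is a disjunction of three routes, where route (A) requires all three of its clauses to hold. Its logical skeleton can be sketched as follows (an illustrative rendering only; the flag names are ours, not statutory terms):

```python
def is_deaf_blind(meets_visual_clause, meets_hearing_clause,
                  combination_causes_extreme_difficulty,
                  functional_assessment_route=False,
                  secretary_regulation_route=False):
    """Sketch of the definition's structure: clause (A) requires
    (i) visual, (ii) hearing, and (iii) combined-effect criteria all
    to be met; clause (B) (functional and performance assessment) and
    clause (C) (other requirements prescribed by the Secretary) are
    alternative routes to meeting the definition."""
    clause_a = (meets_visual_clause and meets_hearing_clause
                and combination_causes_extreme_difficulty)
    return clause_a or functional_assessment_route or secretary_regulation_route
```

The sketch makes visible that failing any one of the three (A) clauses is not disqualifying by itself, since routes (B) and (C) remain available.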
Funds also support a coordinating center that provides related services. Students with intellectual disabilities enrolled in model programs or other comprehensive transition and postsecondary programs. Program objectives According to the statute creating the Randolph-Sheppard program, the objectives of this initiative include providing blind persons with remunerative employment, enlarging the economic opportunities of the blind, and stimulating the blind to greater efforts in striving to make themselves self-supporting. Toward this end, blind persons licensed under the provisions of the act are authorized to operate vending facilities on any federal property. In plain terms, the program’s objective is to create employment opportunities by establishing a priority for eligible blind people to operate entrepreneurial ventures on federal and other designated state properties. Throughout the country, states have created similar priorities to mirror the federal program in their specific jurisdictions. To provide financial assistance to projects and demonstrations for expanding and improving the provision of rehabilitation and other services authorized under the Rehabilitation Act, or that further the purposes of the act, including related research and evaluation activities. Beneficiary eligibility requirements To be eligible for this program, participants must be blind, citizens of the United States, and meet any other specific standards that individual state licensing agencies may require. Individuals with disabilities. To assist states in operating comprehensive, coordinated, effective, efficient, and accountable programs of vocational rehabilitation; to assess, plan, develop, and provide vocational rehabilitation services for individuals with disabilities, consistent with their strengths, resources, priorities, concerns, abilities, and capabilities so they may prepare for and engage in competitive employment. 
Eligibility for vocational rehabilitation services is based on the presence of a physical or mental impairment that, for the individual, constitutes or results in a substantial impediment to employment, and the need for vocational rehabilitation services that may be expected to benefit the individual in terms of an employment outcome. To provide grants for time-limited services leading to supported employment for individuals with the most severe disabilities to enable such individuals to achieve the employment outcome of supported employment. Individuals with the most severe disabilities whose ability or potential to engage in a training program leading to supported employment has been determined by evaluating rehabilitation potential. In addition, individuals must need extended services in order to perform competitive work and have the ability to work in a supported employment setting. The Medicaid Home and Community-Based Services waiver program is authorized in section 1915(c) of the Social Security Act. The program permits a state to furnish an array of home and community-based services that assist Medicaid beneficiaries to live in the community and avoid institutionalization. In order to participate in a waiver, a person must meet the level of care specified for the waiver and also be a member of a Medicaid eligibility group that a state includes in the waiver. A state may include a Medicaid eligibility group in the waiver only when it includes the same group in its state plan. Program objectives Section 1915(i) of the Social Security Act allows states the option to add home and community-based services to their Medicaid State Plans. 
Beneficiary eligibility requirements Financial eligibility requirements: (1) eligible for Medicaid and have income up to 150 percent of the federal poverty level, and (2) at the state’s option, eligible through a new Medicaid eligibility category at income up to 300 percent of the Supplemental Security Income Federal Benefit Rate and eligible for a 1915(c) waiver or 1115 demonstration program in their state. Other eligibility requirements: (1) meet state-defined needs-based criteria, and (2) at the state’s option, meet the state-specified targeted population group for the 1915(i) benefit. To provide parameters for the coverage of services authorized by the Social Security Act to be part of the Medicaid State Plan. Each state plan service contains, either in statute or regulation, parameters for service provision, including any functional eligibility requirements to be met. The Money Follows the Person (MFP) Rebalancing Demonstration, authorized by section 6071 of the Deficit Reduction Act of 2005 (Pub. Law No. 109-171), was designed to assist states in balancing their long-term care systems and helping Medicaid enrollees transition from institutions to the community. Congress initially authorized up to $1.75 billion in funds through federal fiscal year 2011. With the subsequent passage of the Patient Protection and Affordable Care Act (Pub. Law No. 111-148) in 2010, section 2403 extended the program through September 30, 2016. An additional $2.25 billion in federal funds was appropriated through federal fiscal year 2016. The MFP Demonstration supports state efforts to rebalance their long-term support systems so that individuals have a choice of where they live and receive services. Its objective is to transition individuals who want to live in the community out of institutions. 
As defined in Section 6071(b)(2) of the DRA and amended by Section 2403 of the Affordable Care Act, the term “eligible individual” means an individual in the state who, immediately before beginning participation in the MFP demonstration project: (1) resides (and has resided, for a period of not less than 90 consecutive days) in a qualified institution or inpatient setting (excluding days solely for the purpose of short-term rehabilitation services); (2) is receiving Medicaid benefits for services furnished by such qualified institution or inpatient setting; and (3) with respect to whom a determination has been made that, but for the provision of home and community-based long-term care services, the individual would continue to require the level of care provided in a qualified institution or inpatient setting. America’s Heroes at Work is a Department of Labor project that addresses the employment challenges of returning servicemembers living with Traumatic Brain Injury (TBI) or Post-Traumatic Stress Disorder (PTSD)—an important focus of the President’s veterans agenda. The project equips employers and the workforce development system with the tools they need to help returning servicemembers affected by TBI and/or PTSD succeed in the workplace—particularly servicemembers returning from Iraq and Afghanistan. No eligibility requirements. Support services (website, toll-free assistance, and presentations) are available to any group or individual that requests them. Program objectives The purpose of the Senior Community Service Employment Program (SCSEP) program is to provide, foster, and promote useful part-time work opportunities (usually 20 hours per week) in community service training and employment activities for unemployed low-income persons who are 55 years of age and older. To the extent feasible, SCSEP assists and promotes the transition of program participants into unsubsidized employment. 
Beneficiary eligibility requirements Unemployed persons 55 years or older whose family is low-income (i.e., income does not exceed the low-income standards defined in 20 CFR section 641.507) are eligible for enrollment (20 CFR section 641.500). Low-income means a family income that, at the option of the grantee, is measured either during the preceding 6 months on an annualized basis or as the actual income during the preceding 12 months, and is not more than 125 percent of the poverty levels established and periodically updated by the Department of Health and Human Services (42 USC 3056p(a)(4)). The poverty guidelines are issued each year in the Federal Register, and the Department of Health and Human Services maintains a page on the Internet that provides the poverty guidelines (http://www.aspe.hhs.gov/poverty/index.shtml, accessed May 30, 2012). Enrollee eligibility is redetermined on an annual basis (20 CFR section 641.505). To provide intensive services to meet the employment needs of disabled and other eligible veterans, with maximum emphasis on meeting the employment needs of those who are economically or educationally disadvantaged, including homeless veterans and veterans with barriers to employment. Eligible veterans and eligible persons, with emphasis on Special Disabled veterans, disabled veterans, economically or educationally disadvantaged veterans, and veterans with other barriers to employment. EARN is a resource for employers seeking to recruit, retain, and advance individuals with disabilities (www.askearn.org, accessed May 30, 2012). Any public or private employer seeking to advance the employment of individuals with disabilities is eligible for services. The mission of JAN is to facilitate the employment and retention of workers with disabilities by providing employers, employment providers, people with disabilities, their family members, and other interested parties with technical assistance on job accommodations, entrepreneurship, and related subjects. 
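The SCSEP low-income determination described above reduces to a threshold comparison: annualized family income (6-month annualized or 12-month actual, at the grantee's option) against 125 percent of the applicable poverty guideline. A minimal illustrative sketch, with a hypothetical function name and sample guideline figures that are not actual HHS values:

```python
def scsep_income_eligible(age, income_6mo, income_12mo, family_size,
                          poverty_guidelines, use_6mo=True):
    """Illustrative sketch of the SCSEP low-income test (20 CFR 641.500).

    At the grantee's option, family income is either the preceding
    6 months annualized (doubled) or the actual income of the preceding
    12 months; it must not exceed 125 percent of the poverty guideline
    for the family's size, and the enrollee must be 55 or older.
    poverty_guidelines maps family size to an annual guideline amount.
    """
    if age < 55:
        return False
    annual_income = income_6mo * 2 if use_6mo else income_12mo
    return annual_income <= 1.25 * poverty_guidelines[family_size]

# Hypothetical guideline table for illustration only (not HHS figures).
guidelines = {1: 11000, 2: 15000}
print(scsep_income_eligible(60, 6500, 14000, 1, guidelines))  # 13000 <= 13750: True
```

Because the grantee chooses the measurement window, the same applicant can be eligible under one option and not the other, which the `use_6mo` flag makes explicit.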
JAN’s efforts are in support of the employment, including self-employment and small business ownership, of people with disabilities. This free, confidential technical assistance is provided in English and Spanish via telephone, email, chat, and postal mail. JAN services are available to all interested parties. Program objectives Job Corps is the nation’s largest federally funded training program for at-risk youth ages 16-24; it provides academic instruction leading to a high school diploma or General Educational Development (GED) certificate, along with career training in high-growth, high-demand industries. Upon exit from the program, participants receive transition assistance to employment, higher education, or the military. The program is primarily residential, serving nearly 60,000 students at 125 centers nationwide. Beneficiary eligibility requirements To be eligible to become an enrollee, an individual shall be: (1) not less than age 16 and not more than age 21 on the date of enrollment, except that (a) not more than 20 percent of the individuals enrolled in the Job Corps may be not less than age 22 and not more than age 24 on the date of enrollment, and (b) either such maximum age limitation may be waived by the Secretary, in accordance with regulations of the Secretary, in the case of an individual with a disability; (2) a low-income individual; and (3) an individual who is one or more of the following: (a) basic skills deficient; (b) a school dropout; (c) homeless, a runaway, or a foster child; (d) a parent; (e) an individual who requires additional education, vocational training, or intensive counseling and related assistance in order to participate successfully in regular schoolwork or to secure and hold employment. Eligible veterans and eligible persons. 
Conduct outreach to employers and provide seminars that advocate the hiring of veterans; facilitate Transition Assistance Program employment workshops for transitioning servicemembers; establish and conduct job search workshops; and facilitate employment, training, and placement services furnished to veterans in a state under the applicable state employment service or one-stop career center delivery systems, whose sole purpose is to assist veterans in gaining and retaining employment. The objective of REALifelines is to support the economic recovery of transitioning servicemembers and veterans who were wounded or injured while serving in Operation Iraqi Freedom or Operation Enduring Freedom. The transition services are extended to spouses and caregivers through the resources at the one-stop career centers. Intensive services are provided by Disabled Veterans Outreach Program specialists. The specialized services are intended to identify and address any barriers to employment. Transitioning servicemembers and veterans who were wounded or injured while serving in Operation Iraqi Freedom or Operation Enduring Freedom. Youth and young adults with disabilities ages 16 through 27. To provide services to assist in reintegrating eligible veterans into meaningful employment within the labor force and to stimulate the development of effective service delivery systems that will address the complex problems facing eligible veterans. Service-connected disabled veterans, veterans who have significant barriers to employment, veterans who served on active duty in the armed forces during a war or in a campaign or expedition for which a campaign badge has been authorized, and veterans who recently separated from military service (within the previous 48 months). 
Program objectives To help low-income youth between the ages of 14 and 21 acquire the educational and occupational skills, training, and support needed to achieve academic and employment success and successfully transition to careers and productive adulthood. Beneficiary eligibility requirements An eligible youth is an individual who: (1) is 14 to 21 years of age; (2) received an income, or is a member of a family that received a total family income, that, in relation to family size, does not exceed the higher of (a) the poverty line or (b) 70 percent of the lower living standard income level; and (3) meets one or more of the following criteria: is deficient in basic literacy skills, a school dropout, homeless, a runaway, a foster child, pregnant or a parent, an offender, or requires additional assistance to complete his or her education or secure and hold employment. There is an exception that permits youth who are not low-income individuals to receive youth services. Up to 5 percent of youth participants served by youth programs in a local area may be individuals who do not meet the income criterion for eligible youth, provided that they fall within one or more of the following categories: school dropouts; individuals who are basic skills deficient; individuals one or more grade levels below the grade level appropriate to their age; pregnant or parenting youth; individuals with one or more disabilities, including learning disabilities; homeless or runaway youth; offenders; or individuals who face serious barriers to employment as identified by the local board. The Work Opportunity Tax Credit was designed to help individuals from 12 target groups who consistently have faced significant barriers to employment move from economic dependency to self-sufficiency. It does so by encouraging private sector businesses to hire target group members; employers that do so are eligible to claim tax credits based on the wages they paid to the new hires during the first year of employment, up to a dollar wage limit. 
All employers seeking Work Opportunity Tax Credit target group workers and target group members seeking employment. The members of the different target groups have statutory definitions (per Pub. Law No. 109-188, as amended) with specific eligibility requirements that must be verified by the state workforce agencies before a certification can be issued to an employer or the employer’s representatives. Participating employers and their representatives must file their certification requests using Internal Revenue Service Form 8850 and ETA Form 9061 or 9062 within 28 days after the employment-start day of the new hires. This timeliness requirement cannot be waived and must be met before a state can issue a certification for eligible target group members. The Workforce Recruitment Program is a recruitment and referral program that connects federal and private sector employers nationwide with highly motivated postsecondary students and recent graduates with disabilities who are eager to prove their abilities in the workplace through summer or permanent jobs. The Workforce Recruitment Program for College Students with Disabilities serves U.S. citizens who have disabilities and who (1) are enrolled at an accredited institution of higher learning on a substantially full-time basis (unless the severity of the disability precludes the student from taking a substantially full-time load) to seek a degree; (2) are enrolled at such an institution as a degree-seeking student taking less than a substantially full-time load in the enrollment period immediately prior to graduation; or (3) have graduated with a degree from such an institution within the past year. Beneficiary eligibility requirements Youth ages 16 through 24 who are members of a disadvantaged population, such as: low-income youth, youth in foster care (including youth aging out of foster care), youth offenders, youth with a disability, children of an incarcerated parent, high school dropouts, or migrant youth. 
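The 28-day Work Opportunity Tax Credit filing window described above is a hard deadline that cannot be waived. A minimal sketch of the date arithmetic (the function name is illustrative; only the 28-day rule itself comes from the program description):

```python
from datetime import date, timedelta

# WOTC certification requests (IRS Form 8850 with ETA Form 9061 or 9062)
# must be filed within 28 days after the new hire's employment-start day.
FILING_WINDOW = timedelta(days=28)

def wotc_request_timely(employment_start: date, filing_date: date) -> bool:
    """Return True if the certification request meets the 28-day rule."""
    return filing_date <= employment_start + FILING_WINDOW

start = date(2012, 5, 1)
print(wotc_request_timely(start, date(2012, 5, 29)))  # day 28: True
print(wotc_request_timely(start, date(2012, 5, 30)))  # day 29: False
```

The comparison is inclusive of the 28th day after the start date; a state workforce agency could not issue a certification for a request failing this check.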
Compensated Work Therapy is a recovery-oriented vocational model integrated into the continuum of the Veterans Health Administration’s services, as authorized by 38 USC § 1718. Department of Veterans Affairs (VA) medical centers offer Compensated Work Therapy with both Transitional Work Experience and Supported Employment services for veterans with occupational dysfunctions resulting from their mental health conditions, or who are unsuccessful at obtaining or maintaining stable employment patterns due to mental illnesses or physical impairments co-occurring with mental illnesses. The scope of Therapeutic and Supported Employment Services (TSES) includes skill development opportunities both for veterans for whom the primary objective is competitive employment and for veterans in need of therapeutic pre-employment services designed to ameliorate the consequences of long-standing mental health problems, alone or with co-occurring physical illness. The mission of TSES is to improve the veteran’s overall quality of life through a vocational rehabilitation experience in which the veteran learns new job skills, strengthens successful work habits, and regains a sense of self-esteem and self-worth. The vision of TSES is that all veterans challenged with physical or mental illness can obtain meaningful competitive employment in the community, working in jobs of their choice, while receiving necessary and appropriate support services. The goal of TSES is to provide a continuum of therapeutic and skill development services for veterans who have difficulty obtaining or maintaining stable employment patterns due to mental illnesses or physical impairments co-occurring with mental illnesses. 
The objectives of TSES are to provide an opportunity for work hardening and skill development services to eligible veterans regardless of diagnosis, disability, or treatment goals; collaborate with veterans and their primary treatment team to assure each veteran has the support necessary to achieve his or her vocational goals; and ensure access to all components in the continuum of TSES services as the veteran’s needs change over the course of treatment, rehabilitation, and recovery. A person who served in the active military, naval, or air service and who was discharged or released under conditions other than dishonorable may qualify for VA health care benefits. Reservists and National Guard members may also qualify for VA health care benefits if they were called to active duty (other than for training only) by a federal order and completed the full period for which they were called or ordered to active duty. Program objectives DTAP is an integral component of transition assistance that involves intervening on behalf of servicemembers who may be released because of a disability or who believe they have a disability qualifying them for VA’s Vocational Rehabilitation and Employment (VR&E) program. The goal of DTAP is to encourage and assist potentially eligible servicemembers in making an informed decision about VA’s VR&E program. It is also intended to facilitate the expeditious delivery of vocational rehabilitation services to eligible persons by assisting them in filing an application for vocational rehabilitation benefits. Beneficiary eligibility requirements Servicemembers who may be released because of injuries or diseases that occurred while on active duty, or were made worse by active military service. 
To provide all services and assistance necessary to enable service-disabled veterans and servicepersons hospitalized or receiving outpatient medical care services or treatment for a service-connected disability pending discharge to gain and maintain suitable employment. When employment is not reasonably feasible, the program can provide the services and assistance needed to help the individual achieve maximum independence in daily living. The VR&E program is a comprehensive vocational rehabilitation program that provides up to 48 months of extensive services leading to employment. Every VR&E program participant is provided a comprehensive vocational evaluation to determine transferable skills, aptitudes, and interests; explore labor market and wage information; and focus on vocational options that will lead to suitable employment or an independent living goal. Results from the evaluation help determine which of the five tracks to success (Reemployment, Rapid Access to Employment, Self Employment, Employment Through Long-term Services, or Independent Living) is most appropriate. Depending on the rehabilitation needs of the individual, services may include training such as on-the-job training, vocational/technical school, or college-level education (certificate, 2-year degree, 4-year degree, or beyond). VR&E pays for tuition, fees, books, and supplies associated with training, as well as a monthly subsistence allowance. Veterans of World War II and later service with a service-connected disability or disabilities rated at least 20 percent compensable, and certain service-disabled servicepersons pending discharge or release from service if VA determines the servicepersons will likely receive at least a 20 percent rating and they need vocational rehabilitation because of an employment handicap. Veterans with compensable ratings of 10 percent may also be eligible if they are found to have a serious employment handicap. 
To receive an evaluation for vocational rehabilitation services, a veteran must have received, or eventually receive, an honorable or other than dishonorable discharge, have a VA service-connected disability rating of 10 percent or more, and apply for vocational rehabilitation services. Program objectives To provide vocational training and rehabilitation to certain children born with spina bifida or other covered birth defects who are children of Vietnam veterans and some Korean conflict veterans. Beneficiary eligibility requirements A child born with spina bifida or other covered birth defects, except spina bifida occulta, who is the natural child of a Vietnam veteran or of certain Korean conflict veterans, regardless of the age or marital status of the child, and who was conceived after the date on which the veteran first served in the Republic of Vietnam during the Vietnam era or in particular areas near the demilitarized zone (DMZ) during the Korean conflict. VA must also determine that it is feasible for the child to achieve a vocational goal. To comply with the Ticket to Work and Work Incentives Improvement Act, which was passed in December 1999 and reauthorized by the Social Security Protection Act of 2004, and which requires SSA to establish a community-based work incentives planning and assistance program. The purpose of this program is to disseminate accurate information to SSA disability beneficiaries (including transition-to-work aged youth) about work incentives programs and issues related to such programs, to enable them to make informed choices about working and whether or when to assign their Ticket to Work, as well as how available work incentives can facilitate their transition into the workforce. The ultimate goal of the Work Incentives Planning and Assistance program is to help SSA disability beneficiaries succeed in their return-to-work efforts. 
All individuals within the state who are entitled to Social Security Disability Insurance benefits or eligible for Supplemental Security Income payments based on disability or blindness. Makes vocational rehabilitation services more readily available to disabled or blind SSA disability beneficiaries. Ticket to Work program: Provides SSA disability beneficiaries more choices for receiving employment services. Social Security Disability Insurance beneficiaries and Supplemental Security Income disability recipients ages 18 through 64. In commenting on a draft of this report, Education noted that Rehabilitation Services Demonstration and Training Programs grantees may use funds to conduct a broad range of activities, the ultimate goal of which is to improve services for individuals with disabilities. However, in some cases, individuals with disabilities may not be the direct target population. For example, some of the program funds are used to provide training and technical assistance to service providers and to parents of individuals with disabilities. Education officials also noted that Rehabilitation Services Demonstration and Training Programs may track some of these measures on a project-specific basis, depending on the purpose of the demonstration project. Program (agency): AgrAbility (Department of Agriculture); Compensated Work Therapy program (Department of Veterans Affairs); Employer Assistance and Resource Network (Labor); Helen Keller National Center (Education); Job Corps (Labor); Mental Health Treatment Study (SSA); Randolph-Sheppard Vending Facilities Program (Education); Recovery Care Coordinator–Operation Warfighter (Department of Defense); Ticket to Work program (SSA); Work Incentives Planning and Assistance program (SSA); Youth Transition Demonstration Projects (SSA); YouthBuild (Labor). In response to our survey questions that asked whether any impact studies had been conducted, officials from five programs provided references. 
We evaluated the methodology of each study and determined that three of them met the definition of an impact study provided in our questionnaire—a study that assessed the net effect of a program by comparing program outcomes with an estimate of what would have happened in the absence of the program—and had been completed since 2006. The other studies either did not meet our definition or were not available for review. One program reported having conducted an impact study, but the study was still in progress as part of a larger effort. Two of the three programs that conducted impact studies were excluded from our scope because they ended prior to April 2012, but they are listed here because the studies are discussed in the report. One program cited a previous GAO study, which is not a program evaluation. Michele Grgich, Assistant Director; Rachael Chamberlin Valliere, Analyst-in-Charge; Margaret J. Weber; and Miriam Hill made significant contributions to all aspects of this engagement. Stuart M. Kaufmann, Christine C. San, and Walter Vance assisted with the methodology, survey development, and data analysis. Jessica A. Botsford and Sheila McCoy provided legal assistance. Kate van Gelder and James Bennett helped prepare the final report and graphics. Tom Moscovitch, John S. Townes, and Jacques Arsenault verified our findings.
Many federal programs—within the Departments of Education, Labor, and Veterans Affairs; the Social Security Administration; and other agencies—help people with disabilities overcome barriers to employment. Section 21 of Pub. L. No. 111-139 requires GAO to identify and report annually on programs, agencies, offices, and initiatives that have duplicative goals or activities. GAO examined the extent to which programs that support employment for people with disabilities (1) provide similar services to similar populations and (2) measure effectiveness. GAO identified programs by searching the Catalog of Federal Domestic Assistance and consulting agency officials. GAO surveyed and interviewed agency officials to determine program objectives and activities. Nine agencies reviewed the draft report and five provided comments. Labor was concerned that GAO characterized its programs as fragmented and potentially duplicative. While multiple programs may be appropriate, GAO maintains that additional review and coordination may reduce inefficiencies and improve effectiveness among overlapping programs. GAO is not recommending executive action at this time. In a recent report, GAO suggested the Office of Management and Budget (OMB) consider establishing governmentwide goals for employment of people with disabilities, and working with agencies that administer overlapping programs to determine whether consolidation might result in more effective and efficient delivery of services. GAO continues to believe these actions are needed and will follow up with OMB to determine their status. GAO identified 45 programs that supported employment for people with disabilities in fiscal year 2010, reflecting a fragmented system of services. The programs were administered by nine federal agencies and overseen by even more congressional committees. 
All programs overlapped with at least one other program in that they provided one or more similar employment services to a similar population—people with disabilities. The greatest overlap occurred in programs serving veterans and servicemembers (19 programs) and youth and young adults (5 programs). In addition, GAO identified seven programs that did not limit eligibility to any particular population and were potentially available to veterans and servicemembers or youth. Some overlapping programs, such as those with specific eligibility requirements, have less potential for duplication—providing the same services to the same beneficiaries—than others. However, even when the potential for duplication of services is low, there may be inefficiencies associated with operating multiple programs that provide similar services to similar populations. Coordination across programs may help address fragmentation and potential duplication, but officials that GAO surveyed reported only limited coordination. In contrast, among six selected programs that only serve people with disabilities—including the Department of Education’s Vocational Rehabilitation program and the Social Security Administration’s Ticket to Work program—officials cited more consistent coordination. Most (32) of the 45 programs surveyed tracked at least one employment-related outcome measure for people with disabilities, but overall little is known about the effectiveness of these programs. The most commonly tracked outcomes for people with disabilities were “entered employment” (28 programs) and “employment retention” (18 programs). However, it may be difficult to compare outcomes across programs, in part, because of variation in the type and severity of participants’ disabilities. In addition, only 10 of the 45 programs reported that an evaluation had been conducted in the last 5 years. 
Just one of the 45 programs (Job Corps) reported conducting an impact study—a study that would most clearly show whether the program (and not other factors) was responsible for improved employment outcomes for people with disabilities. However, additional studies are underway for at least two other programs.
The safe, efficient, and convenient movement of people and goods depends on a vibrant transportation system. Our nation has built vast systems of roads, airways, railways, transit systems, pipelines, and waterways that facilitate commerce and improve our quality of life. However, these systems are under considerable strain due to increasing congestion and the costs of maintaining and improving them. This strain is expected to increase as the demand to move people and goods grows as a result of population growth, technological change, and the increased globalization of the economy. DOT implements national transportation policy and administers most federal transportation programs. Its responsibilities are considerable and reflect the extraordinary scale, use, and impact of the nation’s transportation systems. DOT has multiple missions—primarily focusing on mobility and safety—that are carried out by several operating administrations. (See table 1.) For fiscal year 2010, the President’s budget requested $72.5 billion to carry out these and other activities. DOT carries out some activities directly, such as employing more than 15,000 air traffic controllers to coordinate air traffic. However, the vast majority of the programs it supports are not under its direct control. Rather, the recipients of transportation funds, such as state departments of transportation, implement most transportation programs. For example, the Federal Highway Administration (FHWA) provides funds to state governments each year to improve roads and bridges and meet other transportation demands, but state and local governments decide which transportation projects have high priority within their political jurisdictions. We have previously reported that current surface transportation programs—authorized in the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU)—do not effectively address the transportation challenges the nation faces. 
As a result, we have called for a fundamental reexamination of the nation’s surface transportation programs to (1) establish well-defined goals with direct links to an identified federal interest and federal role, (2) institute processes to make grantees more accountable by establishing more performance-based links between funding and program outcomes, (3) institute tools and approaches that emphasize the return on the federal investment, and (4) address the current imbalance between federal surface transportation revenues and spending. We have also called for a timely reauthorization of FAA programs that expired at the end of fiscal year 2007 and have continued under a series of funding extensions. Such short-term funding measures could delay key capital projects and may affect FAA’s current programs and progress toward the Next Generation Air Transportation System. Congress and the administration fashioned the American Recovery and Reinvestment Act of 2009 to help our nation respond to what is generally reported to be the worst economic crisis since the Great Depression. DOT received about $48 billion of these funds for investments in transportation infrastructure—primarily for highways, passenger rail, and transit—mostly for use through fiscal year 2010. (See table 2.) As with other executive agencies, DOT now faces the challenges of using these funds in ways that aid economic recovery, making wise funding choices while spending the money quickly, and ensuring accountability for results. The act largely provided for increased transportation funding through existing programs—such as the Federal-Aid Highways, the New Starts transit, and the Airport Improvement programs. Channeling funding through existing programs should allow DOT to jump-start its spending of recovery funds. 
However, DOT must balance the recovery act’s requirement to get funds out quickly to help turn around the economy with the equally powerful need to ensure that funds are spent wisely on infrastructure investments and are not subject to waste, fraud, and abuse. We have reported on important design criteria for any economic stimulus package, including that it be timely, temporary, and targeted. Meeting these criteria is a difficult challenge for transportation infrastructure projects. First, they require lengthy planning and design periods. According to the Congressional Budget Office (CBO), even those projects that are “on the shelf” generally cannot be undertaken quickly enough to provide a timely stimulus to the economy. Second, spending on transportation infrastructure is generally not temporary because of the extended time frames needed to complete projects. Third, because of differences among states, it is challenging to target stimulus funding to areas with the greatest economic and infrastructure needs. The act will substantially increase the federal investment in the nation’s surface transportation system. However, the current federal approach to addressing the nation’s surface transportation problems is not working well. Many existing surface transportation programs are not effective at addressing key challenges because goals are numerous and sometimes conflicting, roles are unclear, programs lack links to the performance of the transportation system or of the grantees, and programs in some areas do not use the best tools and approaches to ensure effective investment decisions and the best use of federal dollars. In addition, evidence suggests that increased federal highway grants influence states and localities to substitute federal funds for state and local funds they otherwise would have spent on highways. 
In 2004, we estimated that states used roughly half of the increases in federal highway grants since 1982 to substitute for state and local highway funding, and that the rate of substitution increased during the 1990s. Our work has also shown that there is still room for improved oversight in surface transportation programs including the Federal-Aid Highway program. For example, we and the DOT Inspector General have each recommended that FHWA develop the capability to track and measure the costs of federally-aided projects over time. Among other things, the act gives our office the responsibility of reporting to Congress bimonthly on how selected states and localities are using the recovery funds. We will work with the department’s Office of Inspector General and with the state and local audit community to coordinate our activities. We also anticipate that committees of jurisdiction will request that we assess specific issues related to the department’s use of recovery funds. We look forward to working with this subcommittee and others to meet Congress’s needs. DOT and Congress will be faced with numerous challenges as they work to reauthorize the surface transportation and aviation programs. In particular, the department and Congress will need to address challenges in (1) ensuring that the nation’s surface transportation and aviation systems have adequate funding, (2) improving safety, (3) improving mobility, and (4) transforming the nation’s air traffic control system. Surface transportation program funding is one of the issues on our high-risk list. Revenues from motor fuels taxes and truck-related taxes to support the Highway Trust Fund—the primary source of funds for highway and transit—are not keeping pace with spending levels. This fact was made dramatically apparent last summer when the Highway Account within the trust fund was nearly depleted. 
The balance of the Highway Account has been declining in recent years because, as designed in SAFETEA-LU, outlays from the account exceed expected receipts over the authorization period. Specifically, when SAFETEA-LU was passed in 2005, estimated outlays from Highway Account programs exceeded estimated receipts by about $10.4 billion. Based on these estimates, the Highway Account balance would have been drawn down from $10.8 billion to about $0.4 billion over the authorization period. This left little room for error: assuming all outlays were spent, a revenue shortfall of even 1 percent below what SAFETEA-LU had predicted over the 5-year period would result in a cash shortfall in the account balance. In fact, actual Highway Account receipts were lower than had been estimated, particularly for fiscal year 2008, when a weakening economy and higher motor fuel prices affected key sources of Highway Trust Fund revenue. For example, fewer truck sales, fewer vehicle miles traveled, and correspondingly lower motor fuel purchases resulted in lower revenues. As a result, the account balance dropped more precipitously than had been anticipated and was nearly depleted in August 2008—1 year before the end of the SAFETEA-LU authorization period. In response, Congress passed legislation in September 2008 to provide $8 billion to replenish the account. However, according to CBO, the account could reach a critical stage again before the end of fiscal year 2009. Without reduced expenditures, increased revenues, or a combination of the two, shortfalls will continue. In the past, we have reported on several strategies that could be used to better align surface transportation expenditures and revenues. Each of these strategies has different merits and challenges, and the selection of any strategy will likely involve trade-offs among different policy goals. 
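The sensitivity described above can be sketched with a simple balance projection. This is an illustrative calculation, not GAO's or CBO's model: only the totals (a $10.8 billion starting balance and an approximately $10.4 billion net draw over the 5-year authorization period) come from the text; the annual receipt and outlay paths are hypothetical assumptions.

```python
# Illustrative sketch of the Highway Account balance under SAFETEA-LU-era
# figures. Dollar amounts are in billions; annual paths are assumptions.

def project_balance(start_balance, receipts, outlays):
    """Return year-end balances given annual receipts and outlays."""
    balances = []
    balance = start_balance
    for r, o in zip(receipts, outlays):
        balance += r - o
        balances.append(round(balance, 2))
    return balances

start = 10.8                                 # FY2005 starting balance ($B), per the text
receipts = [34.0, 35.0, 36.0, 37.0, 38.0]    # assumed annual receipts ($B)
# Outlays set so the 5-year net draw totals about $10.4B, per the text
outlays = [r + 10.4 / 5 for r in receipts]

as_estimated = project_balance(start, receipts, outlays)
# Ends near $0.4B -- the "little room for error" the text describes

# A 1 percent across-the-board revenue shortfall (~$1.8B over 5 years)
shortfall = project_balance(start, [r * 0.99 for r in receipts], outlays)
# Ending balance goes negative: a cash shortfall in the account
```

Under these assumptions, a shortfall of roughly 1 percent of receipts wipes out the projected $0.4 billion cushion, which is consistent with the near-depletion that actually occurred in August 2008.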
The strategies related to funding sources are also included in the recent report from the National Surface Transportation Infrastructure Financing Commission. Alter existing sources of revenue. The Highway Account’s current sources of revenue—motor fuel taxes and truck-related taxes—could be better aligned with actual outlays. According to CBO and others, the existing fuel taxes could be altered in a variety of ways to address the erosion of purchasing power caused by inflation, including increasing the per-gallon tax rate and indexing the rates to inflation. Ensure users pay fully for benefits. Revenues can also be designed to more closely follow the user-pay concept—that is, require users to pay directly for the cost of the infrastructure they use. This concept seeks to ensure that those who use and benefit from the infrastructure are charged commensurately. Although current per-gallon fuel taxes reflect usage to a certain extent, these taxes are not aligned closely with usage and do not convey to drivers the full costs of road use—such as the costs of congestion and pollution. We have reported that other user-pay mechanisms—for example, charging according to vehicle miles traveled, tolling, implementing new freight fees for trucks, and introducing congestion pricing (pricing that reflects the greater cost of traveling at peak times)—could more equitably recoup costs. Supplement existing revenue sources. We have also reported on strategies to supplement existing revenue sources. A number of alternative financing mechanisms—such as enhanced private-sector participation—can be used to help state and local governments finance surface transportation. These mechanisms, where appropriate, could help meet growing and costly transportation demands. However, these potential financing sources are forms of debt that must ultimately be repaid. Reexamine the base. 
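The gap between per-gallon taxes and actual road use can be made concrete with a hedged back-of-the-envelope comparison. The 18.4-cents-per-gallon federal gasoline tax rate is real; the mileage, efficiency, and per-mile fee figures below are illustrative assumptions, not figures from this testimony.

```python
# Illustrative only: per-gallon fuel-tax revenue vs. a vehicle-miles-
# traveled (VMT) fee as fleet fuel efficiency improves.

GAS_TAX_PER_GALLON = 0.184      # federal gasoline excise rate ($/gallon)

def fuel_tax_revenue(miles, mpg):
    """Revenue from a per-gallon tax: falls as efficiency (mpg) rises."""
    return miles / mpg * GAS_TAX_PER_GALLON

def vmt_fee_revenue(miles, fee_per_mile):
    """Revenue from a per-mile fee: tracks road use directly."""
    return miles * fee_per_mile

miles = 12_000                  # assumed annual miles per vehicle

# Same road use, two fleets: 20 mpg vs. 40 mpg
low_eff = fuel_tax_revenue(miles, 20)    # $110.40 per vehicle
high_eff = fuel_tax_revenue(miles, 40)   # $55.20 -- half the revenue
                                         # for identical road use

# An assumed ~0.9 cent/mile VMT fee raises the same amount at any mpg
vmt = vmt_fee_revenue(miles, 0.0092)     # $110.40 regardless of mpg
```

The sketch shows why per-gallon taxes "are not aligned closely with usage": the more fuel-efficient driver imposes the same wear and congestion but pays half as much, whereas a per-mile charge tracks use directly.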
Given the federal government’s fiscal outlook, we have reported that we cannot accept all of the federal government’s existing programs, policies, and activities as “givens.” Rather, we need to rethink existing programs, policies, and activities by reviewing their results relative to the national interests and by testing their continued relevance and relative priority. Improve the efficiency of current facilities. Finally, better managing existing system capacity and improving performance of existing facilities could minimize the need for additional expenditures. We have reported that the efficiency of the nation’s surface transportation programs is declining and that the return on investment could be improved in a number of ways, including creating incentives to better use existing infrastructure. In addition to better aligning revenues and outlays, improving existing mechanisms that are intended to help maintain Highway Account solvency could help DOT better monitor and manage the account balance, thereby reducing the likelihood of a funding shortfall. For example, statutory mechanisms designed to make annual adjustments to the Highway Account have been modified over time—particularly through changes in SAFETEA-LU—to the extent that these mechanisms either are no longer relevant or are limited in effectiveness. Furthermore, monitoring indicators throughout the year that could signal sudden changes in Highway Account revenues could help DOT better anticipate potential changes in the account balance that should be communicated to Congress, state officials, and other stakeholders. We recently made recommendations to help DOT improve solvency mechanisms for the Highway Account and communication on the account’s status with stakeholders. 
Turning to aviation funding, the excise taxes that fund the Airport and Airway Trust Fund have generated lower revenues than previously forecasted, and forecasts of future revenues have declined because of declines in airline passenger travel, fares, and fuel consumption. Moreover, the uncommitted balance in the Trust Fund has decreased since fiscal year 2001. (See fig. 2.) For the short run, lower-than-expected excise tax revenues will reduce the Trust Fund balance even further and could affect funding for FAA programs this year and next. In the longer run, continued declines in Trust Fund revenues may require Congress to reduce spending on FAA operations and capital projects, increase revenues for the trust fund by introducing new fees or increasing taxes, or increase FAA’s funding provided by the General Fund. Improvements in transportation safety are needed to reduce the number of deaths and injuries from transportation accidents, the vast majority of which occur on our nation’s roads. We recently reported that although the number of traffic crashes and the associated fatality rates have decreased over the last 10 years, the number of fatalities has, unfortunately, remained at about 42,000 annually, and some areas are of particular concern. For example, in 2007, over half of the passenger vehicle occupants killed were not using safety belts or other proper restraints, nearly one-third of the total fatalities were in alcohol-impaired driving crashes, and motorcyclist fatalities increased for the 10th year in a row. While the U.S. commercial aviation industry is among the safest in the world, aviation safety is also a major concern because when accidents or serious incidents occur they can have catastrophic consequences. Moreover, last year there were 25 serious runway incursions—incidents in which collisions between aircraft on runways were narrowly avoided—and nine of these involved commercial aircraft. Runway incursions can be considered a precursor to aviation accidents. 
Figure 4 shows the number of serious incursions involving commercial aircraft from fiscal year 2001 through fiscal year 2008. DOT has taken steps to address surface and aviation safety concerns. To improve traffic safety, the National Highway Traffic Safety Administration (NHTSA) has made substantial progress in administering traffic safety grant programs and high-visibility enforcement programs which, according to state safety officials, are helping them address key traffic safety issues, such as safety belt use and alcohol-impaired driving. NHTSA has also taken steps to improve the consistency of its process for reviewing states’ management of traffic safety grants. To maintain and expand the margin of safety within the national airspace system, FAA is moving to a system safety approach to oversight and has established risk-based, data-driven safety programs to oversee the aviation industry. FAA has also taken recent actions to improve runway safety, including conducting safety reviews at airports and establishing an FAA-industry team to analyze the root causes of serious incursions and recommend runway safety improvements. Despite NHTSA’s progress in administering and overseeing traffic safety programs, several challenges may limit the effectiveness of the programs and NHTSA’s ability to measure and oversee program effectiveness:
- The grant programs generally lack performance accountability mechanisms to tie state performance to receipt of grants.
- Some states have faced challenges passing legislation required to qualify for some traffic safety incentive grants.
- Each safety incentive grant has a separate application process, which has proven challenging for some states to manage, especially those with small safety offices.
- Some states also would have preferred more flexibility in using the safety incentive grants to focus on key safety issues within the state. 
Over the past several years, we have made recommendations to help NHTSA further improve its ability to measure and oversee surface traffic safety programs and to help FAA improve its oversight of aviation safety. However, some challenges related to traffic safety—such as state challenges in administering the programs and the lack of performance accountability measures—result from the structure of the grant programs established under SAFETEA-LU. These challenges and the persistence of substantial numbers of traffic fatalities nationwide raise issues for Congress to consider in restructuring surface traffic safety programs during the upcoming reauthorization. Furthermore, to maintain the high level of safety in the aviation industry, FAA needs to address challenges in accessing complete and accurate aviation safety data, and improving runway and ramp safety. For example, recent actions by some major airlines to discontinue participation in an important data reporting program limit data access. Moreover, a lack of national data on operations involving air ambulances, air cargo, and general aviation hinders FAA’s ability to evaluate accident trends and manage risks in these sectors. Improving runway safety will require a sustained effort by FAA that includes developing new technologies and revised procedures to address human factors issues, such as fatigue and distraction, which experts have identified as the primary cause of incursions. Congestion has worsened over the past 10 years, despite large increases in transportation spending at all levels of government and improvements to the physical condition of highways and transit facilities. Furthermore, according to DOT, highway spending by all levels of government has increased 100 percent in real dollar terms since 1980, but the hours of delay during peak travel periods have increased by almost 200 percent during the same period. 
These mobility issues have increased at a relatively constant rate over the last two decades. (See table 3.) In addition, demand has outpaced the capacity of the system, and projected population growth, technological changes, and increased globalization are expected to further strain the system. Likewise, increased demand and capacity constraints have threatened the mobility of the nation’s freight transportation network. According to DOT, volumes of goods shipped by trucks and railroads are projected to increase by 98 percent and 88 percent, respectively, by 2035 over 2002 levels, at the same time that the ability to increase capacity will be constrained by geographic barriers, population density, and urban land-use development patterns. One study estimates that highway congestion alone costs shippers $10 billion annually. Constraints on freight mobility can also result in undesirable environmental effects, such as air pollution, and contribute to increased risks for illnesses, such as respiratory disease. Flight delays and cancellations at congested airports also continue to plague the U.S. aviation system. Flight delays and cancellations steadily increased from 2002 through 2007 and decreased slightly in 2008. (See fig. 5.) For example, almost one in four flights either arrived late or was canceled in 2008, and the average flight delay increased despite a 6 percent decline in the total number of operations through December 2008. Delays are a particular problem at a few airports, such as those in the New York area, where less than 70 percent of flights arrive on time. Because the entire airspace system is highly interdependent, delays at one airport may lead to delays rippling across the system and throughout the day. Commissions, proposals, and actions have attempted to address mobility issues in past years. 
To address concerns with the performance of the surface transportation system, including mobility concerns, Congress established two commissions to examine current and future needs of the system and recommend needed changes to surface transportation programs, one of which called for significantly increasing the level of investment in surface transportation. Various other transportation industry associations and research organizations have also issued proposals for restructuring surface transportation programs. DOT has also taken several steps in the last 5 years to address key impediments to freight mobility by developing policies and programs to address congestion in the United States. For example, it has drafted a framework for a national freight policy, released a national strategy to reduce congestion, and created a freight analysis framework to forecast freight flows along national corridors and through gateways. DOT and FAA began implementing several actions in summer 2008 intended to enhance capacity and reduce flight delays, particularly in the New York region. These actions include redesigning the airspace around the New York, New Jersey, and Philadelphia metropolitan area and establishing schedule caps on takeoffs and landings at the three major New York airports. In addition, as part of a broad congestion relief initiative, DOT awarded over $800 million to several cities under its Urban Partnership Agreements initiative to demonstrate the feasibility and benefits of comprehensive, integrated, performance-driven, and innovative approaches to relieving congestion. 
We have previously reported on several challenges that impede DOT’s efforts to improve mobility:
- Although all levels of government have significantly invested in transportation, and recommendations have been made by transportation stakeholders for increasing investment in surface transportation even further, we have previously reported that federal transportation funding is generally not linked to specific performance-related goals or outcomes, resulting in limited assurance that federal funding is being channeled to the nation’s most critical mobility needs. Federal funding is also often tied to a single transportation mode, which may limit the use of those funds to finance the greatest improvements in mobility.
- DOT does not possess adequate data to assess outcomes or implement performance measures. For example, DOT lacks a central source for data on congestion—even though it has identified congestion as a top priority—and available data are stovepiped by mode, impeding efficient planning and project selection.
- Although DOT and FAA should be commended for taking steps to reduce mounting flight delays and cancellations, as we predicted this past summer, delays and cancellations in 2008 did not markedly improve over 2007 levels despite a decline in passenger traffic. The growing air traffic congestion and delay problem that we face is the result of many factors, including airline practices and inadequate investment in airport and air traffic control infrastructure. Long-term investments in airport infrastructure and air traffic control, or other actions by Congress, DOT, or FAA could address the fundamental imbalance between underlying demand for, and supply of, airspace capacity.
FAA has made significant progress in addressing weaknesses in its air traffic control modernization. 
It established a framework for improving system management capabilities, continued to develop an enterprise architecture, implemented a comprehensive investment management process, assessed its human capital challenges, and developed an updated corrective action plan for 2009 to sustain improvement efforts and enhance its ability to address risks, among other things. Because FAA has shown progress in addressing most of the root causes of past problems with the air traffic control modernization effort and is committed to sustaining progress into the future, we removed this area from the high- risk list in January 2009. Nonetheless, we will closely monitor FAA’s efforts because the modernization program is still technically complex and costly, and FAA needs to place a high priority on efficient and effective management. FAA’s improvement efforts are even more critical because the modernization has been extended to plan for the Next Generation Air Transportation System (NextGen)—a complex and ambitious multiagency undertaking that is intended to transform the current radar-based system to an aircraft-centered, satellite-based system by 2025. As the primary implementer of NextGen, FAA faces several challenges that, if not addressed, could severely compromise NextGen goals and potentially lead to a future gap between the demand for air transportation and available capacity that could cost the U.S. economy billions of dollars annually. Challenges facing FAA include the following: Accelerating the implementation of available NextGen technologies, which, according to some industry stakeholders, are not being implemented fast enough to have NextGen in place by 2025. Working with stakeholders to explore a range of potential options that would provide incentives to aircraft operators to purchase NextGen equipment and to suppliers to develop that equipment. These options could include some combination of mandated deadlines, operational credits, or equipment investment credits. 
Reconfiguring facilities and enhancing runways to take full advantage of NextGen’s benefits. FAA has not developed a comprehensive reconfiguration plan, but intends to report on the cost implications of reconfiguration this year. Sustaining the current air traffic control system and maintaining facilities during the transition to NextGen. More frequent and longer unscheduled outages of existing equipment and support systems indicate that system failures are increasing. These systems will be the core of the national airspace system for a number of years and, in some cases, become part of NextGen. To implement NextGen, the department is undertaking several initiatives. For example, FAA has formed partnerships with industry to accelerate the availability of NextGen capabilities. These partnerships include (1) entering into agreements with private sector firms to conduct NextGen technology demonstration projects; (2) working with industry and the local community on their plans to build an aviation research and technology park where FAA can work with industry on the research and development, integration, and testing of NextGen technologies; and (3) establishing a NextGen midterm task force to forge a consensus on operational improvements and planned benefits for 2013 to 2018. In addition, to increase the capacity of existing runways at busy airports, FAA has begun implementing the High-Density Terminal and Airport Operations initiative that changes requirements for aircraft separation and spacing, among other things. One step for moving forward with the NextGen transition was proposed in the 2009 House reauthorization bill, which directed FAA to establish a working group to develop criteria and make recommendations for the realignment of services and facilities—considering safety, potential cost savings, and other criteria, in concert with stakeholders, including employee groups.
Until FAA establishes this working group and the group develops recommendations, the configurations needed for NextGen cannot be implemented and potential savings that could help offset the cost of NextGen will not be realized. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Subcommittee might have. For further information on this statement, please contact Katherine Siggerud at (202) 512-2834 or [email protected]. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. Individuals making key contributions to this testimony were Sara Vermillion, Assistant Director; Steve Cohen, Matthew Cook, Heather Krause, Nancy Lueke, James Ratzenberger, and Teresa Spisak.
A safe, efficient, and convenient transportation system is integral to the health of our economy and quality of life. Our nation's vast transportation system of airways, railways, roads, transit systems, and waterways has served this need, yet is under considerable pressure due to increasing congestion and costs to maintain and improve the system. Calls for increased investment come at a time when traditional funding for transportation projects is increasingly strained. The authorizing legislation supporting transportation programs will soon expire. The Department of Transportation (DOT) implements national transportation policy and administers most federal transportation programs. DOT received funds for transportation infrastructure projects through the American Recovery and Reinvestment Act of 2009 to aid in economic recovery. DOT also requested $72.5 billion to carry out its activities for fiscal year 2010. This statement presents GAO's views on major challenges facing DOT and Congress as they work to administer recovery funds and reauthorize surface transportation and aviation programs. It is based on work GAO has completed over the last several years. GAO has made recommendations to DOT to improve transportation programs; the agency has generally agreed with these recommendations. To supplement this existing work, GAO obtained information on the recovery funds provided to DOT. The Department of Transportation received about $48 billion of recovery funds for investments in transportation infrastructure from the American Recovery and Reinvestment Act of 2009. As with other executive agencies, DOT is faced with the challenges of using these funds in ways that will aid economic recovery, making wise funding choices while spending the money quickly, and ensuring accountability for results. GAO will report to Congress bimonthly on how states and localities use the recovery funds received from DOT. 
DOT and Congress will also be faced with numerous challenges as they work to reauthorize surface transportation and aviation programs. Funding the nation's transportation system. Revenues to support the Highway Trust Fund are not keeping pace with spending levels, and the Highway Account was nearly depleted last summer. In addition, excise tax revenues that fund the Airport and Airway Trust Fund have been lower than previously forecasted, and forecasts of future revenues have declined. Declining revenues in both trust funds may adversely affect DOT's ability to continue to fund surface transportation and aviation programs at levels previously assumed. Improving transportation safety. Although the number of traffic crashes and the associated fatality rate have decreased over the last 10 years, the number of fatalities has remained at about 42,000 annually. The continued high level of fatalities and difficulties experienced by states in implementing grant programs raise issues for Congress to consider in restructuring these programs during reauthorization. While the U.S. commercial aviation industry is among the safest in the world, accidents can have catastrophic consequences. The lack of performance measures and complete data limits DOT's ability to improve safety and manage safety risks. Improving transportation mobility. Despite large increases in transportation spending, congestion on our nation's highways has increased over the last 10 years, and increased demand will further strain the system. Flight delays and cancellations at congested airports continue to plague the U.S. aviation system. For example, almost one in four flights either arrived late or was canceled in 2008, and the average flight delay increased despite a 6 percent annual decline in the total number of operations through December 2008. Congestion poses serious economic as well as environmental and health concerns for the nation. Transforming the nation's air traffic control system.
The air traffic control modernization program is technically complex and costly. The Federal Aviation Administration will need to accelerate the implementation of new and existing technologies, consider incentives for aircraft operators to acquire those technologies, and sustain the current system while transitioning to the new one, among other things.
Ports are critical components of the freight transportation network and serve as gateways for the movement of international (imports and exports) and domestic goods between navigable waterways and landside transportation systems, such as the Interstate highway system or the national rail network. For the purposes of this report, we define a port as the area “inside the gate” and under the control of the local port authority or marine terminal operator, where cargo is loaded and unloaded to and from ships. We refer to a “port complex” as encompassing one or two ports and the nearby roadways, rail, bridges, and intermodal facilities (i.e., connectors) on which cargo arrives or departs the port. Major West Coast ports—Los Angeles, Long Beach, Oakland, Tacoma, and Seattle—have historically handled about half of the nation’s containerized cargo (see figure 1), and all of these ports have projected increasing volumes. For example, the regional government for Southern California, where the nation’s largest port complex is based, has forecasted that the Los Angeles and Long Beach ports will handle approximately 40 million TEUs by 2035, more than twice the cargo handled today. Though cargo volumes at West Coast ports are expected to increase, the share of total cargo handled by West Coast ports has declined slightly in recent years as Gulf and East Coast ports gained market share. Cargo moving through ports is inherently intermodal. Efficient freight movement depends upon the condition of intermodal connections. Port connectors include transportation infrastructure such as roads, railways, and marine highways that connect the port to major trade corridors and allow freight to transfer from one transportation mode to another (e.g., from a ship to a truck). The movement of cargo through ports involves multiple entities, public and private, which compete with one another (ports against other ports, terminals against other terminals, etc.)
and coordinate with one another (terminals with truckers and rail carriers, etc.) for shipping business and to make key infrastructure investment and operations decisions. See appendix II for a description of the key entities’ roles and how they fit in the end-to-end sequence of processes and network of companies involved in the production and distribution of goods that make up supply chains. At 29 West Coast ports—including the ports of Los Angeles, Long Beach, Oakland, Seattle, and Tacoma—the employment requirements and responsibilities between terminal operators and labor are outlined in one contract negotiated between the PMA—which represents marine terminal operators and ocean carriers—and the ILWU—which represents approximately 14,000 registered workers and another 7,000 non-registered workers eligible for employment at marine terminals. The most recent contract was finalized in February 2015 after protracted negotiations that began in May 2014 on a contract that was set to expire on July 1st of that same year. Historically, U.S. terminal-labor contract negotiations have been contentious and lengthy. In some cases, contract negotiating difficulties can effectively shut down port operations. Global shipping has changed over the past decade in several fundamental ways as ocean carriers have attempted to reduce their costs. These global shipping changes can impact how cargo is moved through a port. Increased ship size: Over recent decades, many ocean carriers decided to order larger container vessels to meet demand spurred by growing Asian economies, to capture economies of scale made possible by advances in fuel efficient engine technology, and to maintain market share and presence. The largest vessel to call on West Coast ports in 2016 could carry nearly 18,000 TEUs, whereas in 2005 the largest vessel was roughly half as large. These larger vessels are longer, wider, and taller.
Port terminal infrastructure—crane heights and reach, berth depth, and other considerations, such as the availability of truck chassis (the truck trailers used to carry shipping containers)—must be adequate to receive these larger vessels. See figure 2 for an illustration of the growth in vessel size since circa 1985, with a Boeing 747 included for scale. Formation of shipping alliances: Ocean carriers have formed alliances as a strategy to contain costs and offer more competitive services. These alliances allow cargo booked with one carrier to be transported by another alliance carrier’s ship. Shifts in these alliances can result in vessels calling on different ports and terminals, depending on obligations under alliance agreements. There are currently four broad alliances, which transport about 80 percent of U.S. containerized cargo. Changing ownership structures: Historically, ocean carriers owned not only the vessels, but also the cargo containers and the truck chassis that transport containers to and from the vessels. Previously, chassis would be stored, maintained, and repaired (by labor) within the terminal gates. Before leaving the terminal, labor would also conduct a chassis safety, or “roadability,” inspection. In an effort to keep their costs low in response to the global recession of 2007-2009 and to follow models of chassis provision in other countries, carriers have divested themselves of chassis ownership and shifted these responsibilities to third-party leasing companies. Supply chains are the end-to-end process of producing and distributing a product or commodity from raw materials to the final customer. Supply chains can be fairly localized, global, or anywhere in between. Management of the supply chain involves shippers adapting supply chain decisions to changing market conditions in order to gain efficiencies.
For example, a furniture importer’s supply chain could include materials and finished goods from Southeast Asia that are then transported to a West Coast port and distributed across the United States. The freight transportation network, including ports, is a critical component of how end-to-end supply chains function. Lowering production or transportation costs can be key to achieving efficiencies in the supply chain. Industry supply chains have evolved in recent years with advances in communications and computing technology, reductions in trade barriers and production costs, and the opening of new markets globally. Over the past several decades, firms have become increasingly reliant on timely shipping. “Just-in-time” business models enable firms to save inventory costs by planning their supply chains carefully to have inputs and goods delivered within very specific time frames. While these strategies are highly efficient, any disruption in the supply chain can have a greater impact than would be the case if larger inventories were held to buffer breakdowns in planned deliveries. Further, many shippers face seasonal demand, where goods must be delivered to the customer during a narrow window of time, such as goods for the holiday season or agricultural goods. In addition to private entities and state, regional, and local governments, multiple federal agencies have roles in various aspects of port and near-port freight infrastructure and in facilitating international trade. Although historically DOT’s freight policy and funding have been targeted towards highways and transit, some DOT programs have funded port-related projects such as the Transportation Investment Generating Economic Recovery Discretionary Grant (TIGER) program; the Transportation Infrastructure Finance and Innovation Act (TIFIA) program; and Railroad Rehabilitation and Improvement Financing (RRIF).
These programs’ broad eligibility has allowed states and local governments to fund multi-modal, multi-jurisdictional projects. In 2012, MAP-21 expanded DOT’s authorities to address multimodal freight, and DOT has subsequently assumed more of a leadership role in federal freight activity. MAP-21 established a national freight policy focused on highways and directed DOT to develop a national freight strategic plan. The goals of this policy include increasing the economic competitiveness of the United States, reducing freight congestion, and improving the safety, reliability, and efficiency of the freight network, among other goals. In October 2015, DOT issued a draft National Freight Strategic Plan for public comment and plans to finalize the plan by December 4, 2017, in accordance with the statutory deadline mandated by the FAST Act. In December 2015, the FAST Act expanded DOT’s freight role again. The FAST Act created a new freight formula program, authorized at $6.2 billion over 5 years, to fund improvements on the National Highway Freight Network. Up to 10 percent of the funds may be used for freight rail and intermodal projects, including projects at ports. The FAST Act also created a new discretionary grant program, commonly referred to as the FASTLANE program, to fund major transportation projects, such as highway bridge projects, as well as freight projects. Up to $500 million of the $4.5 billion authorized for the program over 5 years may be used for freight rail, intermodal, or port projects. The Act also directed DOT to designate a multimodal freight network and undertake a port performance data effort. Other federal agencies with specific roles related to ports include the Departments of Commerce, Homeland Security, and Agriculture as well as the U.S. Army Corps of Engineers (Corps) and the Federal Maritime Commission (FMC) (see table 1).
For example, the Corps is tasked with maintaining navigable waterways and, consequently, is the lead federal agency for harbor dredging projects at ports. Other agencies have a specific role related to a step in the flow of goods and share information with other agencies to support their purposes. For example, Customs and Border Protection, within Homeland Security, inspects and clears cargo as part of its overall mission of protecting the homeland. After gathering required customs information, it provides data on import trade to the U.S. Census Bureau, within Commerce, which maintains the data and makes it available for analysis. Environmental regulation and protection of port complexes, channels, and waterways may involve multiple federal agencies, including the Corps, the Environmental Protection Agency, and DOT. Some port infrastructure is outdated and not well suited to address the recent changes in global shipping. Literature we reviewed and stakeholders we interviewed as part of our case studies described how existing capacity at each of our case study ports could not adequately accommodate larger ships, specifically, and increased volumes, generally. For example, acreage for storing containers within some terminals (i.e., a terminal container yard) was identified as inadequate for handling increased container volumes, though a port may have sufficient acreage across its multiple terminals. Marine terminal operators increase terminal capacity by stacking containers higher, but taller stacks are more time-consuming and costly to sort through when a trucker arrives for pick up. Other infrastructure may be coming to the end of its useful life and need to be replaced or retrofitted to more capably handle larger ships and increased volumes. For example, according to the port authority of Seattle, installing new cranes that can reach across larger vessels would also require sections of one pier to be reinforced to handle the cranes’ heavier weight.
Outside ports, aging roadways can also impede cargo movement to and from the port, particularly where freight rail, trucks, and other road users converge at congested crossings and intersections. At each of our three case-study port complexes, stakeholders have identified numerous grade crossings, nearby and in the broader metropolitan region, that are problematic for the transport of growing cargo volumes. A number of terminal and inland infrastructure constraints created or exacerbated by global shipping changes are illustrated in figure 3. In response to global shipping changes, infrastructure projects have been completed or are planned at all major West Coast ports, though some projects have been deferred indefinitely. See appendix II for examples of these landside (terminal and inland) infrastructure projects. According to port authorities and other stakeholders we interviewed, infrastructure projects are of vital importance for maintaining the capability of serving current cargo volumes, as well as enhancing the long-term competitiveness of their ports and shippers’ products. For example, according to the Port of Oakland, the redevelopment of the former Oakland Army Base adjacent to the port into facilities serving port cargo will accommodate anticipated growth and provide shippers with transportation cost savings. The first phase of the project consists of several types of infrastructure development, including roads, an expanded railyard, and other facilities for the movement of goods. By increasing rail access, the port anticipates reducing truck traffic to and from the port and reducing the typical cost of transporting a container by an estimated $300. At full capacity, according to the Port and City of Oakland, the equivalent of 375,000 truckloads of cargo can be transported directly into the port by rail rather than by trucks, yielding over $112 million in annual savings for the nation’s exporters.
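The Oakland savings estimate is straightforward arithmetic on the two figures the port cites: the per-container cost reduction and the number of truckloads shifted to rail. A quick check (the inputs come from the source; the script itself is only illustrative):

```python
# Sanity check of the Port of Oakland savings estimate cited above.
# Both inputs are figures reported by the port; the arithmetic is illustrative.
truckloads_shifted_to_rail = 375_000  # containers moved by rail instead of truck at full capacity
savings_per_container = 300           # estimated cost reduction per container, in dollars

annual_savings = truckloads_shifted_to_rail * savings_per_container
print(f"${annual_savings:,}")  # $112,500,000, consistent with "over $112 million"
```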
Infrastructure is funded through a combination of public investments and private sector partnerships, typically requiring significant resources and potentially decades to plan and complete. State and local governments, as well as port authorities of the three major West Coast port complexes, look to both the federal government and the private sector to secure funding for infrastructure projects. For example, according to Port of Long Beach officials, about 78 percent of the $1.3 billion in secured funding for the Gerald Desmond Bridge replacement project comes from federal sources, which includes about $325 million of financing through TIFIA. According to the port, replacement of the bridge was initially considered in the early 1990s due to mounting maintenance costs; in 2002, the port began developing an initial cost estimate and finalized the estimate in 2008. The height of the replacement bridge will allow passage of larger vessels, and additional lanes will increase capacity to handle the estimated 15 percent of the nation’s waterborne cargo that navigates under this stretch of roadway. The bridge, in conjunction with other port projects, represents a $4 billion capital improvement program being implemented by the port, according to Port of Long Beach officials. See figure 4 for an illustration of the existing 50-year-old bridge compared to the replacement bridge scheduled to be substantially completed in 2018, and the clearances of different-sized vessels. Private partnership is also key for successful project implementation. For example, private entities were responsible for operating and maintaining some buildings and rail facilities, and the marine terminal, among other portions of the first phase of the Port of Oakland’s Army base redevelopment project.
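The Gerald Desmond Bridge funding mix can be cross-checked the same way. The $1.3 billion total, the 78 percent federal share, and the $325 million TIFIA figure come from the source; the derived dollar amounts below are approximations we compute, not figures stated by the port:

```python
# Rough breakdown of the Gerald Desmond Bridge replacement funding described above.
total_secured = 1.3e9    # total secured funding, in dollars (from the source)
federal_share = 0.78     # reported federal share of secured funding
tifia_financing = 325e6  # TIFIA financing, in dollars (from the source)

federal_funding = total_secured * federal_share
print(f"federal: ~${federal_funding / 1e9:.2f} billion")                       # ~$1.01 billion
print(f"TIFIA as share of federal: ~{tifia_financing / federal_funding:.0%}")  # ~32%
```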
Though the first phase, specifically the rail yard, had received $15 million in TIGER funding, construction of the second phase of the redevelopment project, which involves a new intermodal rail terminal, additional warehouse and logistics space, and a new grade separation, has not yet commenced, with various aspects of the project’s development still under negotiation. According to a Port of Oakland official, aspects of the project that mostly benefit the public, such as a new grade separation project, will likely require public investment, unless there is strong growth in rail activity through Oakland to motivate private investment. Similarly, the Port of Los Angeles’ modernization of a 185-acre terminal, which included automation and cost more than $500 million to develop, relied on a public-private partnership for funding and subsequent operations. According to Port authority officials, the port contributed $460 million, the state of California another $60 million in grants, and the marine terminal operator invested more than $200 million in specialized automated equipment. The marine terminal operator has a 30-year lease to operate the terminal. According to the Port of Los Angeles’ 2014 master port plan, this and other expansion projects are needed to ensure that projected future cargo volumes can be handled. Although infrastructure projects were generally considered important by port stakeholders to address constraints in cargo movement, some questioned the effectiveness or the efficiency of some infrastructure investments. For example, one terminal operator we spoke with said that investments (which included federal funds) made at a competing terminal at the Port of Seattle were unnecessary because expected volumes could be accommodated at lower costs by consolidating two terminals.
Similarly, labor representatives also questioned the impact of pursuing infrastructure projects that automate terminal operations rather than other options that may as effectively improve a terminal’s efficiency, such as investing in longshoremen’s training and extending gate hours. Representatives from a trucking association likewise questioned infrastructure investments intended to enhance the efficiency of trucks picking up cargo, such as some projects involving terminal automation, when those investments are made without commensurate investments to improve the inland roadways used to access the port. Global shipping changes have affected how key equipment, specifically chassis, is made available, and have strained traditional port and terminal gate hours, according to some literature and stakeholders included in our review. Difficulties with truck chassis availability and condition: In recent years, it has become increasingly difficult and time-consuming for truckers to obtain chassis, pass road safety inspections, complete repairs, and reposition chassis, according to some literature and port stakeholders we interviewed. For example, according to a 2015 Federal Maritime Commission report, performing inspections just prior to exiting the terminal (instead of inspecting chassis beforehand and then loading only those that are roadworthy) can cause delays. If needed repairs are identified, the driver must wait for maintenance and repair crews at the port, who can be in short supply. Additionally, if an inspection finds damage on a chassis that is owned by a driver or a trucking company (rather than a third-party leasing company), the driver may elect to have repairs conducted off-site. However, the loaded containers would be required to be returned to the terminal, further delaying the movement of cargo.
Other reported chassis issues stem from provisions in some contracts between a third-party leasing company and an ocean carrier, which specify the brand of chassis to be used or where the chassis must be repositioned after use. According to representatives from one trucking association we interviewed, such provisions limit chassis options for truckers and require them to make extra trips retrieving and repositioning approved chassis rather than hauling containers. Changes in cargo loads and schedule delays due to alliances: Broader shipping alliances have complicated vessel unloading and loading, as cargo booked with multiple ocean carriers may be onboard the same vessel but bound for different terminals within a port or different destination ports. According to some stakeholders, containers typically are not loaded at origin ports in Asia by “block stowage,” where containers bound for a particular terminal are grouped together onboard to facilitate more efficient unloading. The mixture of containers from multiple alliance partners on a vessel increases the time it takes to unload and sort containers. This in turn can lead to a cascading effect, potentially delaying the arrival of other vessels at an occupied terminal. Adequacy of terminal gate hours: The standard daytime gate hours of marine terminals (7 or 8 a.m. to 4 or 5 p.m.) may be inadequate, particularly given the complexity and time required to load and unload containers. Some port stakeholders, specifically trucking and labor representatives, indicated that additional gate hours could reduce congestion. Most stakeholders we interviewed agreed that marine terminal operators do not hire labor for extra shifts unless there is a specific demand (i.e., request or requirement) for it by cargo owners, because the additional costs associated with these shifts would not be offset by the amount carriers or cargo owners generally pay.
Some stakeholders acknowledged that there may not be sufficient demand from shippers to pick up cargo in the off-peak hours if, for example, distribution warehouses are not open to receive these containers. Where night gate hours have been instituted, such as at the ports of Los Angeles and Long Beach, several stakeholders said they have contributed to congestion at certain times because drivers and shippers, wanting to avoid the traffic mitigation fees charged for daytime pickup, line up prior to off-peak hours. A senior official from PierPass, the organization that manages the collection of daytime fees and marine terminal operators’ participation, suggested that port authorities provide a staging area for those truckers waiting to pick up cargo during the off-peak shift that could provide a place for them to rest, eat, and access restroom facilities. Stakeholders at all major West Coast ports have taken a number of actions to address impacts from larger ships, alliances, and the provision and condition of chassis, according to the stakeholders we interviewed. Some efforts have been undertaken in a collaborative manner, while others have been pursued individually by stakeholders. These efforts seek to maximize competitive advantages for a port complex or a private entity to maintain or secure shipping business. Illustrative examples include the following: In May 2015, the Ports of Los Angeles and Long Beach created the Supply Chain Optimization Steering Committee to organize supply-chain stakeholder working groups. One working group facilitated a chassis pool that allows any chassis in the combined fleet to be utilized by any authorized user and expands the number of pick-up and drop-off locations. Other working groups are addressing container terminal optimization, key performance indicators, information flow and data solutions, off-dock solutions, drayage (the movement of containers in and out of ports by truck), and intermodal rail.
In February 2016, the Port of Oakland allocated $1.5 million to reimburse marine terminal operators up to 50 percent of their costs for operating night gates over a 12-week period. In June 2016, the port allocated another $1.7 million for these extended night gate operations. According to an Oakland port official, the subsidy was instituted in response to increased cargo flows at several of its marine terminals following the cessation of operations of its second-largest terminal operator due to bankruptcy in early 2016. According to this official, the largest terminal operator reported about 600 container transactions every night and 1,200 on Saturdays, easing pressure on peak daytime gate hours. This terminal operator has begun assessing a flat fee of $30 on all loaded import and export containers to continue night gate operations.

In August 2015, the Ports of Seattle and Tacoma formed the Northwest Seaport Alliance as a way of staying regionally competitive against other North American ports. Each port maintains its own board of commissioners. By combining resources and jointly managing terminal assets, the alliance hopes to undertake specific facility improvement projects that might have been infeasible as separate port entities. For example, in April 2016, the two boards voted to approve $141 million for infrastructure improvements at one terminal at the Port of Tacoma, as well as to extend the marine terminal operator’s lease at this terminal for an additional 20 years. The alliance plans a similar terminal modernization project in Seattle. Through the alliance, the two ports jointly advocate for regional projects to the Washington state legislature, according to port officials. The alliance has also developed a unified marketing program to communicate its combined competitiveness to shippers, ocean carriers, and the public.

Terminal operators have also sought to address container yard acreage and gate hour constraints.
For example, some terminal operators, such as those at the Ports of Oakland, Los Angeles, and Long Beach, have instituted trucker appointment systems that allot a window of time for truckers to arrive at the terminal. This allows operators to approximate when a container is expected to leave the terminal and enhances their ability to effectively stage a container for efficient pick-up. However, appointment systems can be costly to set up, and traffic outside the port and other factors can force appointment windows to be missed, according to some stakeholders. One terminal operator we interviewed is using a 100-acre off-dock container storage depot in Southern California, located some distance from the port, where shippers can pick up and drop off containers. According to this terminal operator, such facilities allow truckers to move containers more efficiently because they can avoid congested roads near the ports.

Port stakeholders interviewed as part of our case studies highlighted some key challenges to mitigating infrastructure and operational constraints stemming from global shipping changes. Port stakeholders, in particular state and local governmental agencies, said that aligning competing public and private priorities or interests to fund or construct port infrastructure projects was difficult. We have previously found that freight projects may not compete well with other types of transportation projects for limited available public funds because their benefits are not always obvious to the public. State and local government officials we interviewed noted that this tension may be particularly acute for ports located in large metropolitan areas, such as the major West Coast ports. These areas are experiencing significant population growth, along with competing demands for housing, transit, and environmental protection.
For example, plans for a near-dock railyard at the Los Angeles-Long Beach port complex could falter because of local lawsuits over its potential environmental impact. Funding port or freight infrastructure for large volumes of “discretionary” cargo (that is, cargo not destined for the local or regional markets, but bound for the national market) can also be perceived as heightening overall congestion or producing negative effects in local communities. Moreover, as we have previously found, federal programs that can be used to address certain freight-related issues do not always align with local priorities, and state and local transportation funds are often limited and prioritized for operating and maintaining existing highway infrastructure. According to port authorities we spoke with, local and state DOTs are beginning to recognize the importance of freight mobility, but the voting public may be less supportive of freight projects and, as a consequence, transportation funding is often focused on commuters. Private sector interests, such as shifts in shipping alliances, may also conflict with planning efforts to facilitate cargo movement. It can be difficult for port authorities to target their investments in infrastructure projects that will yield sustained improvements in cargo movement because of evolving industry alliances. For example, new shipping alliance agreements may require all vessels within the alliance to call on specified port terminals, quickly changing the flow of cargo through a port. These changes may conflict with what importers, exporters, or port authorities may believe to be the best-suited terminal for their respective needs (i.e., whether the appropriate terminal or inland facilities have the capacity, such as on-dock rail, to handle additional volumes).
For example, at the Port of Seattle in 2013, after a shift in an alliance, a major ocean carrier directed its vessels to call on a different terminal, moving from a larger terminal to a smaller one, and increasing congestion within that terminal. Additionally, marine terminal operators may abruptly end operations at a port, even when they have a long-term contract, if the operators are not able to attract sufficient cargo volumes to sustain profitability. This situation happened in Oakland in 2016, when a terminal operator filed for bankruptcy 6 years into a 50-year lease, publicly stating that it was choosing to concentrate its resources at its other terminals, including those at the port complexes of Los Angeles-Long Beach and Seattle-Tacoma.

Some state and local government officials from our case studies of port complexes said that information on port performance and supply chains would help them target operational and infrastructure efforts. For example, local officials in Seattle indicated they have some information on truck counts, but lacked information about cargo loads (e.g., number of empty trucks versus trucks carrying heavy hauls) and their interim and final destinations. Officials explained that having that information would help them design and prioritize street improvements, such as signal timing, turning radius, and pavement conditions on certain streets. Similarly, officials from the Southern California Association of Governments said that while they were able to conduct roadside truck counts to tally the number of trucks coming to and leaving the port, they did not have information about the origins and destinations of these trucks. Moreover, these limited counts can become quickly outdated for planning purposes, and agency officials stated they lack the resources to continually gather new data.
Without these data, local and regional planners may be less likely to use a performance-based approach and less able to justify transportation projects, such as port-related projects relative to other modes or priorities. Similarly, limited information on supply chain practices can lead to public investments underperforming. For example, use of the Alameda Corridor—a 20-mile freight rail expressway linking the ports of Long Beach and Los Angeles to the nation’s transcontinental rail network—was lower than expected because it was anticipated that 50 percent of port cargo that left southern California by rail would do so using the corridor. However, after operations began in April of 2002, only about 30 percent of the ports’ containerized cargo was using the rail corridor. A 2004 study revealed that a new cargo handling practice called transloading was occurring in the transportation logistics industry. This practice entails moving containerized imports by truck from ports to local and regional distribution centers. The cargo then is transferred from 40-foot ocean containers to longer domestic containers before being shipped by rail from loading points that bypass the corridor. Transloading practices are used by shippers to more efficiently control inventory by postponing domestic destination and volume decisions until after cargo arrives in the United States. According to officials from the Alameda Corridor Transportation Authority, transloading partly explains the lower than expected use of the Alameda Corridor. Ports that are already strained and experiencing congestion may be particularly vulnerable to events such as natural disasters or disruptions that can further impede the movement of cargo through ports and, in turn, impact shippers’ supply chains. 
When we asked representatives from selected industry groups about recent disruptive events to shippers’ supply chains, almost all of them told us that at least some shippers experienced impacts to their supply chains from recent port disruptions. Most industry groups brought up the 2014 and 2015 West Coast labor negotiation as the most disruptive event in the last 5 years; some also mentioned other disruptive events. Of our 21 selected industry groups, over half, or 13 industry groups, told us some shippers took actions in response to the 2014 and 2015 disruption, such as modifying their supply chains. However, about one-third, or 6 industry groups, said some shippers had difficulty making such modifications due to specific firm or commodity attributes or prohibitively high costs. Other industry groups said shippers made no supply chain modifications because they were able to weather the disruption. Our analysis, using U.S. Census international trade data from the first quarter of 2005 through the first quarter of 2016, found some significant changes in trade flows, especially decreased exports, during the disruption period, suggesting the disruption may have had an impact on exports from West Coast ports. Almost all of our 21 selected industry groups said that shippers in their respective industries using major West Coast ports were affected by recent port disruptions. Specifically, representatives from 18 such groups told us that at least some shippers experienced some impacts to their supply chains from recent disruptive events such as the 2014 and 2015 port disruption, while about half (11 out of 21) said that all or a majority of shippers who ship out of West Coast ports were affected by that disruption. Interviewees said the disruption in 2014 and 2015 mainly affected containerized shipments. Some industry groups also told us that other events such as severe weather events have also caused port disruptions in the last 5 years. 
For example, winter weather conditions have closed the Snoqualmie Pass on Interstate 90 in Washington State—a critical transportation corridor linking the port of Seattle to the agricultural industries of Eastern Washington—with little advance warning, making it difficult at times to arrange reliable transportation to and from the port, an industry group said. In addition, a severe winter in 2013-2014 in the Plains resulted in rail backups to West Coast ports for Midwest corn growers and exporters, industry representatives told us. Representatives from one industry group said it is difficult to make contingency plans for unpredictable events like these, particularly since shippers make shipment decisions months in advance. Most industry group representatives we spoke with said the main types of short- and long-term financial and business impacts shippers experienced as a result of the 2014 and 2015 port disruption included increased costs, decreased revenue, and shipment delays (see table 2). For example, almost all of the industry groups (17 out of 21) told us shippers experienced some form of increased costs, and several industry groups experienced multiple types of increased costs. Specifically, 13 of those 17 industry groups noted shippers experienced increased transportation or storage costs, and 6 noted shippers also experienced late fees imposed for late shipments. Some of the impacts were short-term—such as increased costs or shipment delays—while other impacts were of longer-term duration, such as the loss of sales, customers, or market share. In order to mitigate some of the impacts of the disruption, over half of the selected industry groups (13 out of 21) told us at least some shippers responded to the 2014 and 2015 port disruption by temporarily modifying their supply chains.
Modifications included diverting shipments to other ports or alternate modes of transportation—mostly air freight—or diverting shipments intended for the export market to the domestic market. According to these industry groups, all of these supply chain modifications increased costs or decreased revenues. About one-third, or 6 industry groups, said some shippers had difficulty modifying their supply chains or making alternative shipping arrangements due to specific firm or commodity attributes or simply due to the prohibitive increased costs of doing so. Other industry groups said shippers in their industry did not deem it necessary to make such arrangements because, for example, their shipments were not perishable or time sensitive (see table 3). Following the end of the recent port disruption, industry groups said shippers in their industry maintained and permanently implemented some of the supply chain modifications they made, such as shipping some commodities through East or Gulf Coast ports instead of West Coast ports, in order to diversify their shipping routes and minimize their risk exposure to West Coast ports in the case of future disruptions there. Following earlier disruptions at ports, such as the 2002 labor dispute and work stoppage at the major West Coast ports, or other events such as hurricanes, some companies made significant modifications to their supply chains, shipping practices, and business models that diversified the number and location of ports they used. Some shippers also made contingency plans as much as a year prior to the ILWU-PMA labor contract expiration in July 2014 to reroute cargo or to ship commodities earlier than usual. Those industries or shippers that made such contingency plans told us they were well-positioned to do so because of commodity, firm or industry characteristics. 
Specifically, well-positioned shippers included bigger firms that could manage higher transportation costs as well as those that already had diversified geographic supply chains. Based on our interviews with 21 selected industry groups, we found that certain firm or commodity attributes can affect the extent to which a port disruption impacts a firm or industry’s supply chain as well as shippers’ ability to respond to such events. During a port disruption, a shipping route that is typically the most economical or efficient might become less cost-effective or even infeasible, according to the Transportation Research Board. As a result, shippers may strive to make alternative plans to minimize any additional costs and time. After speaking with the 21 selected industry groups, we found several commodity attributes, as discussed below, that were frequently important in influencing the ability of shippers to respond to a disruption. Some industries or commodities might possess several of these attributes simultaneously, which may complicate their shipping options further.

Geography of a shipper’s supply chain: Fourteen industry groups said the location of many shippers and their individual supply chains—namely, where the product is produced (or in the case of agricultural commodities, grown) and sold—affects the magnitude of impacts on and responses by shippers in an industry to a port disruption. Geographic factors influence supply chain decisions, as shippers search for routing and shipping options with low costs. For example, the entire U.S. commercial supply of almonds is grown in California, near the Port of Oakland. As a result, shipping almonds from other ports can be too costly, according to an industry group. A disruption at the Port of Oakland might impact these exporters to a greater degree than those exporters whose products are grown or manufactured in multiple locations and, therefore, near multiple ports that can be reached at low transportation costs.
Time-sensitivity: Six industry groups told us their products or shipments are time-sensitive because they are seasonal, perishable, or rely on a “just-in-time” business model. For example, apple industry representatives told us shipments of apples follow regular and predictable growing and harvesting seasons. Some shipments are also time-sensitive if they are meant to reach the market in time for a particular shopping season driven by consumer demand—such as the back-to-school or winter holiday seasons. For example, about 60 percent of total toy sales occur in advance of the winter holiday season, according to an industry group we spoke with. Consequently, the peak shipping season for toys is from late August to November. Perishable agricultural products are time-sensitive because they have a limited shelf-life. For example, exports of chilled meat to East Asia are time-sensitive because transit across the Pacific Ocean takes several weeks from the West Coast and the chilled meat has a limited shelf-life, according to an industry group we spoke with. In addition, several industry groups told us they rely on a “just-in-time” business model, making their shipments time-sensitive as well.

Profit margins: Six industry groups told us that shippers with low profit margins may be more affected by disruptions at ports they use because alternative routes may not be cost-effective. For example, soybean shippers told us profit margins in their industry can be as low as 1 to 2 percent, and any additional costs, such as late fees assessed by a trade arbitration organization when shipments are delayed, potentially result in firms losing money. In addition, apparel products have low gross profit margins, which preclude them from switching from ocean freight to more expensive air freight, according to an industry representative.

Dependence on imports/exports: Ten industry groups told us their industries were highly reliant on imports or exports.
For example, some retail industry groups said nearly all of what they sell in the United States is imported, while other manufacturers said they are highly reliant on imported components. Specifically, 98 percent of apparel sold in the United States is imported, mostly from Asia, according to an industry representative we spoke with. In addition, about 90 percent of wood furniture sold in the United States is imported, according to industry representatives we interviewed. Other industries may have highly trade-dependent niche products. For example, the vast majority of the nation’s hay crop is used domestically, but some types of hay are almost exclusively exported, industry representatives told us. If importers or exporters in an industry are almost entirely reliant on ports for market access, then any disruption at those ports would likely have large impacts on those firms.

Storage or inspection requirements: Six industry groups told us that their industry has specific storage or inspection requirements relevant to importing or exporting their cargo and that these requirements affect their ability to modify their supply chain in response to port disruptions. The requirements cannot be met by all ports because some ports, or port regions, lack the necessary facilities. For example, according to an industry expert, petroleum coke, a byproduct of the oil refining process used for energy in some other countries, has specific storage requirements due to environmental concerns. Likewise, as part of the United States Department of Agriculture’s (USDA) port-of-entry inspections, some agricultural imports must be treated (e.g., with chemicals, heat, or irradiation) prior to their release in the U.S. market because of concerns with plant pests not known to occur in the United States but prevalent in the country of origin. For example, imports of Chilean grapes must be fumigated with methyl bromide prior to release in the United States.
As a result, disruptions at a port with the specialized facilities might have more of an impact on those industries since fewer ports can handle rerouted cargo, industry groups told us.

Our analysis of U.S. Census international trade data from January 2005 to March 2016 found some significant changes in the dollar value of trade flows at certain ports coinciding with the 2014 and 2015 West Coast port disruption. Specifically, our statistical analysis showed total exports at major West Coast ports were significantly lower in this time frame than during other quarters included in the analysis, given other established trends in the economy and other factors we were able to control for. Trade flows can be affected by many factors, so it is difficult to know the extent to which the 2014 and 2015 port disruption contributed to variations in trade flows without considering the impact of other factors that can affect trade. Therefore, we developed a statistical method to examine trade flows at large U.S. ports during the West Coast port disruption. The model helped identify whether the level of trade at West Coast ports was significantly different compared to other quarters included in our analysis, after controlling for factors that might influence trade over time. Specifically, the model controlled for some variables that might influence the level of trade over time, such as trends in trade volumes over time, seasonality, and the influence of the recession of late 2007 to 2009. It also controlled for a set of variables to capture the influence of specific characteristics of each port, each commodity category, and each trading partner country.
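The modeling approach described above amounts to a panel regression of trade values (typically on a log scale) on port, commodity, and trading-partner controls, a time trend, seasonal indicators, and an indicator for the disruption quarters. The following is a minimal sketch of that technique on synthetic data; it is not GAO's actual model (that is described in appendix IV of the report), and the port counts, quarter indices, and effect sizes here are invented for illustration. Only NumPy's least-squares solver is used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical panel: 4 ports x 3 commodity groups x 45 quarters (2005Q1-2016Q1).
n_ports, n_comms, n_q = 4, 3, 45
disruption = {38, 39, 40}  # quarter indices standing in for 2014Q3-2015Q1

rows, y = [], []
for p in range(n_ports):
    for c in range(n_comms):
        for t in range(n_q):
            west = p < 2                         # first two ports play "West Coast"
            hit = west and t in disruption
            log_val = (10.0 + 0.3 * p + 0.2 * c  # port / commodity levels
                       + 0.01 * t                # long-run trend
                       + 0.05 * (t % 4 == 3)     # simple seasonality
                       + (-0.5 if hit else 0.0)  # invented disruption shock
                       + rng.normal(0, 0.02))    # noise
            rows.append((p, c, t, int(hit)))
            y.append(log_val)

rows, y = np.array(rows), np.array(y)

def dummies(vals, k):
    """k-1 indicator columns, first category as the base."""
    d = np.zeros((len(vals), k - 1))
    for i, v in enumerate(vals.astype(int)):
        if v > 0:
            d[i, v - 1] = 1.0
    return d

X = np.column_stack([
    np.ones(len(y)),               # intercept
    dummies(rows[:, 0], n_ports),  # port fixed effects
    dummies(rows[:, 1], n_comms),  # commodity fixed effects
    rows[:, 2],                    # linear time trend
    dummies(rows[:, 2] % 4, 4),    # quarter-of-year seasonality
    rows[:, 3],                    # disruption-quarter dummy (estimate of interest)
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
effect = beta[-1]
print(f"estimated disruption effect: {effect:.2f} log points "
      f"(~{(np.exp(effect) - 1) * 100:.0f}% change in trade value)")
```

The coefficient on the disruption dummy is in log points; exp(coefficient) - 1 converts it to an approximate percent change, which is how a finding such as exports being "about 50 percent lower" can be read off such a model.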
Our analysis examined: (1) whether the dollar value of all imports and exports at West Coast ports (and ports at other coasts), on average across ports, commodities, and trading partners, was significantly different during the port disruption period compared to other quarters included in our analysis; (2) whether the dollar value for our 13 selected import commodities in aggregate and our 14 selected export commodities in aggregate appeared to be different during the relevant timeframe than during other quarters; and (3) whether trade flows to and from U.S. airports along the three coasts were different during the disruption timeframe compared to other quarters. Appendix IV provides more detailed information on our statistical model. For all vessel exports as well as for some of our selected export commodities, we found significant changes in export levels at West Coast ports during the first quarter of 2015. By contrast, import levels for all imports as well as for all of our selected import commodities at West Coast ports during the relevant time frame were not statistically different than during other quarters, given established trends and our other control variables.

Exports: For vessel exports, it appears that the port disruption of 2014 and 2015 coincided with reduced exports from large West Coast ports. First, we found that the extent to which total exports were lower during this time frame than in past quarters was not constant over the three quarters examined. Specifically, during the third quarter of 2014, exports from West Coast ports were not statistically lower than other quarters, while in the fourth quarter of 2014, our findings suggest, with only weak statistical significance, that exports were likely somewhat lower than past quarters.
By the first quarter of 2015, we find that, on average across our port, commodity, and trading partner observations, exports appear to have been about 50 percent lower than past levels based on what we could control for in the model. These findings suggest that the reduction in the value of exports may have been in the billions of dollars. However, it is important to note that there could be other elements at play that also had an influence on trade flows. Second, we ran a separate regression to examine trade flows for 14 specific West Coast export commodities. We found that those exports on average were not different than past levels in either of the last two quarters in 2014, but were lower than past levels in the first quarter of 2015. In addition, those exports remained lower than past levels after the port disruption was resolved, possibly indicating lingering effects from the disruption (e.g., resulting from lost customers or market share, or permanent diversions of cargo to other ports), some other factor not accounted for in the model, or a combination of both. These results could be consistent with information we gathered during our interviews. Namely, some exporters experienced revenue losses, including lost customers or market share, and exports in their industry are not back to pre-disruption levels.

Imports: We did not find that West Coast port vessel imports were statistically different during any of the three quarters that correspond with the 2014 and 2015 port disruption when compared to imports during other quarters included in our analysis. We also found that there were no differences in total imports at East Coast ports during this time frame compared to other quarters included in the analysis.
However, our model did find that the dollar value of imports at Gulf Coast ports was substantially higher during the three quarters of the port disruption than other quarters included in the model, and those imports continued to be higher than past levels in every subsequent quarter at Gulf Coast ports, up to and including the first quarter of 2016. These findings may suggest that some factor or factors other than those considered in our analysis are related to rising imports in the Gulf region in recent years. It is also possible that diversion from West Coast ports may have played some role in these increases during the port disruption, but because we found no statistical evidence that imports were lower at West Coast ports, it would appear that diversion likely played a small role, if any. Second, we ran a separate regression to examine whether trade flows for 13 specific West Coast imported commodities were statistically different during the disruption period compared to other periods after controlling for the various factors mentioned above. For these 13 imported commodities, consistent with our findings for all commodities imported from West Coast ports, we found that trade was not different from past levels in any of the time frames examined. Our analysis also indicates that, for imports, there may have been some shifts to air freight during the disruption period. We used our model to examine whether trade flows at large U.S. airports exhibited any unusual changes during the same quarters of the 2014 and 2015 disruption. We found that imports were statistically higher during the last two quarters of 2014 but not during the first quarter of 2015, compared to past levels at the West Coast airports. However, we found no changes in air imports at East or Gulf Coast airports at any point in the time frame examined in our analysis.
While our findings may suggest that some imports that might have typically been shipped by sea to West Coast ports were diverted to West Coast airports—which we also heard during some of our interviews with trade groups—it is possible that other factors influenced the trends we found at West Coast airports.

DOT’s freight-related activities have grown increasingly multimodal and inclusive of ports since 2012. In the draft National Freight Strategic Plan, issued in October 2015, DOT signaled the importance of ports to the freight system and, through the inclusion of ports in two new funding programs, DOT is better positioned to support ports than in previous years. However, there are substantive gaps in the supply chain information DOT (and state and local governments) have available to them to support freight efforts. Disruptions at ports can have ramifications throughout industry supply chains. Based on leading practices in capital decision making, we previously recommended that DOT develop a freight-data improvement strategy to address gaps related to, among other things, local impacts of freight congestion. These practices emphasize that quality information gives organizations the ability to support strategic as well as operational decisions. Since the passage of MAP-21 in 2012 and the FAST Act in 2015, DOT’s freight-related activities have increased, with more focus on multimodal freight infrastructure, including ports. For example, DOT’s draft National Freight Strategic Plan, issued in October 2015, acknowledges the importance of ports and includes several port strategies to advance freight goals. The draft plan discusses the need to upgrade water and landside port facilities and acknowledges that ports face many challenges as they adapt to larger vessels and other global shipping changes.
The draft plan also includes some strategies that could help cargo move through ports more smoothly, including facilitating intermodal connectivity and supporting efforts, such as chassis pooling, to address port congestion. The draft strategic plan was released for public comment and DOT officials anticipate finalizing the strategic plan by the end of 2017, in accordance with the statutory requirement of the FAST Act. Officials noted that many of the strategies identified in the draft will be updated in light of new programs and authorities, such as new funding programs, provided by the FAST Act. DOT officials stated that the department was working to address multimodal infrastructure and pushing for a more comprehensive approach to freight prior to these acts, but the new authorities enable DOT to take a less highway-focused approach. As a result, at this time, the precise nature and scope of the department’s port-related strategies are a work in progress. DOT is also pursuing an increasingly multimodal perspective in its efforts to identify and define the national multimodal freight network, or map, which includes ports and intermodal connectors that meet certain criteria. Previously, MAP-21 directed DOT to identify a national freight network of highways, of which the Primary Freight Network was the core. The FAST Act required DOT to develop, and release for public comment, an Interim National Multimodal Freight Network by June 2016 and a final National Multimodal Freight Network by December 2016. The act required that freight facilities with certain characteristics—for example, public ports with total foreign and domestic trade of at least 2 million tons—be included in the Interim National Multimodal Freight Network. DOT released the interim network on time and the public comment period closed on September 6, 2016. DOT plans to issue a final National Multimodal Freight Network by December 4, 2016, according to officials.
DOT officials indicated that the final network is likely to include port and port-related facilities due to their importance to freight. Some state and regional government officials we interviewed acknowledged that DOT is making progress toward developing a more complete multimodal network that accurately identifies freight facilities. However, most raised concerns that the interim network has errors, omissions, and disconnected segments of roads. For example, freight staff in Washington State’s Department of Transportation indicated that some important road connections to the Port of Tacoma were missing in the interim network. DOT officials emphasized to us that anyone may submit comments addressing any aspect of the network, including any errors, omissions, or disconnected network segments that they feel should be addressed in the final network. DOT is also positioned, through two new freight funding programs established by the FAST Act, to potentially fund more port, freight rail, and intermodal projects than in previous years. In the National Highway Freight Program (NHFP), formula funds are allocated to states based on similar formula factors used in federal highway programs. Up to 10 percent of the funds may be used for freight rail and intermodal projects, including projects at ports. Thus, funding is spread across the country, and states will decide whether to prioritize port projects or to focus on other freight transportation projects. DOT officials noted that many states have established freight advisory committees that have a role in prioritizing freight projects. Given states’ historical focus on highway infrastructure, states may prioritize those project types over port projects. Of the approximately $1.1 billion in total formula funds, DOT estimated that California and Washington State—where the largest West Coast ports are located—will be allocated approximately $126 million in fiscal year 2016.
State officials we interviewed from these two states indicated that port or near-port projects (e.g., road or rail projects in close proximity to a port and likely to benefit port cargo) are among the projects they expect to consider for these funds. For the new FASTLANE program, DOT solicited project proposals from a broad group of constituencies (highway, rail, ports, etc.); nearly $800 million was available for competitively awarded grants in fiscal year 2016. Over the 5-year term of the FAST Act, up to $500 million of the $4.5 billion authorized may be used for freight rail, intermodal, and port projects. DOT officials stated they received 212 applications that requested $9.8 billion total, 35 of which were for projects at or near ports. In September 2016, DOT awarded 18 grants, including 5 port project grants. Though none of these 5 awards went to major West Coast ports, 2 of the other 13 projects are in the Seattle area—one road and one rail—and are expected to facilitate port cargo movement. According to program officials, in the first round of FASTLANE grants, the department did not target specific types of projects or synergies between projects but rather selected applications based on their individual merits and the program’s statutory requirements. According to DOT freight officials, the agency is trying to maximize public benefits and balance national priorities with local project selection. In part to facilitate ports’ access to funding sources such as these, MARAD has established the StrongPorts program, which provides a range of technical support to ports upon request, according to program officials. For example, the StrongPorts program team educated the ports of Los Angeles and Long Beach about available federal assistance for intermodal projects. 
Some port stakeholders told us these new funding programs could be important in addressing challenges facing ports, but it remains to be seen how well port projects will compete for funds against highway or other freight projects. DOT is also taking steps to develop port performance measures. As required by the FAST Act, DOT’s Bureau of Transportation Statistics (BTS) recently convened a group of port stakeholders to form a Port Performance Freight Statistics Working Group and instituted a port performance freight statistics program. This effort is focused on developing standard freight measures that DOT could publicly report. According to industry groups we interviewed, port performance measures—such as truck waiting times to pick up cargo and terminal throughput activity—could be used by shippers to assess ports or port operations and adapt supply chains accordingly. According to DOT officials, the group will focus on port operations, with specific priorities to be determined by members, and the department does not have specific goals for the group, beyond those outlined in statute. The working group met for the first time in July 2016, with the goal of recommending port performance measures by December 4, 2016, as required by the FAST Act. Most port stakeholders we interviewed, including participants in the working group, such as a port authority, noted that developing uniform metrics of port performance will be challenging because of differences among ports, the proprietary nature of some data, and other hurdles. However, some generally support efforts to better understand port performance. The FAST Act requires DOT to issue its first annual report on nationally consistent measures of port performance in January 2017. BTS officials explained that this time frame is short given the complexity of the topic and limited staff available in BTS for the program. 
As a result, they told us that they have begun to draft the report based on the limited data that are readily available and that it will be difficult to incorporate any of the working group’s recommendations in the first report, although they will be considered for future annual reports. Better information and analytical tools to assess supply chains may improve DOT’s freight efforts. Although DOT has information on freight movements, less is known about supply chains and how they affect the freight transportation network. DOT’s Transportation Statistics Annual Report 2015 states, for example, that while data on tonnage and the value of region-to-region commodity flows exist, data on the relationships between industry supply chains and region-to-region commodity flows have major gaps. Filling those data gaps could help, for example, guide investments in transportation facilities, assess international trade flows within the United States, and identify and address freight bottlenecks that are barriers to economic development and competitiveness, according to the report. For example, public sector decision makers do not typically have the data and analytic models to understand and incorporate into infrastructure decisions how freight moves to and from shippers’ warehouses, a critical component of a supply chain. The report also notes that while the movement of goods between ports and foreign countries is tracked continuously, the movement of international trade between ports and domestic origin for exports and domestic destinations for imports is not measured. Understanding how shippers are adapting their supply chains in light of global shipping changes could give DOT a more informed basis to assess current and future demands on the freight network and make sound infrastructure investment decisions. 
DOT’s Freight Analysis Framework (FAF)—which DOT officials said is the agency’s most comprehensive source of freight data—is not able to support supply chain analyses because it lacks key information on industries, cargo destinations, and other facets of supply chains. DOT officials, regional and state freight planners, and transportation economists we interviewed at the Brookings Institution agreed that FAF data are too aggregated for analyzing commodity flows at metropolitan levels, making it difficult to use FAF data to accurately analyze supply chains. Most public freight planners in all the major West Coast port regions that we spoke with similarly noted they have access to some data related to port cargo movement—such as the number of trucks on roads near ports, the overall value and weight of cargo, and the major commodities shipped in their area—but have much less data that track end-to-end cargo movements from origin to destination and across modes. Recent work by the Transportation Research Board (TRB) has called for developing and using better information on supply chains for public sector infrastructure decision making. For example, a 2013 TRB report noted the differences between public and private sector decision making related to goods movement and infrastructure. The report recommended freight data and modeling improvements to integrate real-world supply chain management practices with public sector decision making and the development of analytic tools to predict freight activity from the perspective of shippers, carriers, and others in the supply chain. Another TRB report called for more comprehensive and realistic information on freight movement and logistics, following freight through intermodal interchanges and identifying the locations of resources such as manufacturing and distribution facilities. 
In 2014, the National Freight Advisory Committee also highlighted the importance of addressing ports and supply chains in DOT’s freight efforts, specifically in developing the National Freight Strategic Plan. For example, it recommended that DOT address the inadequacy of multimodal freight flow (origin-destination) data and support research on better metropolitan and regional freight models, including supply chain based modeling approaches. Federal guidance and practices highlight the need for quality information for planning, effective decision making, and achieving agency objectives. For example, leading practices in capital planning emphasize the importance of good information for sound capital planning and effective decision making. According to these practices, information provided by well-planned information systems gives organizations the ability to perform analyses that can be used to support strategic as well as operational budgeting decisions. We previously found that making available good information on highway freight trends to states and the federal government could help establish relevant goals and prioritize mitigation efforts for freight-related traffic congestion. Furthermore, Federal Internal Control Standards state that quality information is vital to achieving agency objectives. These standards further define quality information as being appropriate, current, complete, accurate, and accessible. Management should use quality information to make informed decisions and evaluate performance in achieving key objectives. DOT officials we spoke with acknowledged that better information on supply chains would help DOT’s freight efforts, and part of DOT’s approach is to encourage state partners to gather and use this information. For example, in 2012, DOT issued interim guidance for the development of state freight plans. 
DOT advised states to include a discussion of the role that freight transportation plays in the state’s overall economy; identify those industries that are most important to the state; and identify what supply chains (including the transportation modes that support them) are critical to the state’s industries and exports from the state. Some of the state transportation agencies we talked to have taken steps to better understand supply chains within their state as part of developing state freight plans. For example, the Washington State Department of Transportation has identified corridors with freight-intensive land uses, intermodal facilities, and agricultural processing facilities (e.g., apple packers, dairy plants) that are part of important supply chains within the state. The FAST Act requires that states have a freight plan by December 4, 2017, to obligate National Highway Freight Program funds. DOT officials indicated, however, that it was too soon to know how, if at all, the plans states develop can help inform DOT’s freight efforts or what information on supply chains the plans will provide. However, some supply chain information can be difficult for local, state, and federal entities to obtain because it may be proprietary or expensive, become dated quickly, or be difficult to aggregate. For example, shippers may be reluctant to share information with local and state DOTs due to proprietary and competitive interests. According to most port authorities we spoke with, terminal operators are not contractually obligated to disclose, for example, information about terminal productivity or capacity as part of their lease agreements with the ports. Public agencies using private third-party data also may be subject to non-disclosure agreements governing the access to or sharing of data with other agencies, strictures that, in turn, limit the ability of state and local planning agencies to use these data. 
In addition, some state and regional government officials we interviewed explained that buying data can be too costly for cash-strapped agencies. For example, officials from a regional government noted that purchasing private real estate data on warehouse inventories could cost hundreds of thousands of dollars. Without these data, the officials told us they have difficulties modeling and targeting local infrastructure investments or making land use decisions to support freight movement. Some of DOT’s ongoing freight-related efforts could help provide information on supply chains; however, because these efforts are still in the early stages, it is not clear how, if at all, DOT plans to use that information. For example, the I-95 Corridor Coalition, which includes DOT as a member, convened a broad range of supply chain stakeholders to identify and study freight needs and selected supply chains, across all modes of transportation, along the I-95 corridor between Florida and Maine. A related effort led by FHWA focuses on “freight fluidity.” The effort focuses primarily on truck probe data (i.e., GPS data on trucks’ location and movement) and will be supplemented by multimodal data—which might include ports—as the project progresses, according to DOT program officials. Those officials said the effort plans to hold workshops in 2017 with the goal of developing white papers on applications that could support metropolitan, regional, and state multimodal freight planning. Likewise, MARAD program officials explained to us that understanding supply chain issues—such as how the opening of the expanded Panama Canal may affect trade—is one of the expectations of Gateway directors in each port region, but efforts to capture and disseminate this expertise across DOT are still at an early stage. 
DOT has also taken steps to gather supply chain information from other federal agencies, but it is unclear how the department will use this information to inform its freight efforts. For example, DOT officials said that they have been working with the Department of Commerce, which coordinates with industry representatives about supply chain issues, to better understand supply chains. Specifically, officials said they have attended the Department of Commerce’s roundtable discussions with supply chain industry members on port efficiency and competitiveness issues, at which industry members have offered and shared best practices for port-user coordination, collaboration, and information sharing. Commerce officials indicated they plan to publish a report on this initiative’s results in December 2016. Additionally, DOT officials stated that FHWA’s Freight Fluidity initiative involves close interaction with Commerce, and DOT is represented on the Department of Commerce’s Advisory Committee on Supply Chain Competitiveness (ACSCC) as an ex-officio member. DOT has not explained in the draft strategic freight plan, or elsewhere, how other agencies and sources of information fit into the department’s freight goals and objectives. DOT officials acknowledged that although they have made strides, they have yet to determine how supply chain information gathered from federal partners should be used in DOT’s freight efforts. DOT has articulated broad strategies to improve freight data and analytic tools in the draft National Freight Strategic Plan. For example, DOT officials told us they incorporated the National Freight Advisory Committee’s recommendations in drafting the plan: (1) to address the inadequacy of multimodal freight data; (2) to support research on better metropolitan and regional freight models, including supply-chain-based modeling; and (3) to evaluate freight movement from an end-to-end supply chain perspective. 
DOT officials emphasized to us that the draft strategy includes an extensive discussion of data issues, including the need for supply chain information, as one of the key issues facing the department that the strategies are meant to address. However, the draft National Freight Strategic Plan does not specifically outline how DOT plans to leverage the various ongoing initiatives, or begin new ones, to identify information sources, improve supply chain data, and advance analytic tools related to ports and supply chains. For example, the draft National Freight Strategic Plan does not explain how the department will use the supply chain information that could come from existing efforts to inform freight planning and programming or lay out a specific path for how the limitations of existing sources, such as the FAF, will be overcome. DOT may be able to address supply chain information needs through its effort to develop a freight data strategy. In our 2014 report on freight-related traffic congestion, we found a number of data limitations that, if resolved, could assist DOT in prioritizing projects to mitigate freight-related community impacts. Furthermore, we also found in that report, based on leading practices in capital planning, that without a written strategy defining clear missions and desired outcomes related to improving data on freight-related traffic congestion, DOT may miss the opportunity to advance its data-improvement efforts and clarify its national role in supporting the freight infrastructure critical to supply chains. DOT officials told us that they plan to develop the recommended freight data strategy in conjunction with the finalization of the National Freight Strategic Plan at the end of 2017, either as part of the strategic plan or as a standalone document. 
Including supply chain information in the development of this data strategy may provide an opportunity for DOT to think more comprehensively and strategically about current and planned freight data efforts. Doing so could help ensure the agency obtains the supply chain information needed to support its port-related efforts and advance national freight policy goals such as enhancing resiliency to freight disruptions. The United States is part of a global economy, and industry supply chains are the backbone of international trade and commerce. Ports, an important segment of the U.S. freight network, are critical to the efficient movement of freight that countless supply chains depend upon. Ports face an array of challenges, including increasing congestion, and are adapting operations and infrastructure to remain competitive amid significant global shipping changes, such as increasing vessel sizes and shifting alliances among ocean carriers, which are exacerbating existing conditions. These changes affect ports and the cargo moving through them in sometimes unexpected ways, further challenging efforts to effectively anticipate and plan for the changes. A disruption at a port can ripple throughout a supply chain and have business and economic impacts. Shippers, and their supply chains, have varying degrees of dependence on specific ports based on their industry, traded commodities, and other attributes. When the ports that shippers use are not functioning as expected, because of disruptions or simply endemic congestion, shippers face higher costs, decreased revenues, and delays. Including information on supply chains as part of DOT’s existing effort to develop a written freight data strategy provides an opportunity for the agency to think more comprehensively about the information needed to support its freight efforts, including further refining the objectives and goals in its National Freight Strategic Plan. 
Our review of the 2014-2015 disruption at West Coast ports and the resulting impacts on the supply chain highlights DOT’s need for additional supply chain information. Federal guidance and leading practices in capital planning emphasize the use of good information to achieve agency objectives. Including information on supply chains in DOT’s freight data efforts could be beneficial. As we previously recommended, and DOT agreed, a freight data strategy that addresses freight trends and freight-related congestion impacts would help to better define the agency’s role in this area. If DOT develops the freight data strategy in a way that includes information about supply chains, then it may be even more effective in providing direction to the various efforts under way, helping to close existing gaps or identifying new efforts that could be undertaken to further DOT’s freight goals. As shipping and supply chains change, if DOT and other public officials who make decisions about port and near-port infrastructure do not anticipate how demand for that infrastructure will change, then new investments might not provide the full benefits expected or operate as well as hoped. Likewise, as DOT continues to develop its freight efforts, better supply chain information could help the department’s decision making, such as by prioritizing freight infrastructure needs and achieving its freight policy goals. 
To inform DOT’s development of its national freight strategy and associated freight efforts, such as states’ development of freight plans, newly established freight funding programs, and advancing DOT’s efforts to implement national freight policies, we recommend that, in the development of the freight data strategy, the Secretary of Transportation include a specific plan to identify: appropriate freight data sources, information, and analytic tools for transportation modes involved in the freight network and supply chains; data gaps that could help both the agency and state and local governments in the development of their freight plans, and an approach for addressing obstacles to developing high-quality, reliable supply chain information; current and planned efforts that can provide insights into supply chains and their impacts on freight networks; and how DOT plans to use the supply chain information and analytical tools to inform freight planning and programming. We provided a draft of this report to DOT, DOC, FMC, and USDA for review and comment. DOT provided a letter stating it concurred with our recommendation (see app. V). DOT, as well as DOC and FMC, provided technical comments, which we have incorporated into the report where appropriate. USDA did not have any comments on the draft report. We are sending copies of this report to the Department of Transportation, the Department of Commerce, the Federal Maritime Commission, the Department of Agriculture, and interested congressional requesters. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. 
This report addresses the following objectives: (1) how recent changes in global shipping have impacted the movement of cargo at major U.S. West Coast ports and how these ports and their stakeholders have responded; (2) how selected shippers have been impacted by and responded to disruptions at West Coast ports during 2014-2015, as well as other recent or potential disruptions; and (3) how the Department of Transportation’s (DOT) current freight-related efforts support cargo movement through ports and whether these efforts can be improved. To understand how global shipping changes have affected ports, we conducted three in-depth case studies of the largest port complexes on the West Coast—Los Angeles-Long Beach, Oakland, and Seattle-Tacoma—which were selected based on their total trade value (imports and exports). In 2015, the Ports of Los Angeles, Long Beach, Oakland, Seattle, and Tacoma handled 88 percent of total West Coast port volume. For the purposes of this report, we define a port as the area “inside the gate” and under the control of the local port authority or marine terminal operator, where cargo is loaded and unloaded to and from ships. We refer to a “port complex” as encompassing one or two ports and the nearby roadways, rail, bridges, and intermodal facilities (i.e., connectors) on which cargo arrives at or departs the port. The results of these case studies are not generalizable but do provide insights regarding port, state, local, and private-sector roles and experiences in cargo movement constraints from global shipping changes and efforts to address these constraints. These case studies included site visits of facilities inside the gate (such as container yards and on-dock rail) and outside the gate (such as adjacent local streets and neighborhoods). 
We interviewed stakeholders and reviewed relevant documents on planning and projects, including the California and Washington freight mobility plans and similar plans issued by the metropolitan planning organizations for each of the port complexes. Based on these interviews and documents, we identified the infrastructure projects and operational actions undertaken by stakeholders to address impacts from global shipping. We then confirmed with the port authorities that these projects were significant and confirmed the information presented about each project or action. Table 4 describes each of the stakeholders we interviewed as part of each case study’s site visit. Additionally, we interviewed representatives from the American Association of Port Authorities, the Pacific Maritime Association, the International Longshore and Warehouse Union, and the California Association of Port Authorities, as well as the port authorities of two smaller West Coast ports (San Diego and Portland), one East Coast port (New York/New Jersey), and one Gulf Coast port (Houston). These interviews provided additional context for the constraints on cargo movement at West Coast ports created by global shipping changes, as well as impacts on the national freight network and supply chains. We selected the smaller West Coast ports because of their relatively larger size (in terms of twenty-foot equivalent units (TEU), a measure of volume) and for geographic diversity, among other reasons. Similarly, we selected the ports of New York/New Jersey and Houston because these ports are the largest on their respective coasts, handling the most trade (in terms of TEUs) in 2013. We also reviewed a selection of governmental reports, non-governmental research, and academic literature on global shipping changes and their impact on cargo movement published since 2005, including reports recently issued by the Federal Maritime Commission and the Transportation Research Board. 
We identified these articles and reports through our interviews and by conducting a literature search. Search terms included ones pertaining to our West Coast port complexes and related subjects, such as “ocean carriers and alliances,” “intermodal supply chain and logistics shifts, growth, trends,” and “chassis supply, shortages.” Various databases were used, including ProQuest and Transport Research International Documentation. We determined that the literature cited in our report was sufficiently reliable for our research objective of describing the impacts on cargo movement from global shipping changes and actions taken by major West Coast ports. Our work is also informed by prior GAO reports on freight mobility, intermodalism, and marine transportation finance. To assess how shippers have been impacted by and responded to port disruptions, we conducted semi-structured interviews with one or more representatives of 21 industry trade groups representing shippers. To select the industry trade groups we interviewed, we first identified the top 30 commodities imported and exported through major West Coast ports by analyzing U.S. international trade data. We identified the top imported and exported commodities by dollar value and by weight (in kilograms) from each of the three major West Coast port regions: Los Angeles/Long Beach, Oakland, and Seattle/Tacoma, as well as at all three port regions combined. Because of the many different top commodities at the three port regions and limited resources for interviewing relevant industry association groups, we chose a nongeneralizable sample of about 27 commodities that represented diversity in terms of geography at the three port regions, perishability, mode of transportation, high or low dollar value, high or low weight, as well as commodities that represented different agricultural, retail, or manufacturing sectors. 
The 27 commodities consisted of 13 imported commodities and 14 exported commodities. We identified, via Internet searches, and then interviewed relevant industry groups that represented those 27 commodities with either an emphasis on trade or a regional West Coast emphasis. We shared our list of potential industry groups with the U.S. Department of Agriculture, as well as the Department of Commerce, to see whether agency officials believed our list of industry groups adequately represented the relevant commodities, and we then incorporated their suggestions to pare down the initial list to 21 industry groups (see table 5). Additionally, to understand the logistical impacts of disruptions, we interviewed a selection of customs broker and freight forwarder regional associations, which represent logistics handlers. We chose a nongeneralizable sample of 9 regional associations from the 28 affiliated associations of the National Customs Brokers and Forwarders Association of America. Of the 9 affiliated associations, we chose associations that represented four West Coast port regions, three East Coast port regions, and two Gulf Coast port regions in order to provide insight into the potentially different impacts that port disruptions have on logistics handlers around the country. To complement our qualitative analysis, we conducted statistical analyses of U.S. international trade data maintained by the Census Bureau. The data we collected covered all imports and exports from January 2005 through March 2016. We used Census trade data at the port, month, country of origin or destination, and six-digit product code level. We aggregated those data to the port, quarter, country, and two-digit product code level. 
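The aggregation step described above — rolling monthly records at the six-digit product code level up to quarterly totals at the two-digit level — can be sketched in a few lines. The record layout and toy values below are illustrative assumptions, not the Census Bureau’s actual file format.

```python
# A minimal sketch of the aggregation step, using the standard library only.
# Record fields (port, month, country, 6-digit HS code, value) and the toy
# values are illustrative assumptions; the actual analysis used Census
# international trade data.
from collections import defaultdict

def to_quarter(month_key: str) -> str:
    """Convert a 'YYYY-MM' month key to a 'YYYYQn' quarter key."""
    year, month = month_key.split("-")
    return f"{year}Q{(int(month) - 1) // 3 + 1}"

def aggregate(records):
    """Roll monthly port/country/6-digit-code records up to
    port/quarter/country/2-digit-code totals."""
    totals = defaultdict(float)
    for port, month, country, hs6, value in records:
        key = (port, to_quarter(month), country, hs6[:2])
        totals[key] += value
    return dict(totals)

records = [
    ("Los Angeles", "2014-10", "China", "851712", 5.0),
    ("Los Angeles", "2014-11", "China", "851770", 3.0),  # same 2-digit code (85)
    ("Oakland",     "2015-01", "Japan", "080810", 2.0),
]
print(aggregate(records))
```

In this example, the two Los Angeles records collapse into a single fourth-quarter 2014 total for two-digit code 85, mirroring how the monthly six-digit data were condensed before estimation.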
We estimated a statistical model designed to examine whether exports and/or imports at West Coast ports during the three quarters of the disruption were different than in other quarters included in the analysis, controlling for linear trends, seasonality, and time-invariant port, country, and product characteristics. We also controlled for exchange rates in some specifications. For more information and results, see appendix II. To identify and evaluate ways DOT’s current freight-related efforts support cargo movement through ports and whether these efforts could be improved, we gathered information on an array of topics related to cargo moving through ports and relevant federal efforts to support this movement. We focused on DOT initiatives and programs but also examined efforts of the Department of Commerce, the Department of Agriculture (USDA), and the Federal Maritime Commission (FMC), and reviewed surface transportation legislation. DOT programs and initiatives we reviewed included the activities of the Maritime Administration within DOT, the Office of Freight in the Federal Highway Administration, and the Bureau of Transportation Statistics and freight policy activities within DOT’s Office of the Secretary. We also looked into activities within Commerce’s International Trade Administration, especially the Advisory Committee on Supply Chain Competitiveness; USDA’s role in agricultural inspections at ports and promoting agricultural exports; and FMC’s efforts to address port efficiency, especially the Supply Chain Innovation Initiative. We did not evaluate activities outside of DOT. After compiling a list of relevant efforts, we interviewed program officials and reviewed program documentation to understand the nature and scope of these efforts. We reviewed selected literature to identify areas others have noted as needing attention. 
We reviewed literature and reports from the National Freight Advisory Committee, the American Association of State Highway and Transportation Officials, the I-95 Corridor Coalition, the Transportation Research Board, and others. We also interviewed federal officials responsible for relevant aspects of federal transportation and trade policies and programs, as well as industry associations, to gain an understanding of areas that could be improved. We interviewed selected transportation and logistics experts, such as economists with the Brookings Institution and Global Insight, who had conducted relevant work or were known experts in the issues raised in our work. Additionally, during the interviews we conducted for the other engagement objectives, we asked about areas in need of federal attention and how well current efforts were working. Through these activities, we identified supply chain information (e.g., freight data for quantitative analysis of trends as well as qualitative information on market trends or dynamics) as an area broadly recognized as needing improvement and one that DOT is in a position to address. To find ways for these efforts to be improved, we reviewed various criteria that had been used in prior GAO reports on freight or related issues. We focused our attention on whether DOT had good information available for decision making, an important factor in leading practices in capital planning as articulated in GAO’s Executive Guide on Capital Decision-Making and its Federal Internal Control Standards. These practices emphasize the importance of good information and information systems, among other factors, to support sound decision making. We reviewed the draft National Freight Strategic Plan and other DOT efforts to determine whether DOT had a defined, written strategy for supply chain information because stakeholders we interviewed during our work identified this as an area in need of improvement. 
We conducted this performance audit from July 2015 to October 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Key Entities Involved in Cargo Movement to and from Ports and Their Role in the Supply Chain

The ports of Los Angeles, Long Beach, Oakland, Seattle, and Tacoma are "landlord" ports, in that they lease their property and infrastructure to marine terminal operators. Port authorities may also operate port facilities, whereby they act as the marine terminal operator. For example, the port authorities of San Diego and Houston own and operate marine terminals.

This appendix describes the analyses we conducted to assess whether trade flows through West Coast ports during the port disruption that occurred in late 2014 and early 2015 appeared to be discernably different from those in other quarters included in our analysis. West Coast ports account for a large share of U.S. trade. For example, in 2015, West Coast ports handled 23.2 percent of U.S. vessel exports ($118.7 billion out of a total of $512.6 billion in vessel exports) and 40.5 percent of vessel imports ($425.7 billion out of a total of about $1.1 trillion in vessel imports); that is, in 2015, the West Coast ports handled almost 35 percent of more than $1.6 trillion in total trade. Moreover, large West Coast ports—Los Angeles, Long Beach, Oakland, Seattle, and Tacoma—handled 81.4 percent of West Coast vessel exports ($96.7 billion out of $118.7 billion) and 89.8 percent of vessel imports ($382.2 billion out of $425.7 billion); that is, large West Coast ports handled 88.0 percent of total West Coast port volume in 2015. 
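The coast-level shares cited above follow directly from the dollar figures; a minimal Python check, using only the rounded billion-dollar values from the text, is:

```python
# Rounded 2015 vessel-trade values from the text, in billions of dollars.
wc_exports, us_exports = 118.7, 512.6          # West Coast vs. all U.S. vessel exports
large_exports, large_imports = 96.7, 382.2     # large West Coast ports, by direction
wc_imports = 425.7                             # all West Coast vessel imports

# Share of U.S. vessel exports handled by West Coast ports (~23.2 percent)
export_share = 100 * wc_exports / us_exports

# Share of total West Coast volume handled by the five large ports (~88.0 percent)
large_share = 100 * (large_exports + large_imports) / (wc_exports + wc_imports)

print(round(export_share, 1))   # 23.2
print(round(large_share, 1))    # 88.0
```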
As we have noted in this report, our audit work indicated that although this disruption occurred during a time frame when labor contracts with port workers at West Coast ports had lapsed, other factors also likely contributed to difficulties for importers and exporters at this time. Our work was designed to assess whether there were discernable trade flow anomalies during this time frame, but not to identify the specific cause of any such anomalies. This appendix discusses: (1) the conceptual framework of the analysis, (2) data sources for international trade data and the independent factors included in the model, and (3) model results for all trade and for the 23 unique selected products. During the later months of 2014 and into early 2015, many ports in the United States, but particularly on the West Coast, became highly congested, according to trade associations we spoke with. Our analysis was designed to assess the extent to which trade flows during this period were substantially different from those in other quarters included in our analysis after controlling for various factors. Specifically, our model examined quarterly data over an 11-year period and was designed to examine whether there was any discontinuity in trade patterns during the third and fourth quarters of 2014 or the first quarter of 2015, holding constant other independent factors, including economic trends and seasonal factors, as well as fixed effects for ports, country of origin/destination, and product classification. The time frame for our analysis was motivated by our discussions with stakeholders; for instance, a trade association indicated that the port difficulties became significant in fall 2014 and did not diminish until after the second quarter of 2015. We examined whether aggregate exports and imports had any discontinuous pattern in that time frame not only at the large West Coast ports as a group, but also at the large Gulf and large East Coast ports. 
For 23 unique selected products that accounted for substantial shares of either exports or imports at West Coast ports, we also examined whether the aggregate trade in each direction for those products indicated any discontinuity in the relevant time period. The primary data source for trade information was the U.S. Census Bureau international trade statistics. The U.S. Census Bureau collects import and export data primarily through electronic transmission and some forms that exporters and importers file with U.S. Customs and Border Protection and, in some instances, directly with the Census Bureau. The trade data provide both the dollar value and weight of trade flows. We used dollar value as the primary focus of our analysis, but alternatively used weight as a robustness check on our findings. The data available are fairly disaggregated and can be pulled from Census in a variety of ways. We used data that centered on activity at ports, and made various other decisions about how to assemble those data for the analysis: 1. Port: We accessed data that are available at the port level, meaning that information on trade flows is recorded based on the port of entry or exit. We made certain decisions as to which ports to use in the analysis and the extent of aggregation across ports. First, we determined that, in addition to analyzing trade flows at West Coast ports, we would also run the same analysis for ports on other U.S. coasts as a frame of comparison. As such, we used data on ports that were located on the West, East, or Gulf coasts, and then aggregated port traffic according to these three coastal groupings. In addition, since larger ports account for the vast majority of traffic, we included in our analysis only ports that accounted for at least 5 percent of 2012 directional trade on the relevant coast. 
That is, a port on the East Coast had to account for at least 5 percent of, for example, exports from East Coast ports in 2012 to be included in the export analysis of East Coast ports. This reduced the number of ports in the analysis considerably. For example, for the West Coast there were 40 ports in total for imports and 41 for exports, but after applying the 5 percent screen we found that only 5 West Coast ports exceeded the screen for both directions of trade. On the West Coast, these selected ports accounted for 86.4 percent of exports and 92.6 percent of imports in the region. Additionally, we focused solely on the ports with containerized vessel trade during our study period along each of the three coasts. We also separately conducted an analysis of trade through airports located on each of these coasts, for which a similar screening criterion was applied. 2. Direction of trade: We conducted separate analyses for imports and exports. Therefore, each record in the data set we developed was classified according to the directional trade flow it represented and included in the model accordingly. 3. Trading partners: The data available from the Census Bureau include information for trade between the United States and all countries for which there is any reported trade. However, we found that the majority of countries have little trade with the United States. For example, Census officials explained that in 2015, 50 countries made up 79 percent of import volume into the United States. Because each country's observation would weigh equally in the model, we determined that it would be appropriate to focus the analysis on the larger trading partners—that is, those countries that account for the majority of trade with each coastal region of the United States. Therefore, after reducing the number of ports based on the port screen described above, we imposed an additional screen for countries. 
Specifically, we included a country in our analysis only if its trade with the United States constituted at least 0.1 percent of either imports or exports of U.S. trade for the large ports on each coast. For example, for a country to be included in the export analysis from West Coast ports, at least 0.1 percent of exports through large West Coast ports needed to be destined for that country. Applying this screen reduced the number of countries included in the analysis considerably. In total, there were 239 countries in the full export dataset that had some trade with the United States and 237 in the full import dataset. After applying the screen in the export analysis, we reduced the number of countries to 44 for the West Coast region, 79 for the East Coast region, and 85 for the Gulf Coast. In 2012, these selected countries accounted for 97.4 percent of exports through the large West Coast ports, 97.5 percent through the large East Coast ports, and 97.6 percent through the large Gulf Coast ports. In the import analysis, we reduced the number of countries to 32 for the West Coast, 64 for the East Coast, and 71 for the Gulf Coast. In 2012, these selected countries accounted for 98.2 percent of imports through the large West Coast ports, 98.3 percent through the large East Coast ports, and 98.9 percent through the large Gulf Coast ports. 4. Level of Commodity Classification: Commodity information can be classified at various levels of aggregation. The six-digit commodity classification was the most disaggregated classification of commodities available in the Census Bureau's files we accessed. For example, the six-digit classification for "apples, fresh" is a component of the more aggregated two-digit "edible fruit and nuts; citrus fruit or melon peel" commodity group. Our primary analysis uses the two-digit commodity classification, of which there are 98 groups. 5. Time Frame: Census data were available by month. 
However, we aggregated data to the quarterly level for the analysis. We collected data beginning with the first quarter of 2005 and ending with the first quarter of 2016, which were the most recent data available when we conducted the analysis. Based on the data collected and the elements of aggregation described above, our data set was organized as follows: Import data: A file in which each record contains information for the dollar value of imports of a particular commodity at the two-digit level, coming into a particular U.S. port, during a particular quarter, which originated in a particular country: for example, the vessel dollar value of apparel and accessories, knit and crochet, imported through the port of San Francisco during the third quarter of 2010 that originated in Korea. Export data: A file in which each record contains information for the dollar value of exports of a particular commodity at the two-digit level, embarking from a particular U.S. port, during a particular quarter, destined for a particular country: for example, the vessel dollar value of edible fruits and nuts exported through the Port of Seattle during the second quarter of 2013 and destined for Japan. Our model attempts to examine whether there were any significant shifts in trade patterns during the 2014q3-2015q1 period, after controlling for the various factors that may also influence trade volume. That is, our model estimates any break in the levels of trade during the 2014q3-2015q1 period given (1) existing historic trends in trade growth over time, expressed in the model as a linear time trend, (2) seasonal impacts, (3) time-invariant port, country, and product characteristics, and, in some specifications, (4) exchange rates. 
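The port and country screens described in items 1 and 3 above can be sketched in pandas; the column names, function name, and threshold defaults here are illustrative assumptions, not taken from the actual processing code used for the analysis.

```python
import pandas as pd

def apply_screens(df, port_share=0.05, country_share=0.001, base_year=2012):
    """Apply the two screens described above to one coast/direction of trade.

    df is assumed to have one row per port/country/commodity/quarter cell,
    with columns: port, country, commodity2, year, quarter, value.
    """
    base = df[df["year"] == base_year]

    # Port screen: keep ports with at least 5 percent of the coast's
    # 2012 trade in this direction.
    port_totals = base.groupby("port")["value"].sum()
    big_ports = port_totals[port_totals / port_totals.sum() >= port_share].index
    df = df[df["port"].isin(big_ports)]

    # Country screen: keep countries with at least 0.1 percent of 2012
    # trade through the surviving large ports.
    base = base[base["port"].isin(big_ports)]
    country_totals = base.groupby("country")["value"].sum()
    big_countries = country_totals[
        country_totals / country_totals.sum() >= country_share
    ].index
    return df[df["country"].isin(big_countries)]
```

Each coast and direction of trade would be screened separately in this way before aggregating to the quarterly port/country/two-digit-commodity cells used in the model.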
Specifically, we estimated the following equation:

ln(y+1)_iptc = β0 + β1 PortDisruption_t + β2 trend_t + β3 Recession_t + β4 ER_ct + α_q + α_i + α_p + α_c + ε_iptc

where:

i denotes port, p denotes product category at the two-digit Harmonized System (HS) code level, t denotes quarter, and c denotes country of origin or destination.

ln(y+1)_iptc, the dependent variable, is the natural log of the dollar value of either exports or imports, plus one in order to account for zeroes in the data, passing through port i (which will be identified as being on one of the three coasts), for product category p, during quarter t, and coming from or destined for country c.

PortDisruption_t is a dummy variable designed to capture whether there was any shift in the volume of trade during the entire time frame of the port disruption—2014q3-2015q1—or, alternatively, during each of those three quarters separately as well as the quarters following the port disruption, up to and including 2016q1.

trend_t is a linear time trend that controls for trends in trade over time, the pattern of which is likely related to trends in overall economic activity and other factors that may influence the underlying pattern of trade growth.

Recession_t is an indicator equal to one during the quarters that correspond to the great recession, 2007q4-2009q2, to account for any changes in trade during that period related to the economic downturn.

α_q are quarter-of-the-year indicators to control for seasonality.

ER_ct is the exchange rate for country c in time t (included in only some specifications of the model).

α_i are port fixed effects, which control for time-invariant port characteristics. Such characteristics might include the products that are produced near the port that would drive elements of the trade it handles, management characteristics of the particular port, or other similar port-specific factors.

α_p are product category fixed effects, which control for time-invariant product characteristics, such as the underlying demand characteristics of the product in the United States or in other countries.

α_c are country fixed effects, which control for time-invariant country characteristics, such as the location of the country and its bilateral trade agreements with the United States.

ε_iptc is the error term.

The parameter of interest, β1, measures any break in the level of trade during the disruption period compared to other quarters during our study period, after controlling for the independent factors included in the model.

We estimated the equation above separately for exports and imports across ports with containerized cargo trade on each of the three coasts. For example, one estimation analyzed vessel exports from the West Coast ports, while another analyzed vessel imports into East Coast ports. As such, there are 6 different estimations to examine trade at ports with containerized cargo and an additional 6 estimations to examine trade at ports with air cargo in the three coast regions. The standard errors are clustered at the port level in order to account for serial correlation in trade for a given port over time. Table 7 displays the model's results for exports from ports with containerized cargo along each of the 3 coasts. We provide results for two model specifications. In the first, we test whether exports for the entire port disruption period—including the last two quarters of 2014 as well as the first quarter of 2015—appear to be significantly different along each of the coasts. The second examines the same issue, but rather than collapsing the time frame altogether, we model each of the three quarters separately as well as the subsequent quarters post-disruption to assess whether there were any abnormal trade patterns even after the port situation had abated. In table 7, we provide the coefficients directly from the regressions for the port disruption time frames in the odd-numbered columns of results for each coast, but in the even columns we provide the percentage change in exports our model would suggest if each of the variables in the model were increased by one. 
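Because the dependent variable is in logs, converting a regression coefficient to the percentage changes reported in the even-numbered columns follows the standard formula 100 × (e^β − 1). A minimal sketch (the coefficient values below are illustrative, chosen only because they imply declines of roughly 23.5 and 50 percent, the magnitudes discussed in the results):

```python
import math

def implied_pct_change(beta):
    """Percent change in trade implied by a coefficient from a
    log-level regression: 100 * (exp(beta) - 1)."""
    return 100 * (math.exp(beta) - 1)

# A log-point coefficient of about -0.268 corresponds to roughly a
# 23.5 percent decline; about -0.693 corresponds to roughly 50 percent.
print(round(implied_pct_change(-0.268), 1))   # -23.5
print(round(implied_pct_change(-0.693), 1))   # -50.0
```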
These results indicate that for the 3 quarters of the port disruption in aggregate, there appears to be a decline in the value of exports from West Coast ports relative to the level of exports in other, non-disruption quarters after controlling for the various factors in the model. In particular, from the regression results for the entire port disruption period, we find that exports appear to have been 23.5 percent lower during this period. This finding is significant at the 5 percent level. We find no statistically significant changes in exports from the other coasts for the entire 3-quarter period. When looking at the three quarters separately, our model suggests that the extent by which exports in each quarter of the disruption period differed from past quarters varied across the disruption period. Notably, we find no statistically significant changes in exports from the West Coast ports during the third quarter of 2014, and a weakly significant finding of reduced exports from those ports during the last quarter of 2014. However, during the first quarter of 2015 we find that exports from West Coast ports appear, on average across the port, commodity, and trading partner observations, to have been substantially lower than in past quarters, by roughly 50 percent. This finding is statistically significant at the 1 percent level. In addition, while we do not find any unusual changes in exports from East Coast ports during this time frame, it does appear that exports were lower at Gulf Coast ports during late 2014, and again at these ports in the later part of 2015 and early 2016, compared to other quarters included in the analysis. Finally, results for the control variables are generally as expected—exports have tended to rise over time, and each group of fixed effects for port, country, and product is jointly statistically significant. Table 8 displays the model results for imports from ports with containerized cargo along each of the 3 coasts. 
As above, we provide results for two model specifications—the first examines imports for the entire port disruption period and the second assesses imports for each of the three quarters separately as well as the quarters post-disruption. As shown in table 8, we did not find any indication that imports into West Coast ports during the port disruption were statistically different from import levels in the other quarters, after controlling for the various factors in the model. In addition, we found no evidence of unusual changes in trade flows at East Coast ports during this time frame. However, we did find that imports at Gulf Coast ports were higher during the West Coast port disruption time frame, as well as for every quarter we examined thereafter, compared to other quarters included in the analysis. This may suggest that some factor not accounted for in the model was leading to increased imports in the Gulf region during and after the disruption period. Table 9 provides results for our regression analysis for the 2 distinct sets of selected commodities—13 commodities that are major imported goods at West Coast ports, and 14 commodities that are major exported goods from West Coast ports. For each direction of trade, in turn, we aggregated the dollar value of trade flows at the four-digit commodity level for the specific products that fell under these categories and ran the model on this subset of the trade data. Our findings for these commodities align with our findings for total trade flows discussed above. Notably, we found that during the entire time frame of the port disruption, there appears to have been a statistically significant reduction in exports, but that reduction was not experienced equally across the 3 quarters. 
In the case of the 14 export commodities combined, we found no statistically significant reduction in trade in the last two quarters of 2014, but by the first quarter of 2015 the dollar value of these exports, on average across the port, commodity, and trading partner observations, appears to be about half of the levels in other quarters in the analysis after controlling for the various factors in the model. Additionally, for these commodities we found that exports remained below past levels in the second quarter of 2015, which would align with stakeholders' views expressed to us that the port difficulties took some time to ameliorate in the winter of 2015 and trade was affected into the second quarter. We find no evidence that imports at West Coast ports showed any unusual change during any part of the port disruption period. Table 10 displays the model results for air exports and imports for each of the three coasts. Panel A shows the percentage change in exports and panel B shows the percentage change in imports for each region for the three quarters combined, column (1), as well as for each of the quarters separately, columns (2)-(8). As shown in table 10, we did not find any indication that exports during the disruption were statistically different from past export levels at airports on any of the three coasts. However, we did find that imports were significantly higher during each of the quarters of the disruption period for the West Coast but not for the other regions. As the results show, air imports to the West Coast remained higher than past levels even after the disruption time frame, suggesting that the increase might have been part of a more general change in trend. We conducted a variety of analyses to assess the robustness of our model results. 
In particular:

Timing: In alternative specifications, we conducted falsification tests under which we ran a separate regression with an indicator for each of the quarters between 2010q1 and 2014q3, along with our baseline controls, to test whether there was a change in trade patterns in any of those quarters. We did not find any significant changes in any of the quarters after 2010q1 and before the disruption.

Cargo weight: The base case analysis used the dollar value of shipments as the measurement of the extent of trade. We alternatively used cargo weight to gauge the extent of trade. Results were stable in this alternative analysis.

Non-agricultural products: We ran the analysis on the West Coast ports using only non-agricultural products, and the results were similar to the model when both agricultural and non-agricultural products were included. Thus, it does not seem that the results were driven by shocks to the agricultural sector, such as weather shocks.

Alternative trend analysis: Two of the independent variables included in the models are designed to capture elements of how macroeconomic conditions may influence trade flows over time. The first of these variables is the linear time trend, and the second is the dummy variable for the quarters associated with the recession that began in late 2007 and ended in mid-2009. In an alternative specification of the model, we excluded these two variables and instead included a measure of U.S. quarterly GDP to reflect the health of the economy over time. This specification resulted in the same general outcomes for our variable of interest. That is, we found exports from West Coast ports to be significantly lower than in other quarters during the port disruption but found no such results for imports into those ports.

Alternative specification for port-level trends: In the original analysis, the time trend was assumed to be the same for all ports. In an alternative specification, we allowed this linear time trend in trade to vary by port—that is, the time trend could be different across ports through the time frame of our data. This alteration did not have any effect on the core findings of the model.

Inclusion of exchange rates: The base-case analysis did not control for exchange rates between the United States and each of the trading countries. In an alternative specification, we included these data. Model results were not affected by the inclusion of exchange rates.

Exclusion of observations with zero value: In our main analysis, the dependent variable was log(y+1), where y was either exports or imports, in order to include observations with zero value in the analysis. In an alternative sample, any port/commodity/country observation that had zero vessel value at any point during our study period was excluded, and thus we ended up with a balanced panel with only non-zero values. The dependent variable in this analysis was log(y), since all observations were non-zero. This modification did not affect the main findings.

As we have noted above, the results are not meant to disentangle the cause of any changes in trade patterns during the 2014q3-2015q1 period. The analysis is meant to establish whether there were any changes in trade patterns during this time period after accounting for linear time trends and seasonality patterns, as well as port, product, and country fixed effects. There could be other factors that we are not accounting for, such as economic shocks to trading partners or industry-level shocks, that could have affected trade through the various regions during the disruption period.

In addition to the individual named above, Sharon Silas (Assistant Director), John Stambaugh (Analyst in Charge), Amy Abramowitz, Lilia Chaidez, Ming Chen, Leia Dickerson, Delwen Jones, Jessica Lewis, Maureen Luna-Long, SaraAnn Moessbauer, Josh Ormond, Cheryl Peterson, and Friendly Vang-Johnson made key contributions to this report. 
The following are GAO products pertinent to the issues discussed in this report. Other products may be found at GAO's website at www.gao.gov.

U.S. Border Communities: Ongoing DOT Efforts Could Help Address Impacts of International Freight Rail. GAO-16-274. Washington, D.C.: January 28, 2016.

Hurricane Sandy: An Investment Strategy Could Help the Federal Government Enhance National Resilience for Future Disasters. GAO-15-515. Washington, D.C.: July 30, 2015.

Surface Transportation: DOT Is Progressing toward a Performance-Based Approach, but States and Grantees Report Potential Implementation Challenges. GAO-15-217. Washington, D.C.: January 16, 2015.

Freight Transportation: Developing National Strategy Would Benefit from Added Focus on Community Congestion Impacts. GAO-14-740. Washington, D.C.: September 19, 2014.

Maritime Infrastructure: Opportunities Exist to Improve the Effectiveness of Federal Efforts to Support the Marine Transportation System. GAO-13-80. Washington, D.C.: November 13, 2012.

Intercity Passenger and Freight Rail: Better Data and Communication of Uncertainties Can Help Decision Makers Understand Benefits and Trade-offs of Programs and Policies. GAO-11-290. Washington, D.C.: February 24, 2011.

Surface Freight Transportation: A Comparison of the Costs of Road, Rail, and Waterways Freight Shipments That Are Not Passed on to Consumers. GAO-11-134. Washington, D.C.: January 26, 2011.

Statewide Transportation Planning: Opportunities Exist to Transition to Performance-Based Planning and Federal Oversight. GAO-11-77. Washington, D.C.: December 15, 2010.

Surface Transportation: Restructured Federal Approach Needed for More Focused, Performance-Based, and Sustainable Programs. GAO-08-400. Washington, D.C.: March 6, 2008.

Freight Transportation: National Policy and Strategies Can Help Improve Freight Mobility. GAO-08-287. Washington, D.C.: January 7, 2008.

Freight Transportation: Strategies Needed to Address Planning and Financing Limitations. GAO-04-165. Washington, D.C.: December 19, 2003.

Marine Transportation: Federal Financing and a Framework for Infrastructure Investments. GAO-02-1033. Washington, D.C.: September 9, 2002.
U.S. West Coast ports are critical to the national transportation freight network and global supply chains. Changes in global shipping and disruptions at ports can create congestion and economic hardship for shippers with resulting effects throughout supply chains. The 2015 Fixing America's Surface Transportation Act provides freight policy goals, including increasing U.S. economic competitiveness; reducing freight congestion; and improving the safety, reliability, and efficiency of the freight network. The act also established new DOT freight funding programs. This report addresses: (1) how major U.S. West Coast ports have responded to recent changes in global shipping; (2) how selected shippers have been impacted by and responded to a recent port disruption, and (3) how DOT's efforts support port cargo movement and whether they can be improved. GAO conducted case studies of the three major port regions on the West Coast; interviewed key stakeholders—such as port authorities and state and local transportation agencies—for each region and 21 industry representatives, and evaluated DOT's freight efforts relative to criteria on using quality information to support decision-making. Some infrastructure and operations at major West Coast ports are strained in the face of recent changes in global shipping, but port stakeholders are attempting to address these constraints. For example, as the shipping industry deploys larger vessels capable of delivering more cargo, some port terminals lack big enough cranes, or other infrastructure, needed to handle these vessels. All major West Coast ports have planned or completed port-related infrastructure projects and implemented operational changes. For example, in Long Beach, California, the Gerald Desmond Bridge is being heightened to enable larger vessels to pass underneath. Port stakeholders also noted that efforts to address constraints at ports can be hampered by competing priorities and limited data. 
For example, most state and local government officials said that having information on ports' performance and industry supply chains—the end-to-end process of producing and distributing a product or commodity from raw materials to the final customer—would be helpful to target efforts to address constraints at ports. Selected shippers were impacted by and responded to one recent port disruption in various ways. In July 2014, the labor agreement that covers most West Coast port workers expired and was not renewed until February 2015. During this period, as widely reported, ports remained open, but vessels backed up in harbors, and loading and unloading of cargo were delayed. In response to this disruption, 13 of 21 selected industry groups representing shippers of some of the top commodities moving through West Coast ports said at least some of their members modified their supply chains by, for example, diverting shipments to ports outside the West Coast or to alternate modes of transportation. All 13 said shippers' costs increased or revenues declined. Six industry groups said some members had difficulty altering shipping plans because of commodity attributes, such as perishability or prohibitive costs. The Department of Transportation's (DOT) freight-related activities are increasingly multi-modal and inclusive of ports, but gaps exist in the information available to DOT and state and local governments about important aspects of supply chains. For example, a 2015 DOT report notes that movements of international trade between ports and domestic origins for exports and domestic destinations for imports are not measured. This report further states that this information could help DOT to assess international trade flows within the United States and strengthen the role of freight transportation in U.S. economic competitiveness. 
Federal guidance and leading practices in capital planning emphasize that good information is essential to sound decision making and achieving agency objectives. A few current DOT initiatives may help address some information gaps, but they are in the early stages. DOT has also articulated the need for supply chain information in its draft National Freight Strategic Plan, but does not outline how DOT will obtain this information or how it will be used. Based on a 2014 GAO recommendation, DOT is in the early stages of developing a written freight data strategy to improve the availability of national data on freight trends, among other things. Broadening its freight data strategy to include supply chain information could help DOT to think more strategically about the specific supply chain information needed to support its freight efforts and advance national freight policy goals. In developing a freight data strategy, DOT should identify what supply chain information is needed, potential sources of that information, data gaps, and how it intends to use this information to inform freight efforts. DOT concurred with the recommendation.